2302.00890
Neural Common Neighbor with Completion for Link Prediction
In this work, we propose a novel link prediction model and further boost it by studying graph incompleteness. First, we introduce MPNN-then-SF, an innovative architecture leveraging structural feature (SF) to guide MPNN's representation pooling, with its implementation, namely Neural Common Neighbor (NCN). NCN exhibits superior expressiveness and scalability compared with existing models, which can be classified into two categories: SF-then-MPNN, augmenting MPNN's input with SF, and SF-and-MPNN, decoupling SF and MPNN. Second, we investigate the impact of graph incompleteness -- the phenomenon that some links are unobserved in the input graph -- on SF, like the common neighbor. Through dataset visualization, we observe that incompleteness reduces common neighbors and induces distribution shifts, significantly affecting model performance. To address this issue, we propose to use a link prediction model to complete the common neighbor structure. Combining this method with NCN, we propose Neural Common Neighbor with Completion (NCNC). NCN and NCNC outperform recent strong baselines by large margins, and NCNC further surpasses state-of-the-art models in standard link prediction benchmarks. Our code is available at https://github.com/GraphPKU/NeuralCommonNeighbor.
Xiyuan Wang, Haotong Yang, Muhan Zhang
2023-02-02T05:45:09Z
http://arxiv.org/abs/2302.00890v4
# Neural Common Neighbor with Completion for Link Prediction

###### Abstract

Despite its outstanding performance in various graph tasks, the vanilla Message Passing Neural Network (MPNN) usually fails in link prediction tasks, as it only uses representations of the two individual target nodes and ignores the pairwise relation between them. To capture pairwise relations, some models add manual features to the input graph and use the output of MPNN to produce pairwise representations. Others instead use manual features directly as pairwise representations. Though this simplification avoids applying a GNN to each link individually and thus improves scalability, these models still have much room for performance improvement because their pairwise features are hand-crafted and unlearnable. To improve performance while maintaining scalability, we propose Neural Common Neighbor (NCN), which uses learnable pairwise representations. To further boost NCN, we study the unobserved link problem. The incompleteness of the graph is ubiquitous and leads to distribution shifts between the training and test sets, loss of common neighbor information, and performance degradation of models. Therefore, we propose two intervention methods: common neighbor completion and target link removal. Combining the two methods with NCN, we propose Neural Common Neighbor with Completion (NCNC). NCN and NCNC outperform recent strong baselines by large margins. NCNC achieves state-of-the-art performance in link prediction tasks. Our code is available at [https://github.com/GraphPKU/NeuralCommonNeighbor](https://github.com/GraphPKU/NeuralCommonNeighbor).

Machine Learning, ICML

## 1 Introduction

Link prediction is a crucial task in graph machine learning. It has various real-world applications, such as recommender systems (Zhang & Chen, 2020), knowledge graph completion (Zhu et al., 2021), and drug interaction prediction (Souri et al., 2022). Graph neural networks have been used for link prediction. Among these GNNs, Graph Autoencoder (GAE) (Kipf & Welling, 2016) is a representative method, which uses the representations of the two target nodes produced by a Message Passing Neural Network (MPNN) (Gilmer et al., 2017) to predict the existence of the link. GAE achieves good performance on some datasets. However, traditional link prediction heuristics, including Common Neighbor (CN) (Barabasi & Albert, 1999), Resource Allocation (RA) (Zhou et al., 2009), and Adamic-Adar (AA) (Adamic & Adar, 2003), can sometimes outperform GAE/MPNN by large margins (Zhang et al., 2021). Therefore, recent works have been trying to boost GNNs for link prediction (Zhang & Chen, 2018; Zhu et al., 2021; Yun et al., 2021; Chamberlain et al., 2022).

Zhang et al. (2021) notice that GAE only uses node representations and ignores pairwise relations between target nodes. For example, in Figure 1, MPNN produces exactly equal representations for nodes \(v_{2}\) and \(v_{3}\) as they are symmetric in the graph, so GAE produces the same prediction for the two links \((v_{1},v_{2})\) and \((v_{1},v_{3})\). However, their pairwise relations are different: \(v_{1}\) and \(v_{2}\) have a common neighbor \(v_{4}\), while \(v_{1}\) and \(v_{3}\) do not have any. This suggests that the key to boosting MPNNs is capturing pairwise relations.

Figure 1: The failure of MPNN in the link prediction task. \(v_{2}\) and \(v_{3}\) have equal MPNN node representations due to symmetry. However, with different pairwise relations, \((v_{1},v_{2})\) and \((v_{1},v_{3})\) should have unequal representations.

Existing works boosting MPNNs vary in how they capture pairwise representations. SEAL (Zhang & Chen, 2018)
adds target-link-specific hand-crafted features to the node features and modifies the input graph of MPNN, whose output node representations are then pooled to produce pairwise representations. Though it outperforms GAE significantly, SEAL has to rerun MPNN on a different graph for each target link, leading to high computation overhead. To accelerate it, Neo-GNN (Yun et al., 2021) and BUDDY (Chamberlain et al., 2022) decouple the pairwise representations from node representation learning. They directly use manual features as pairwise representations and only run MPNN on the original graph. Therefore, these models need to run MPNN only once for all target links and scale much better. However, their pairwise representations are oversimplified and still have much room for improvement.

To improve performance, we propose Neural Common Neighbor (NCN). Similar to Neo-GNN and BUDDY, NCN runs MPNN on the original graph to obtain node representations. However, instead of using hand-crafted features, NCN uses a learnable and flexible pairwise representation, which sums the common neighbors' node representations produced by MPNN. In experiments, NCN maintains scalability and outperforms existing models.

As our second contribution, we analyze how the incompleteness of the input graph affects link prediction. Incompleteness of the input graph is ubiquitous in link prediction, as the task itself is to predict unobserved edges that do not exist in the input graph. In this work, we empirically find that incompleteness leads to a distribution shift between the training and test sets and to loss of common neighbor information. We intervene in the incompleteness to solve this problem. Specifically, we focus on two graph structural properties crucial to NCN, namely common neighbors and the existence of the target link, and propose two intervention methods: Common Neighbor Completion (CNC) and Target Link Removal (TLR). CNC iteratively completes unobserved links with a link prediction model. TLR removes the target links from the input graph. In experiments, our intervention methods further improve the performance of NCN.

In conclusion, our contributions are as follows:

* We propose Neural Common Neighbor (NCN) to boost link prediction with learnable pairwise representations. NCN outperforms existing models and maintains scalability.
* We analyze how graph incompleteness hampers link prediction models. To alleviate the unobserved link problem, we intervene in the incompleteness and propose two methods: target link removal and common neighbor completion.
* With these methods, we further improve NCN and propose Neural Common Neighbor with Completion (NCNC). NCNC achieves state-of-the-art performance in link prediction tasks.

## 2 Preliminaries

We consider an undirected graph \(\mathcal{G}=(V,E,A,X)\), where \(V=\{1,2,\ldots,n\}\) is the set of \(n\) nodes, \(E\subseteq V\times V\) is the set of edges, \(X\in\mathbb{R}^{n\times F}\) is the node feature matrix whose \(v\)-th row \(X_{v}\) is the feature of node \(v\), and the adjacency matrix \(A\in\mathbb{R}^{n\times n}\) is a symmetric matrix defined as follows:

\[A_{uv}=\begin{cases}1&\text{ if }(u,v)\in E,\\ 0&\text{ otherwise,}\end{cases} \tag{1}\]

where the \(1\) can also be replaced with an edge weight. The _degree_ of node \(u\) is \(d(u,A):=\sum_{v=1}^{n}A_{uv}\).
Node \(u\)'s neighbors are the nodes connected to \(u\): \(N(u,A):=\{v\,|\,v\in V,A_{uv}>0\}\). For simplicity of notation, we use \(N(u)\) to denote \(N(u,A)\) when \(A\) is fixed. The _common neighbors_ of \(i\) and \(j\) are the nodes connected to both: \(N(i)\bigcap N(j)\).

**High-Order Neighbors of a Graph.** We define \(A^{l}\) as the adjacency matrix of the _high-order graph_, where \(v\) is a neighbor of \(u\) if there is a walk of length \(l\) between them in the original graph. \(N(u,A^{l})=\{v\,|\,v\in V,A^{l}_{uv}>0\}\) denotes the set of \(u\)'s neighbors in the high-order graph. \(N_{l}(u,A)\) denotes the set of nodes whose shortest path distance to \(u\) in graph \(A\) is \(l\). Existing works define _high-order neighbors_ as either \(N(u,A^{l})\) or \(N_{l}(u,A)\). Most generally, the neighborhood of \(u\) can be expressed as \(N_{l_{1}}(u,A^{l_{2}})\), the set of all nodes with shortest path distance \(l_{1}\) to \(u\) in the high-order graph \(A^{l_{2}}\). For simplicity of notation, we use \(N^{l_{2}}_{l_{1}}(u)\) to denote \(N_{l_{1}}(u,A^{l_{2}})\) when \(A\) is fixed, and let \(N^{l}(u)=N^{l}_{1}(u)\), \(N_{l}(u)=N^{1}_{l}(u)\). Given a target link \((i,j)\), their neighborhood overlap is \(N^{l_{2}}_{l_{1}}(i)\bigcap N^{l^{\prime}_{2}}_{l^{\prime}_{1}}(j)\), and their neighborhood difference is \(N^{l_{2}}_{l_{1}}(i)-N^{l^{\prime}_{2}}_{l^{\prime}_{1}}(j)\).

**Message Passing Neural Network.** The message passing neural network (MPNN) (Gilmer et al., 2017), composed of message passing layers, is a common GNN framework. The \(k^{\text{th}}\) layer (\(k=1,2,\ldots,K\)) can be formulated as follows:

\[\mathbf{h}_{v}^{(k)}=U^{(k)}\big(\mathbf{h}_{v}^{(k-1)},\text{AGG}(\{M^{(k)}(\mathbf{h}_{v}^{(k-1)},\mathbf{h}_{u}^{(k-1)})\,|\,u\in N(v)\})\big), \tag{2}\]

where \(\mathbf{h}_{v}^{(k)}\) is the representation of node \(v\) at the \(k^{\text{th}}\) layer, \(U^{(k)},M^{(k)}\) are functions such as multi-layer perceptrons (MLPs), and AGG denotes an aggregation function like sum or max. For each node, a message passing layer aggregates information from neighbors to update the representation of the node. The initial node representation \(\mathbf{h}_{v}^{(0)}\) is the node feature \(X_{v}\). The node representations produced by MPNN are the output of the last message passing layer: \(\text{MPNN}(v,A,X)=\mathbf{h}_{v}^{(K)}\).

## 3 Related Work

### Link Prediction Heuristics

Research on developing hand-crafted graph structure features (heuristics) for link prediction long predates GNN methods (Liben-Nowell & Kleinberg, 2003). The heuristics most relevant to our paper are Common Neighbor (CN) (Barabasi & Albert, 1999), Resource Allocation (RA) (Zhou et al., 2009), and Adamic-Adar (AA) (Adamic & Adar, 2003). Given a graph and a link \((i,j)\) in it, they produce a score \(S(i,j)\) by pooling representations of the common neighbors of \(i\) and \(j\); links with higher scores are more likely to exist. Equation (3) shows the score functions used by these heuristics:

\[S_{\text{CN}}(i,j) =\sum_{u\in N(i)\bigcap N(j)}1, \tag{3a}\]
\[S_{\text{RA}}(i,j) =\sum_{u\in N(i)\bigcap N(j)}\frac{1}{d(u)}, \tag{3b}\]
\[S_{\text{AA}}(i,j) =\sum_{u\in N(i)\bigcap N(j)}\frac{1}{\log d(u)}. \tag{3c}\]

Specifically, CN simply uses \(1\) as the node representation, while RA and AA further use node degree information and usually outperform CN.
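To make these heuristics concrete, the following is a minimal sketch of ours (not the authors' implementation) that computes the three scores in Equation (3) from a dense adjacency matrix; a sparse representation would be used in practice for large graphs.

```python
import math
import numpy as np

def heuristic_scores(A: np.ndarray, i: int, j: int):
    """CN, RA, and AA scores for link (i, j), following Equation (3)."""
    neighbors = lambda u: set(np.flatnonzero(A[u] > 0))   # N(u)
    degree = lambda u: A[u].sum()                         # d(u)
    common = neighbors(i) & neighbors(j)                  # N(i) ∩ N(j)

    s_cn = len(common)                                          # Eq. (3a)
    s_ra = sum(1.0 / degree(u) for u in common)                 # Eq. (3b)
    # Any common neighbor has degree >= 2, so log d(u) > 0 in Eq. (3c).
    s_aa = sum(1.0 / math.log(degree(u)) for u in common)
    return s_cn, s_ra, s_aa

# Toy graph in the spirit of Figure 1: v4 is a common neighbor of (v1, v2).
A = np.zeros((5, 5))
for u, v in [(0, 3), (1, 3), (2, 4)]:
    A[u, v] = A[v, u] = 1
print(heuristic_scores(A, 0, 1))  # CN = 1; RA = 0.5; AA = 1/log(2)
```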
### GNNs for Link Prediction

Another research direction uses graph neural networks for link prediction, of which Graph Autoencoder (GAE) (Kipf & Welling, 2016) is the representative method. It predicts an edge from the node embeddings produced by MPNN as follows:

\[\hat{A}_{ij}=\text{sigmoid}(\text{MPNN}(i,A,X)^{T}\text{MPNN}(j,A,X)), \tag{4}\]

where \(\hat{A}_{ij}\) is the existence probability of link \((i,j)\) predicted by GAE. Note that GAE ignores pairwise relations completely, as we have shown in Section 1 and Figure 1, so CN, RA, and AA outperform GAE on many datasets.

Recently, many methods have been proposed to help GNNs capture pairwise relations. SEAL (Zhang & Chen, 2018) adds target-link-specific distance features to the input graph of MPNN and outperforms GAE, CN, RA, and AA significantly. However, SEAL suffers from high computation overhead. Unlike GAE, which only needs to run MPNN once to predict all target links, SEAL must rerun MPNN for each target link. To improve scalability, Yun et al. (2021) and Chamberlain et al. (2022) directly add manual pairwise features to the node representations produced by MPNN instead of modifying the input graph of MPNN. Inspired by traditional heuristics, Neo-GNN (Yun et al., 2021) further utilizes high-order neighborhood overlaps and, moreover, uses a learnable node degree function instead of the fixed functions in RA and AA, which leads to significant gains over the heuristics. Another recent method, BUDDY (Chamberlain et al., 2022), further distinguishes the pairwise feature into an overlap feature and a difference feature but, like CN, ignores node degree information. Both Neo-GNN and BUDDY only run MPNN once and achieve computation overhead similar to GAE. However, one key limitation is that their pairwise features are hardly learnable and therefore inflexible. Additionally, they are completely decoupled from the node features/representations produced by the MPNN. These problems restrict their performance.

### Incompleteness of Graph

As the link prediction task is to predict unobserved edges, the observed graph is deemed to be incomplete. However, existing methods relying heavily on graph structures, such as CN, might behave very differently under different levels of graph completeness. Some existing works also notice this problem. Yang et al. (2022) analyze how unobserved links distort the evaluation score; they focus on metrics and benchmark design. In contrast, we study, for the first time, the performance degradation caused by incompleteness as well as how to alleviate it. Different from our setting, Das et al. (2020) study unobserved nodes and propose an inductive learning method. Their method uses no intervention and is completely different from ours.

## 4 Neural Common Neighbor

Recent link prediction models use manual and inflexible pairwise features, which restrict their performance and generalization. In this section, we first propose a more general framework, which includes the heuristics introduced in Section 3.1 as well as Neo-GNN and BUDDY from Section 3.2. Then, based on this framework, we propose a realization called Neural Common Neighbor (NCN) that replaces the manual features with an MPNN to achieve better capacity.

### A General Framework of Pairwise Features

Neo-GNN uses a higher-order pairwise feature, which is a weighted summation over different high-order neighborhoods, as shown in Equation (5):

\[s(i,j,A)=\sum_{l_{1}=1}^{l}\sum_{l_{2}=1}^{l}\beta^{l_{1}+l_{2}-2}z_{l_{1}l_{2}}(i,j,A), \tag{5}\]

where \(l\) and \(\beta\) are hyperparameters, and \(z_{l_{1}l_{2}}\) is the feature from the high-order neighborhoods \(N^{l_{1}}(i)\) and \(N^{l_{2}}(j)\), defined as follows.
\[z_{l_{1}l_{2}}(i,j,A)=\sum_{u\in N^{l_{1}}(i)\bigcap N^{l_{2}}(j)}A_{iu}^{l_{1}}A_{ju}^{l_{2}}f(d(u)), \tag{6}\]

where \(f\) is a learnable function of node degree \(d(u)\). The pairwise feature \(z_{l_{1}l_{2}}(i,j,A)\) pools degree features of the nodes within the high-order neighborhood overlap.

BUDDY further uses a high-order difference feature. Its pairwise feature is composed of \(k^{2}\) overlap features and \(2k\) difference features as follows:

\[\{a_{l_{1},l_{2}}(i,j)\,|\,l_{1},l_{2}\in[k]\}\bigcup\{b_{l}(i,j),b_{l}(j,i)\,|\,l\in[k]\}, \tag{7}\]

where \(k\) is a hyperparameter, and the functions \(a\) and \(b\) measure high-order neighborhood overlaps and differences, respectively, defined as follows:

\[a_{l_{1},l_{2}}(i,j) =\sum_{u\in N_{l_{1}}(i)\bigcap N_{l_{2}}(j)}1, \tag{8a}\]
\[b_{l}(i,j) =\sum_{u\in N_{l}(i)-\bigcup_{l^{\prime}=1}^{l}N_{l^{\prime}}(j)}1. \tag{8b}\]

How to use the pairwise feature is the key to link prediction models. Let us focus on the form of the pairwise feature: the score function \(S(i,j,A)\) in heuristics, the high-order overlap feature \(z_{l_{1}l_{2}}(i,j,A)\) in Neo-GNN, and the high-order overlap feature \(a_{l_{1},l_{2}}\) and difference feature \(b_{l}\) in BUDDY. All these features can be summarized into the following general framework:

\[\sum_{u\in N_{l_{1}}^{l_{2}}(i)\oplus N_{l^{\prime}_{1}}^{l^{\prime}_{2}}(j)}g(A_{iu}^{l_{2}})\,g(A_{ju}^{l^{\prime}_{2}})\,f(d(u)), \tag{9}\]

where \(N_{l_{1}}^{l_{2}}(i)\) and \(N_{l^{\prime}_{1}}^{l^{\prime}_{2}}(j)\) denote the general neighborhoods of \(i\) and \(j\) respectively, \(\oplus\) is a set operator, and \(f,g\) are node (degree) and weight functions, respectively. Table 1 shows how the general framework includes the three heuristics CN, RA, and AA as well as the two GNN models, Neo-GNN and BUDDY. Models using high-order neighbors are called _high-order models_, while _first-order models_ use first-order neighbors only.

### Neural Common Neighbor

Now we introduce Neural Common Neighbor (NCN). We first notice that one of the main differences between existing models is the node function \(f\). However, existing models all use inflexible constant or degree functions, which fail to capture more refined node features such as multi-hop structure and attribute information. Therefore, we propose to use MPNN to replace \(f\), which leads to the following pairwise feature:

\[\sum_{u\in N_{l_{1}}^{l_{2}}(i)\oplus N_{l^{\prime}_{1}}^{l^{\prime}_{2}}(j)}g(A_{iu}^{l_{2}})\,g(A_{ju}^{l^{\prime}_{2}})\,\text{MPNN}(u,A,X). \tag{10}\]

MPNN is a more powerful and flexible feature extractor than manually designed degree-based feature functions. Theoretically, because MPNN can fit arbitrary degree functions \(f(d(u))\) (Xu et al., 2019), this model is _strictly more powerful_ than the heuristics, Neo-GNN, and BUDDY. The next question is the selection of the neighborhood and the set operator \(\oplus\). Surprisingly, we find the improvement given by explicit higher-order neighbors is marginal after we introduce the MPNN (see Section 6.3). We believe this is because the higher-order information can be implicitly learned by the MPNN. Considering scalability, we choose to keep only the first-order neighbors and set the operator to intersection only, which leads to our NCN model:

\[\text{NCN}(i,j,A,X)=\sum_{u\in N(i)\bigcap N(j)}\text{MPNN}(u,A,X), \tag{11}\]

where \(g(A_{iu})\) and \(g(A_{ju})\) are constants and thus omitted. As our first contribution, NCN is a simple but powerful model to capture pairwise features. It is an implicitly high-order model: it aggregates first-order common neighbors, each of which has its higher-order information implicitly learned by an MPNN. This design effectively controls the model's time complexity, keeping it close to Neo-GNN and BUDDY while providing a more powerful and flexible feature extractor. More analysis of time complexity can be found in Appendix C.
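As a rough illustration of Equation (11), the sketch below pools precomputed MPNN embeddings over the common neighbors of a target link, with the GAE score of Equation (4) shown for contrast. This is a simplified sketch of ours; in particular, the MLP head mapping the pooled feature to a link probability is an assumed detail, not the paper's exact architecture.

```python
import torch

def ncn_pairwise(h: torch.Tensor, A: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """Equation (11): sum MPNN representations over common neighbors of (i, j).

    h: [n, d] node embeddings with h[u] = MPNN(u, A, X), computed once for all
    target links; A: [n, n] dense 0/1 adjacency (sparse in practice).
    """
    common = (A[i] > 0) & (A[j] > 0)   # indicator of N(i) ∩ N(j)
    return h[common].sum(dim=0)        # [d] learnable pairwise representation

def gae_score(h: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """Equation (4) for contrast: GAE ignores the common-neighbor structure."""
    return torch.sigmoid(h[i] @ h[j])

# The NCN feature is then fed to a prediction head; a hypothetical example:
#   p_ij = torch.sigmoid(mlp(ncn_pairwise(h, A, i, j)))
```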
\begin{table} \begin{tabular}{c c c c c c} \hline \hline & Model & Neighbor selection & Set operator \(\oplus\) & \(g(x)\) & \(f(x)\) \\ \hline \multirow{3}{*}{Heuristics} & CN & First-order & Intersection & \(1\) & \(1\) \\ & RA & First-order & Intersection & \(1\) & \(1/x\) \\ & AA & First-order & Intersection & \(1\) & \(1/\log(x)\) \\ \hline \multirow{3}{*}{GNN} & Neo-GNN & High-order & Intersection & \(x\) & MLP \\ & BUDDY & High-order & Intersection \& Difference & \(1\) & \(1\) \\ \cline{1-1} & **NCN** & First-order & Intersection & \(1\) & **MPNN** \\ \hline \hline \end{tabular} \end{table} Table 1: How the general framework shown in Equation (9) includes several models.

## 5 Neural Common Neighbor with Completion

Graph incompleteness is ubiquitous in link prediction tasks, as the task is to predict unobserved edges. However, few works have studied this problem. This section first shows that incompleteness may lead to distribution shifts between the training and test sets, loss of common neighbor information, and therefore performance degradation of models. Then, we intervene in the incompleteness and propose two methods, namely **common neighbor completion** and **target link removal**. With these intervention methods, we further improve NCN and propose Neural Common Neighbor with Completion (NCNC).

### Incompleteness Visualization

We treat the graph containing only the training-set edges as the input _incomplete_ graph, and the graph containing the training, validation, and test edges as the ground-truth _complete_ graph. We focus on the following questions: _(1) What is different between the complete and incomplete graphs, and (2) does the difference lead to performance degradation?_

To answer these questions, we investigate two common datasets: ogbl-collab (Hu et al., 2020) and Cora (Yang et al., 2016). Because common neighbor information is crucial for link prediction as well as for our NCN model, we visualize the distribution of the number of common neighbors of training/test edges in the complete/incomplete graphs separately. First, by comparing the blue and green lines in Figure 2(a), we find a significant _distribution shift_ between the training and test sets in the incomplete graph of the ogbl-collab dataset, and the shift disappears when the graph is complete (the red and orange lines), which suggests this shift is due to the incompleteness. Such a significant distribution shift between training and test links in the input graph might cause **difficulty in model generalization**. In contrast, there is no such distribution shift between training and test edges in Cora (Figure 2(c)). The reason could be the different dataset split methods. Ogbl-collab splits the training and test edges according to edge timestamps, and the edges of the test set are all in the same year. Thus, edges in the test set have stronger correlations with other test edges than with edges in the training set. Therefore, the test edges may have fewer common neighbors in the incomplete graph than the training edges.
On the contrary, the Cora dataset randomly chooses the test set and thus avoids the distribution shift. Another phenomenon is the decrease in test edges' common neighbors in the incomplete graph, which can be seen from the blue and green lines in Figure 2(c). Comparing the incomplete and complete graphs for the same training/test set, there are fewer common neighbors in the incomplete graph, which indicates _loss of common neighbor information_ due to the incompleteness. In conclusion, for the first question, the incompleteness could lead to at least the following two problems:

* **Distribution shift**: with certain dataset splits, there could be a distribution shift between the training and test sets due to the incompleteness.
* **Loss of common neighbor information**: there could be fewer common neighbors in the incomplete graph.

**Remark.** On the one hand, we acknowledge that there is no guarantee that these two phenomena fully account for the difference between complete and incomplete graphs. On the other hand, these two problems do not necessarily appear in every dataset, and their significance can vary across datasets according to the data type, generation method, data split, and so on.

To verify whether the incompleteness causes **performance degradation** of link prediction models, we evaluate CN by ranking the training/test edges against the negative test links, under both complete and incomplete graphs, on both datasets. We also use Hits@K metrics as in (Chamberlain et al., 2022). Though CN is not learnable and does not involve generalization, it has no data leakage problem, as the CN computation for a target edge does not involve the edge itself. Moreover, CN can reveal how incompleteness changes the input graph structure for other, learnable models, so it is a good reference here.

Figure 2: Visualization of incompleteness on datasets. The incomplete graph only contains edges in the training set, and the complete graph further contains edges in the validation and test sets. (a) and (b) visualize the ogbl-collab dataset; (c) and (d) visualize the Cora dataset. (a) and (c) show distributions of the number of common neighbors of the training and test edges. (b) and (d) show the performance of CN on the training and test sets.

The performance is shown in Figure 2(b) and (d). On both datasets, CN's performance on test edges degrades considerably from the complete to the incomplete graph (green and orange bars), which verifies the performance degradation. This phenomenon indicates that we could have obtained a much better link prediction model if the input graph were more complete. Comparing the blue and green bars in Figure 2(b) (ogbl-collab) and Figure 2(d) (Cora), we see a huge performance gap between the training and test sets in the incomplete ogbl-collab, which disappears in the incomplete Cora. This aligns well with the _distribution shift_ problem discussed above, which is less significant under a random data split. In conclusion, the incompleteness problem leads to distribution shift and loss of common neighbor information and causes performance degradation of link prediction models. In practice, we can never really evaluate CN on the complete graph but only on the incomplete input graph. Thus, the above phenomena call for other alleviation methods, which will be discussed in the following subsections.
### Common Neighbor Completion

Motivated by the above analysis, our strategy is to first complete the input graph softly using a link prediction model (an intervention on the incompleteness), and then apply the model again on the more complete graph to give the final predictions. We use NCN as the link prediction model. To illustrate our model, we use the causal graphs (Pearl, 2009) in Figures 3(a) and 3(b). In Figure 3(a), \(\tilde{A}\) denotes the ground-truth complete graph, \(T_{ij}\) is a random variable determining link \((i,j)\)'s incompleteness (\(T_{ij}=1\) means link \((i,j)\) is missing from the complete graph), and \(\tilde{A},T_{ij}\) together determine the link existence \(A_{ij}\) in the input graph. In Figure 3(b), \(Y_{uij}\) is a random variable indicating whether node \(u\) is a common neighbor of \((i,j)\) in the input graph, which is determined by \(T_{iu},T_{ju},\tilde{A}\) together. To alleviate the incompleteness, we intervene in \(T\) by setting \(T_{iu}=T_{ju}=0\), so that \(A_{iu}=\tilde{A}_{iu},\ A_{ju}=\tilde{A}_{ju}\), i.e., we want to use the edges \((i,u)\) and \((j,u)\) in the complete graph to compute the common neighbor indicator \(Y_{uij}\). However, since \(\tilde{A}_{iu}\) and \(\tilde{A}_{ju}\) are unknown, we choose to predict them using NCN first. Instead of predicting a hard existence, we let NCN output a probability, and the final probability of \(u\) being a common neighbor of \((i,j)\) is modeled as follows:

\[P_{uij}=\begin{cases}1&\text{if }u\in N(i,A)\bigcap N(j,A),\\ \sigma(\text{NCN}(i,u,A,X))&\text{if }u\in N(j,A)-N(i,A),\\ \sigma(\text{NCN}(j,u,A,X))&\text{if }u\in N(i,A)-N(j,A),\\ 0&\text{otherwise,}\end{cases} \tag{12}\]

where \(\sigma\) is the sigmoid function. If both edges \((i,u)\) and \((j,u)\) are observed, \(u\) must be a common neighbor of \(i,j\). If one of \((i,u)\) and \((j,u)\) is unobserved, we use NCN to predict its link existence probability, which is also used as the probability of \(u\) being a common neighbor. The fourth case, where both \((i,u)\) and \((j,u)\) are unobserved, has a much lower probability, so we simply set the probability to \(0\). We call the above intervention trick Common Neighbor Completion (CNC). In fact, it can be combined with any link prediction method. After CNC, we apply NCN again to the "completed" graph, where the soft common neighbor weight \(P_{uij}\) is used, leading to the final model defined in the next subsection; a minimal code sketch of Equation (12) follows the figure caption below.

Figure 3: (a) Causal graph of the link existence and incompleteness. (b) Causal graph of the common neighbor and incompleteness. \(T\): edge incompleteness variable; \(T=0\): the edge is observed; \(T=1\): the edge is unobserved; \(T\) varies with the target link. \(\tilde{A}\): the complete graph. Edge incompleteness and the complete graph jointly determine the observed graph. \(A_{ab}\): the existence of link \((a,b)\) in the observed graph. \(Y_{uij}\): whether \(u\) is a common neighbor of \((i,j)\) in the observed graph; \(Y_{uij}=1\) iff both links \((i,u)\) and \((j,u)\) exist. (c) An example of our two intervention methods. \((i,j)\) is the target link. Solid lines are observed edges; dotted lines are links in the full graph affected by incompleteness. \((i,j)\) and \((u,j)\) are observed in the training set but unobserved in the test set. TLR (target link removal) removes the target link \((i,j)\). CNC (common neighbor completion) completes the observed graph and thus turns the dotted line \((u,j)\) into a solid line. With the two methods, the dotted lines are eliminated, and the incompleteness problem is alleviated.
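The following is a minimal sketch of Equation (12), assuming a helper `ncn_score(a, b)` that returns the NCN logit for link \((a,b)\) as a scalar tensor; the function name and dense-adjacency layout are ours, not the released implementation.

```python
import torch

def cnc_weights(A: torch.Tensor, i: int, j: int, ncn_score) -> torch.Tensor:
    """Equation (12): soft probability P_uij that u is a common neighbor of (i, j)."""
    Ni, Nj = A[i] > 0, A[j] > 0
    P = torch.zeros(A.shape[0])
    P[Ni & Nj] = 1.0                                  # both edges observed
    for u in torch.nonzero(Nj & ~Ni).flatten():       # (i, u) unobserved
        P[u] = torch.sigmoid(ncn_score(i, int(u)))
    for u in torch.nonzero(Ni & ~Nj).flatten():       # (j, u) unobserved
        P[u] = torch.sigmoid(ncn_score(j, int(u)))
    return P                                          # all other entries stay 0
```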
#### Neural Common Neighbor with Completion (NCNC)

\[\text{NCNC}(i,j,A,X)=\sum_{u\in V}P_{uij}\,\text{MPNN}(u,A,X). \tag{13}\]

Furthermore, this strategy can be extended to an iterative algorithm. Starting from the original graph \(A^{(0)}=A\), we _iteratively_ complete the graph to get \(A^{(k)}\) from \(A^{(k-1)}\) until the final iteration \(K\). We call this model NCNC with a maximum number of iterations \(K\) (NCNC-\(K\)). For \(k=0\), NCNC-\(0\) is the same as NCN. At iteration \(k>0\), NCNC-\(k\) on the target link \((i,j)\), denoted by \(\text{NCNC}(i,j,A^{(k)},X)\), has the following form:

\[\sum_{u\in N(i,A)\bigcap N(j,A)}\text{MPNN}(u,A,X)+\sum_{u\in N(j,A)-N(i,A)}\sigma\big(\text{NCNC}(i,u,A^{(k-1)},X)\big)\,\text{MPNN}(u,A,X)+\sum_{u\in N(i,A)-N(j,A)}\sigma\big(\text{NCNC}(j,u,A^{(k-1)},X)\big)\,\text{MPNN}(u,A,X),\]

where the completion probabilities of unobserved edges are produced by the previous iteration's model, mirroring Equation (12).
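Continuing the Equation (12) sketch above, NCNC (Equation (13)) pools MPNN embeddings with the soft weights, and the iterative variant swaps in the previous iteration's model as the scorer. The `head` below is a hypothetical MLP mapping the pooled vector to a logit; it is our assumption, not a documented component.

```python
import torch

def ncnc(h: torch.Tensor, A: torch.Tensor, i: int, j: int, score) -> torch.Tensor:
    """Equation (13): pool MPNN embeddings h with the soft weights of Eq. (12)."""
    P = cnc_weights(A, i, j, score)   # from the sketch above
    return P @ h                      # [d] completed pairwise representation

# NCNC-K iterates the completion: at iteration k, the scorer for unobserved
# edges inside cnc_weights is the NCNC-(k-1) model instead of plain NCN, e.g.:
#   score_k = lambda a, b: head(ncnc(h, A, a, b, score_km1))
```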
NCN's inference time is close to that of GAE, as they both need to run MPNN only once. In contrast, SEAL, which reruns MPNN for each target link, takes \(86\) times more time than NCN with a small batch size of \(2048\), and the disadvantage becomes more significant with a larger batch size. To our surprise, Neo-GNN is even slower than SEAL, even though it only needs to run MPNN once. The reason is that it uses pairwise features that are much more complex and time-consuming than CN. We also conduct the scalability comparison on other datasets and observe the same results (see Appendix E).

### Ablation Analysis

To validate the design of Neural Common Neighbor, we conduct a thorough ablation analysis (see Table 3). GAE uses node representations only. GAE+CN further utilizes CN as a pairwise feature. On OGB datasets, GAE+CN outperforms GAE by \(70\%\), and NCN further achieves a \(5.5\%\) higher score than GAE+CN, which implies that the learnable pairwise representations we propose are effective. On Planetoid datasets, though, the improvement is minimal, which indicates that these datasets require less common neighbor information. NCN-diff is NCN plus neighborhood difference information, and NCN2 is NCN with high-order neighborhood overlap. For a target link \((i,j)\), compared with NCN, NCN-diff further sums representations of nodes in \(N(i,A)-N(j,A)\) and \(N(j,A)-N(i,A)\), and NCN2 further uses \(N(i,A^{2})\bigcap N(j,A)\) and \(N(i,A)\bigcap N(j,A^{2})\). As we can see, NCN, NCN-diff, and NCN2 have similar performance on most datasets, verifying that first-order neighborhood overlap may be enough. The low score of NCN-diff on DDI might be because DDI's high node degree makes the neighborhood difference noisy and uninformative. NoTLR is NCN without TLR. On average, TLR leads to a \(6\%\) performance gain. NCNC-2 is NCNC with a maximum number of iterations of \(2\); it achieves performance similar to NCNC. Therefore, setting the maximum number of iterations to \(1\) is enough for our datasets. Both intervention methods boost NCN significantly.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & **Cora** & **Citeseer** & **Pubmed** & **Collab** & **PPA** & **Citation2** & **DDI** \\ \hline Metric & HR@100 & HR@100 & HR@100 & HR@50 & HR@100 & MRR & HR@20 \\ \hline **CN** & \(33.92\pm 0.46\) & \(29.79\pm 0.90\) & \(23.13\pm 0.15\) & \(56.44\pm 0.00\) & \(27.65\pm 0.00\) & \(51.47\pm 0.00\) & \(17.73\pm 0.00\) \\ **AA** & \(39.85\pm 1.34\) & \(35.19\pm 1.33\) & \(27.38\pm 0.11\) & \(64.35\pm 0.00\) & \(32.45\pm 0.00\) & \(51.89\pm 0.00\) & \(18.61\pm 0.00\) \\ **RA** & \(41.07\pm 0.48\) & \(33.56\pm 0.17\) & \(27.03\pm 0.35\) & \(64.00\pm 0.00\) & \(49.33\pm 0.00\) & \(51.98\pm 0.00\) & \(27.60\pm 0.00\) \\ \hline **GCN** & \(66.79\pm 1.65\) & \(67.08\pm 2.94\) & \(53.02\pm 1.39\) & \(44.75\pm 1.07\) & \(18.67\pm 1.32\) & \(84.74\pm 0.21\) & \(37.07\pm 5.07\) \\ **SAGE** & \(55.02\pm 4.03\) & \(57.01\pm 3.74\) & \(39.66\pm 0.72\) & \(48.10\pm 0.81\) & \(16.55\pm 2.40\) & \(82.60\pm 0.36\) & \(53.90\pm 4.74\) \\ \hline **SEAL** & \(81.71\pm 1.30\) & \(83.89\pm 2.15\) & \(75.54\pm 1.32\) & \(64.74\pm 0.43\) & \(48.80\pm 3.16\) & \(87.67\pm 0.32\) & \(30.56\pm 3.86\) \\ **NBFNet** & \(71.65\pm 2.27\) & \(74.07\pm 1.75\) & \(58.73\pm 1.99\) & OOM & OOM & OOM & \(4.00\pm 0.58\) \\ \hline **Neo-GNN** & \(80.42\pm 1.31\) & \(84.67\pm 2.16\) & \(73.93\pm 1.19\) & \(57.52\pm 0.37\) & \(49.13\pm 0.60\) & \(87.26\pm 0.84\) & \(63.57\pm 3.52\) \\ **BUDDY** & \(88.00\pm 0.44\) & \(92.93\pm 0.27\) & \(74.10\pm 0.78\) & \(65.94\pm 0.58\) & \(49.85\pm 0.20\) & \(87.56\pm 0.11\) & \(78.51\pm 1.36\) \\ \hline **NCN** & \(89.05\pm 0.96\) & \(91.56\pm 1.43\) & \(79.05\pm 1.16\) & \(64.76\pm 0.87\) & \(61.19\pm 0.85\) & \(88.64\pm 0.14\) & \(82.32\pm 6.10\) \\ **NCNC** & \(\mathbf{89.65\pm 1.36}\) & \(\mathbf{93.47\pm 0.95}\) & \(\mathbf{81.29\pm 0.95}\) & \(\mathbf{66.61\pm 0.71}\) & \(\mathbf{61.42\pm 0.73}\) & \(\mathbf{89.12\pm 0.40}\) & \(\mathbf{84.11\pm 3.67}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results on link prediction benchmarks. The format is average score \(\pm\) standard deviation. OOM means out of GPU memory.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & **Cora** & **Citeseer** & **Pubmed** & **Collab** & **PPA** & **Citation2** & **DDI** \\ \hline Metric & HR@100 & HR@100 & HR@100 & HR@50 & HR@100 & MRR & HR@20 \\ \hline **CN** & \(33.92\pm 0.46\) & \(29.79\pm 0.90\) & \(23.13\pm 0.15\) & \(56.44\pm 0.00\) & \(27.65\pm 0.00\) & \(51.47\pm 0.00\) & \(17.73\pm 0.00\) \\ **GAE** & \(89.01\pm 1.32\) & \(91.78\pm 0.94\) & \(78.81\pm 1.64\) & \(36.96\pm 0.95\) & \(19.49\pm 0.75\) & \(79.95\pm 0.09\) & \(61.53\pm 9.59\) \\ **GAE+CN** & \(88.61\pm 1.31\) & \(91.75\pm 0.98\) & \(79.04\pm 0.83\) & \(64.47\pm 0.14\) & \(51.83\pm 0.58\) & \(87.81\pm 0.06\) & \(80.71\pm 5.56\) \\ \hline **NCN2** & \(88.87\pm 1.34\) & \(91.36\pm 1.02\) & \(80.21\pm 0.78\) & \(65.43\pm 0.46\) & OOM & OOM & OOM \\ **NCN-diff** & \(89.12\pm 1.04\) & \(91.96\pm 1.23\) & \(80.28\pm 0.88\) & \(64.08\pm 0.40\) & \(57.86\pm 1.26\) & \(86.68\pm 0.16\) & \(17.67\pm 8.70\) \\ \hline **NCN** & \(89.05\pm 0.96\) & \(91.56\pm 1.43\) & \(79.05\pm 1.16\) & \(64.76\pm 0.87\) & \(61.19\pm 0.85\) & \(88.09\pm 0.06\) & \(82.32\pm 6.10\) \\ **NoTLR** & \(85.46\pm 1.65\) & \(88.08\pm 1.23\) & \(76.59\pm 1.33\) & \(64.22\pm 0.49\) & \(60.66\pm 0.63\) & \(88.64\pm\) & \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on link prediction benchmarks.

## 7 Conclusion

We propose Neural Common Neighbor (NCN), a scalable and powerful model for link prediction that leverages learnable pairwise features. Furthermore, we study the graph incompleteness problem. By visualizing the distribution shift and the performance degradation caused by incompleteness, we motivate two tricks, target link removal and common neighbor completion. Combining NCN with the two tricks, our final model NCNC outperforms state-of-the-art baselines on all the datasets in both speed and performance.
2304.04978
StageInteractor: Query-based Object Detector with Cross-stage Interaction
Previous object detectors make predictions based on dense grid points or numerous preset anchors. Most of these detectors are trained with one-to-many label assignment strategies. In contrast, recent query-based object detectors depend on a sparse set of learnable queries and a series of decoder layers. The one-to-one label assignment is independently applied on each layer for deep supervision during training. Despite the great success of query-based object detection, this one-to-one label assignment strategy requires the detectors to have strong fine-grained discrimination and modeling capacity. To solve the above problems, in this paper, we propose a new query-based object detector with cross-stage interaction, coined StageInteractor. During the forward propagation, we come up with an efficient way to improve this modeling ability by reusing dynamic operators with lightweight adapters. As for the label assignment, a cross-stage label assigner is applied subsequent to the one-to-one label assignment. With this assigner, the training target class labels are gathered across stages and then reallocated to proper predictions at each decoder layer. On the MS COCO benchmark, our model improves the baseline by 2.2 AP, and achieves 44.8 AP with ResNet-50 as backbone, 100 queries, and 12 training epochs. With longer training time and 300 queries, StageInteractor achieves 51.1 AP and 52.2 AP with ResNeXt-101-DCN and Swin-S, respectively.
Yao Teng, Haisong Liu, Sheng Guo, Limin Wang
2023-04-11T04:50:13Z
http://arxiv.org/abs/2304.04978v2
# StageInteractor: Query-based Object Detector with Cross-stage Interaction

###### Abstract

Previous object detectors make predictions based on dense grid points or numerous preset anchors. Most of these detectors are trained with one-to-many label assignment strategies. In contrast, recent query-based object detectors depend on a sparse set of learnable queries and a series of decoder layers. The one-to-one label assignment is independently applied on each layer for deep supervision during training. Despite the great success of query-based object detection, this one-to-one label assignment strategy requires the detectors to have strong fine-grained discrimination and modeling capacity. To solve the above problems, in this paper, we propose a new query-based object detector with cross-stage interaction, coined StageInteractor. During the forward propagation, we come up with an efficient way to improve this modeling ability by reusing dynamic operators with lightweight adapters. As for the label assignment, a cross-stage label assigner is applied subsequent to the one-to-one label assignment. With this assigner, the training target class labels are gathered across stages and then reallocated to proper predictions at each decoder layer. On the MS COCO benchmark, our model improves the baseline by 2.2 AP, and achieves 44.8 AP with ResNet-50 as backbone, 100 queries, and 12 training epochs. With longer training time and 300 queries, StageInteractor achieves 51.1 AP and 52.2 AP with ResNeXt-101-DCN and Swin-S, respectively.

† Corresponding author.

## 1 Introduction

Object detection is a fundamental topic in computer vision and acts as a cornerstone for many downstream tasks [50, 10]. It aims to localize and categorize a set of objects in one image. Over the past few decades, spatially dense priors have been widely applied in various detectors. These detectors make predictions based on either a large quantity of pre-defined anchors covering the whole image [20, 43, 3, 35, 32, 42] or dense grid points in the feature map of the image [28, 15, 67, 51, 41, 60]. To deliver supervision signals to these detectors, most researchers employ the _one-to-many label assignment_ strategy [65, 68, 27, 19, 18, 62] (_i.e._, the classification label and localization target of one ground-truth object can be assigned to multiple object predictions). Although this paradigm is widely used in object detection, it suffers from redundant and near-duplicate predictions due to such label assignment [47], and thus relies heavily on specific post-processing algorithms for duplicate removal [2, 25, 24].

Recently, DETR [4] and its variants [70, 48, 17, 8, 34, 16, 39, 9] have opened a new era of object detection. These query-based object detectors get rid of the dense prior and view object detection as a set prediction problem. Specifically, they use a sparse set of _queries_ (_i.e._, learnable embeddings) to progressively capture the characteristics and locations of objects with the help of a series of _decoder layers_. In each layer, image features are sampled and fused into the input queries via attention-like operations [54, 70] or dynamic mixing [48, 17]. Then, the transformed queries are decoded into object predictions and also serve as inputs of the next layer.
As for the training of this paradigm, a kind of _one-to-one label assignment_ (_i.e._, each ground-truth object is assigned to only one prediction), termed _bipartite matching_, is independently adopted on each decoder layer as deep supervision [64, 1, 29]. For inference, only the high-confidence outputs from the last layer are taken for evaluation.

Figure 1: Convergence curves of our model and other query-based object detectors [4, 70, 48, 17] with ResNet-50 [23] on the MS COCO [33] minival set.

However, this one-to-one label assignment requires the detectors to have strong fine-grained discrimination and modeling capacity. On one hand, the strict bipartite matching requires the detector to capture details to distinguish the predictions. For example, as shown in Fig. 2(a), although the predicted boxes of a query (colored in red) cover most of the ground-truth object (Person) at each decoder layer, this object is assigned to the boxes of _different_ queries at _different_ stages (Stage 1, white box; Stage 6, red box). In other words, only one of the predicted boxes (red or white) can become the positive sample at each stage. To distinguish these boxes, the detector needs strong fine-grained discriminability to extract high-quality features from the image. Unfortunately, this goal is hard to fully accomplish. As a result, we are motivated to directly modify the supervision of each decoder layer, _i.e._, to introduce additional supervision from _other stages_ to assist the training of this layer, as shown in Fig. 2(b). On the other hand, a large modeling capacity is vital to fine-grained discrimination. To _efficiently_ improve this modeling ability, we resort to adding lightweight modules and reusing heavy dynamic operators in the decoder layers.

In this paper, we present a new paradigm, a query-based object detector with cross-stage interaction, coined StageInteractor. The interaction in our method lies in two aspects: cross-stage label assignment and cross-stage dynamic filter reuse. Specifically, during label assignment, a _cross-stage label assigner_ is applied subsequent to the vanilla bipartite matching on each decoder layer. This assigner collects the results of bipartite matching across stages, and then reallocates proper training target labels to each object prediction according to a score and an _index constraint_. As for the forward propagation, in each decoder layer, we _reuse_ the heavy dynamic operators of the preceding stages and add lightweight modules to increase the modeling capacity at a relatively low cost.

Experiments show that our model with ResNet-50 [23] as backbone can achieve 44.8 AP on the MS COCO validation set under the basic setting of 100 queries, with 27.5 AP\({}_{s}\), 48.0 AP\({}_{m}\), and 61.3 AP\({}_{l}\) on small, medium, and large object detection, respectively. Equipped with \(3\times\) training time, 300 queries, and more data augmentation, in line with other query-based detectors, our model can achieve 49.9, 51.1, and 52.2 AP with ResNet-101, ResNeXt-101-DCN [58, 69], and Swin-S [36] as backbones, under the setting of single-scale and single-model testing. Our model significantly outperforms the previous methods, as shown in Fig. 1, and it has become a new state-of-the-art query-based object detector.

## 2 Related Work

DETR [4] is an end-to-end query-based object detector without hand-crafted designs such as preset anchors and non-maximum suppression (NMS), but it suffers from the slow training convergence problem.
To handle this, a large number of works have been proposed. In this part, we divide these methods into two categories: modification of architectures and improvement of training procedures.

**Architecture.** Vanilla DETR took a transformer encoder-decoder architecture [54] as the detection head and used a set of object queries (content vectors) to encode the priors of object detection datasets. In Deformable DETR [70], the attention operator [54] was replaced with a multi-scale deformable attention module, and iterative bounding box refinement as well as a two-stage design, which enables the detector to adaptively generate queries with a dense prior, were also introduced. In Sparse R-CNN [48], the object query is decoupled into a content vector and an explicit bounding box, and image features are progressively aggregated into content vectors by ROIAlign [22] with these boxes and dynamic convolution. Moreover, Sparse R-CNN is a decoder-only architecture without any transformer encoder. Many works, such as Conditional DETR [38], DAB-DETR [34], SMCA [16], and REGO [9], have also studied how to introduce spatial priors for proper features to accelerate convergence. AdaMixer [17] improved Sparse R-CNN via deformable operators and spatial dynamic convolution to increase adaptability. In this paper, we improve the query-based object detector from a new perspective of architecture scalability [66, 46, 55]: we reuse dynamic operators among decoder layers to capture more diverse and complex representations.

Figure 2: The results of label assignment at various stages. The green box denotes the ground-truth object Person. The red and white boxes denote object predictions derived from two different queries. Pos and Neg denote the positive sample and the negative sample, respectively. (a) The white box is assigned the ground-truth object Person by bipartite matching at the first stage, while the red box is not; the opposite is true for the sixth stage. (b) With our cross-stage label assigner, the red box in the first stage can also be assigned the ground-truth Person.

**Training Procedure.** In vanilla DETR, a set prediction loss [4] is adopted for training. Recently, many papers have analyzed how to accelerate the convergence of DETR via improved training procedures. To verify whether the instability of the Hungarian loss slows down the convergence, Sun _et al._ [49] utilized matching distillation, where a pre-trained DETR provides the label assignment to train another model, and they found this instability only influences the convergence in the early few epochs. DN-DETR [30] presented a denoising training strategy where a set of additional noised ground-truth objects is passed through the decoder to reconstruct the corresponding raw objects. DINO-DETR [63] improved DN-DETR via contrastive denoising training with hard negative samples. Group DETR [6] introduced multiple groups of object queries for global one-to-many label assignment during training but maintained one-to-one label assignment within each group, and thus Group DETR can achieve duplicate removal with one group of queries.
Hybrid Matching [26] was also proposed to combine one-to-one and one-to-many label assignment in one query-based object detector with a large number of queries; it has three types of implementations: the hybrid branch scheme (one-to-many matching for one group of queries, one-to-one matching for another), the hybrid epoch scheme (one-to-many matching in early epochs, one-to-one matching in late epochs), and the hybrid layer scheme (one-to-many matching in early layers, one-to-one in late layers). Different from the previous methods, in this paper, we focus on the calibration of label assignment without adding additional queries. We collect training target labels across stages with a cross-stage label assigner and then select proper targets to act as the supervision of each object prediction.

## 3 Proposed Approach

In this paper, we focus on cross-stage interaction in query-based object detectors because it can well mitigate the misalignment between the decoders and the supervisions in an object detector. We first revisit the state-of-the-art query-based object detectors, especially AdaMixer [17], and then elaborate on our proposed cross-stage interaction.

### 3.1 Preliminary on query-based object detectors

Generally, the architecture of query-based object detectors is composed of four parts: object queries, a backbone, a series of encoders, and decoders. Distinctively, AdaMixer removes the encoders and still maintains the desired performance. As shown in Fig. 4, it consists of object queries, a backbone, and a series of decoders.

**Object Query.** The initial object queries are just a set of learnable embeddings. Recent object detectors [34, 48, 17] decompose them into content vectors and positional vectors. The content vector is a vector \(\mathbf{v}\in\mathbb{R}^{D}\). The positional vector is presented in the format of box coordinates; for example, in AdaMixer, it contains information about the center point, scale, and aspect ratio of an individual box.

**Decoder Layer.** In a query-based detector, the decoder layers are stacked multiple times to form a cascade structure. Each decoder layer is generally composed of three components: multi-head self-attention (MHSA), a dynamic interaction module, and feed-forward networks (FFNs). DETR-like models use multi-head cross-attention for the dynamic interaction, while AdaMixer adopts a feature sampler and a dynamic mixing module, as shown in Fig. 4. The object queries are sequentially passed through these modules. Specifically, the queries are _first_ processed by the multi-head self-attention module. _Then_, its outputs, the updated content vectors, together with the positional vectors, are fed into the feature sampler. In this sampler, each query is allocated a unique group of regional multi-scale image features, _i.e._, _sampled features_, by using the queries and bilinear interpolation. _Subsequently_, adaptive mixing is performed on the sampled features with dynamic filters generated from the content vectors, and its outputs are aggregated into the content vectors. _Last_, by means of FFNs, the queries updated by these modules are decoded into object predictions, _i.e._, the relative scaling and offsets to the positional vectors (bounding boxes), and the classification score vectors. These updated queries also serve as inputs of the next stage.

Figure 3: Overview. The cross-stage interaction incorporates two parts: cross-stage label assignment and cross-stage dynamic filter reuse. During the forward propagation, the dynamic filters in each stage of the decoder layers are reused in subsequent stages, _i.e._, we stack them with specific lightweight adapters to increase the depth of each decoder layer. As for the label assignment, our cross-stage label assigner gathers the results of bipartite matching across stages and then selects proper target labels as supervision.
Note that any query has one and only one corresponding prediction in every decoder layer. Thus, we simply represent the predictions derived from the same initial query with one index, _i.e._, the _query index_.

**Training and testing.** In each decoder layer of a query-based detector, bipartite matching is directly adopted between the ground-truth objects and the predictions. Once some predictions match the ground truth, these predictions are deemed positive samples (assigned foreground classification labels and supervision for localization), while the others are negative (assigned the background label). Focal loss [32] serves as the classification loss, and GIoU loss [44] with \(\ell_{1}\) loss acts as the localization loss. During inference, only the outputs with high confidence from the last layer are used for evaluation.
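As an illustration of this per-layer one-to-one assignment, the sketch below runs a Hungarian solver on a classification-plus-box cost matrix. The cost weights are illustrative assumptions, and the GIoU term is omitted for brevity; this is not the paper's exact matcher.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assign(cls_prob, pred_boxes, gt_labels, gt_boxes,
                      w_cls=2.0, w_l1=5.0):
    """Bipartite matching for one decoder layer: each ground-truth object is
    matched to exactly one query by minimizing a joint cost.

    cls_prob: [Q, C] per-class probabilities; pred_boxes: [Q, 4]; gt_boxes: [G, 4].
    """
    cost_cls = -cls_prob[:, gt_labels]                            # [Q, G]
    cost_l1 = np.abs(pred_boxes[:, None] - gt_boxes[None]).sum(-1)  # [Q, G]
    cost = w_cls * cost_cls + w_l1 * cost_l1   # a GIoU term would be added here
    q_idx, g_idx = linear_sum_assignment(cost)                    # one-to-one
    return list(zip(q_idx, g_idx))  # matched (query, ground-truth) pairs
```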
### 3.2 Cross-stage Label Assignment

To mitigate the training difficulty caused by bipartite matching in query-based object detectors, we propose a new _cross-stage label assigner_ to modify the results of the preceding bipartite matching. As depicted in Fig. 3, our assigner first gathers these assignments across the _stages_ (_i.e._, decoder layers) and then selects appropriate training target class labels for each prediction.

When gathering training targets, the cross-stage label assigner adheres to an _index constraint_: each prediction may only access the targets of predictions sharing the same _query index_ (defined in Sec. 3.1) with it. The motivation behind this constraint is that the supervision of a single query may vary across stages even though its predicted boxes continuously cover the same ground-truth object, as shown in Fig. 2(a). To alleviate this inconsistency for each query, we leverage its assigned targets provided by bipartite matching from multiple stages. When introducing targets, we use a score to determine whether a match between a target and a prediction is suitable to be shared across stages.

**Algorithm.** Thanks to the use of Focal loss [32] with binary cross-entropy in prevailing query-based object detectors [17, 48, 30, 63], the multi-class classification task can be viewed as binary classification on each category, and there is no need to consider a separate background category during training. More importantly, _our cross-stage label assignment can be conducted on each category independently_. The specific algorithm is as follows.

Our cross-stage label assignment is performed between two stages under one type of condition. As shown in Fig. 5, given a stage \(\Phi_{i}\) and another stage \(\Phi_{j}\), where \(j\in[\alpha_{i},\beta_{i}]\) and \(\alpha_{i},\beta_{i}\) denote the lower and upper bounds of stages, we first align the queries from these two stages according to the index constraint. Then, we remove the supervisions of the stage \(\Phi_{i}\) provided by the preceding bipartite matching. We also create a supervision candidate set for each query from \(\Phi_{i}\), adding only the localization supervision from the preceding matching as the initial elements. Subsequently, we select the training targets from the stage \(\Phi_{j}\) as candidate supervisions according to the condition

\[\vartheta_{i}\big(q,t\big)\geq\eta_{i}, \tag{1}\]

where \(\vartheta_{i}\big(q,t\big)\) denotes a score between the query \(q\) and the ground-truth object \(t\) at the stage \(\Phi_{i}\); here, the object \(t\) needs to be assigned to query \(q\) at stage \(\Phi_{j}\) by the bipartite matching, and \(\eta_{i}\) denotes a threshold. In practice, we follow the classical setting [43], using the IoU as the score and setting the threshold to 0.5. After obtaining the candidate targets, we transform their target class labels into one-hot vectors and take the element-wise maximum of these vectors. If a query is not assigned any ground-truth object, we take it as a background sample and supervise it with a zero vector in Focal loss [32]. A code sketch of this procedure is given at the end of this subsection.

Figure 4: Overview of AdaMixer.

Figure 5: The process of our cross-stage label assignment. Given a stage \(\Phi_{i}\), we enumerate other decoder layers (denoted as \(\Phi_{j}\)) and select their targets based on the condition in Eq. (1). The selected targets are gathered into the candidate sets for each query at the stage \(\Phi_{i}\). The elements in each candidate set are formed into the final supervision.

**Discussion.** Although our label assignment seems to have something in common with existing works like TSP-RCNN [49] and Hybrid Layer Matching [26], the discrepancy between their designs and ours cannot be ignored. (1) In TSP-RCNN, first, the idea from Faster R-CNN [43] is directly borrowed into the set prediction problem (_i.e._, a ground-truth object is only assigned to a proposal that shares an IoU score greater than 0.5 with it). TSP-RCNN adopts such a strategy for both classification and localization; differently, we can apply it only to the _classification_ task, as shown in Tab. 8. Second, TSP-RCNN adopts such a strategy with dense proposals; differently, we apply it with _sparse queries_. (2) For Hybrid Layer Matching, ground-truth objects are simply replicated in the first four stages, and then bipartite matching is conducted. Differently, we do not modify the set of ground-truth objects: we gather the results of bipartite matching across stages and then select only some targets as supervision. Also, we observe that Hybrid Layer Matching is incompatible with models given few queries. In Tab. 1, we implement it under the basic setting of our detector with 100 queries; the results show that it is totally infeasible under such a circumstance. In a nutshell, our approach is greatly different from these methods.

\begin{table} \begin{tabular}{l|c c c} \hline \hline Method & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline Baseline & 43.0 & 61.5 & 46.1 \\ Hybrid Layer\({}^{\dagger}\)[26] & 41.8 (-1.2) & 60.5 (-1.0) & 44.7 (-1.4) \\ Ours & 44.8 (+1.8) & 63.0 (+1.5) & 48.4 (+2.3) \\ \hline \hline \end{tabular} \end{table} Table 1: We reproduce (\(\dagger\)) the hybrid matching scheme and compare it with our cross-stage label assigner under 100 queries.
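The sketch referenced above gives one possible reading of this procedure, assuming precomputed per-stage matching results and scores; the data layout and names are ours, not the released implementation.

```python
import numpy as np

def cross_stage_targets(per_stage_match, per_stage_iou, num_classes, num_queries,
                        stage_i, lo, hi, eta=0.5):
    """Sketch of the cross-stage label assigner for classification targets.

    per_stage_match[j][q] = (gt_label, gt_index) assigned to query q at stage j
    by bipartite matching, or None for background. per_stage_iou[i][q][t] is the
    score theta_i(q, t). The index constraint: a query only inherits targets
    from the query with the same index at other stages.
    """
    targets = np.zeros((num_queries, num_classes))        # one-hot candidates
    for j in range(lo, hi + 1):                           # stages Phi_j, j in [alpha_i, beta_i]
        for q in range(num_queries):                      # same query index across stages
            match = per_stage_match[j][q]
            if match is None:
                continue                                  # background at stage j
            gt_label, gt_idx = match
            if per_stage_iou[stage_i][q][gt_idx] >= eta:  # Eq. (1): theta_i(q, t) >= eta_i
                onehot = np.eye(num_classes)[gt_label]
                targets[q] = np.maximum(targets[q], onehot)  # element-wise max
    return targets  # all-zero rows are background samples under Focal loss
```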
### Cross-stage Dynamic Filter Reuse

The model capacity is essential for neural networks to capture complex representations [46, 66]. Since one of the major characteristics shared by query-based detectors is a series of decoder layers, we resort to modifying the decoder to improve this capacity. A straightforward modification is to add attention-like operators. The prevailing DETR-like detectors [34, 30, 63] perform deformable cross-attention [70] once along the spatial dimension in each stage, so directly adding this attention is feasible. By contrast, other detectors [48, 17] like AdaMixer perform more than one dynamic mixing at each stage, and they require a huge number of parameters [56] to generate the corresponding dynamic filters. Taking the channel mixing as an example, in each decoder layer, the specific process of generating a filter is as follows: \[\mathbf{M}_{i}=\mathbf{W}_{0}^{(i)}+\sum_{d=1}^{D}\mathbf{W}_{d}^{(i)}\mathbf{v}_{i,q,d}\in\mathbb{R}^{D_{C}\times D_{C}}, \tag{2}\] where \(\mathbf{v}_{i,q,d}\) denotes the \(d\)-th element of the content vector of query \(q\) at the stage \(\Phi_{i}\), \(\mathbf{W}_{d}^{(i)}\in\mathbb{R}^{D_{C}\times D_{C}}\) denotes a learnable weight matrix, the number of such matrices in a stage is \(D\), \(D_{C}\) denotes the channel dimension of the input features, and \(\mathbf{M}_{i}\) serves as the kernel for channel mixing (\(1\times 1\) convolution) [52]. The parameters used to generate a single filter alone already exceed \(D\times D_{C}^{2}\). Thereby, it is impractical to directly stack more decoder layers given limited resources. Fortunately, a large quantity of dynamic filters is generated in these cascade decoder layers, and they are only used once, so these operators have the potential to be reused with _lightweight modules_ across stages. As depicted in Fig. 6, we propose a cascade dynamic mixing module with a filter bank as an extension of the original channel mixing. The filters generated in each stage are stored in the filter bank for future use. Given a stage, the _filter adapters_ update these stored filters with the ones from the current stage. In each adapter, the process of updating filters is as follows: \[\begin{split}\mathbf{w}_{j}^{(1)}&=\sigma\big(\mathbf{W}_{j}^{(1)}\mathbf{v}_{i,q}\big)\in\mathbb{R}^{D_{C}},\qquad\mathbf{w}_{j}^{(2)}=\sigma\big(\mathbf{W}_{j}^{(2)}\mathbf{v}_{i,q}\big)\in\mathbb{R}^{D_{C}},\\ \mathbf{M}_{j,i}^{\prime}&=\big(\mathbf{1}+\mathbf{w}_{j}^{(1)}(\mathbf{w}_{j}^{(2)})^{\top}\big)\odot\mathbf{M}_{j},\end{split}\tag{3}\] where \(\mathbf{1}\) denotes an all-ones matrix, \(\mathbf{W}_{j}^{(1)},\mathbf{W}_{j}^{(2)}\) denote learnable weights, \(\sigma(\cdot)\) denotes the sigmoid function, \(\odot\) denotes the element-wise product, \(\mathbf{M}_{j}\) denotes a previous filter, \(j\in[\gamma_{i},i)\), \(\gamma_{i}\) is a lower bound, \(\mathbf{M}_{j,i}^{\prime}\) is used for subsequent modules, and the number of these updated filters is \(\Delta\gamma_{i}=i-\gamma_{i}\). We empirically find that more filters lead to more performance gain, so \(\Delta\gamma_{i}=i-1\) is the best choice. Note that this adapter only costs \(2D_{C}^{2}\) parameters, which is more lightweight than generating a filter from scratch as in Eq. (2). Then, the updated filters are used in the cascade dynamic mixing. In this structure, since each dynamic mixing is followed by a layer normalization with an activation function, we insert a lightweight linear layer between every two dynamic mixings, consistent with [23].
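A minimal sketch of the adapter in Eq. (3) is given below. Since the update equation is truncated in the text, the rank-one gating used here is our reading of the listed symbols (all-ones matrix, element-wise product, stored filter \(\mathbf{M}_{j}\)), not the released implementation; tensor shapes and the adapter input dimension are likewise assumptions.

```python
import torch
import torch.nn as nn

class ChannelFilterAdapter(nn.Module):
    """Updates a stored D_C x D_C channel-mixing filter with the current
    content vector, following Eq. (3). Costs 2 * D_C^2 parameters, far less
    than generating a filter from scratch as in Eq. (2)."""

    def __init__(self, d_c):
        super().__init__()
        # we assume the adapter input has dimension D_C, matching the stated
        # 2 * D_C^2 parameter count
        self.w1 = nn.Linear(d_c, d_c, bias=False)        # W_j^(1)
        self.w2 = nn.Linear(d_c, d_c, bias=False)        # W_j^(2)

    def forward(self, content_vec, stored_filter):
        # content_vec: (B, N, D_C); stored filter M_j: (B, N, D_C, D_C)
        a = torch.sigmoid(self.w1(content_vec))          # w_j^(1), entries in (0, 1)
        b = torch.sigmoid(self.w2(content_vec))          # w_j^(2), entries in (0, 1)
        gate = 1.0 + a.unsqueeze(-1) * b.unsqueeze(-2)   # all-ones plus outer product
        return gate * stored_filter                      # element-wise product -> M'_{j,i}
```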
Moreover, we can modify the original spatial mixing with dynamic filter reuse. Unlike the channel mixing, the original dynamic spatial mixing costs a lot of computational resources on its output features due to its expansion [45] on the spatial dimension. To tackle this problem, we also adopt the bank to store the filters generated for spatial mixing. Then, in each decoder layer, we update these filters with adapters and concatenate them with the current filter along the output dimension. This new filter is used to perform spatial mixing only once, rather than iteratively. This approach allows us to reduce the parameters of the filter generator of the spatial mixing, _i.e_., we employ the filters from preceding stages to replace a proportion of the filter from the current layer.

\begin{table} \begin{tabular}{l|c c c} \hline \hline Method & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline Baseline & 43.0 & 61.5 & 46.1 \\ Hybrid Layer\({}^{\dagger}\) [26] & 41.8 (-1.2) & 60.5 (-1.0) & 44.7 (-1.4) \\ Ours & 44.8 (+1.8) & 63.0 (+1.5) & 48.4 (+2.3) \\ \hline \hline \end{tabular} \end{table} Table 1: We reproduce (\(\dagger\)) the hybrid matching scheme, and compare it with our cross-stage label assigner with 100 queries.

Figure 6: **The dynamic mixing with filter reusing on sampled features**. The content vectors dynamically generate filters through linear layers. These filters are used for mixing on sampled features, and then are stored into a dynamic filter bank for subsequent stages. Previous filters stored in the bank are also reused for mixing.

## 4 Experiments

### Implementation Details

The experiments are performed on the MS COCO [33] object detection dataset, where the train2017 split and the val2017 split are used for training and testing. All of our experiments on AdaMixer are based on the mmdetection codebase [5]. The experiments on DN-DETR and DINO are based on the DETREX codebase [13]. Convolutional neural networks [23, 59] or vision transformers [21, 36, 14] can be taken as the backbone network. 8 RTX 2080ti GPUs with 11G memory are enough to train our model with ResNet-50 and 100 queries. For larger backbones or more queries, we resort to 8 V100 GPUs with 32G memory. The cross-stage label assignment is applied on each decoder layer. The reuse of spatial dynamic filters is not performed on the first two decoder layers. More implementation details are in the Appendix.

### Comparison to State-of-the-Art Detectors

Our model significantly outperforms the previous methods, as shown in Fig. 1, and it has become a new state-of-the-art query-based object detector. As shown in Tab. 2, with the \(1\times\) training scheme and ResNet-50 as the backbone, our detector outperforms various methods on the COCO minival set even with only 100 queries. Our model with ResNet-50 [23] as backbone achieves 44.8 AP on the MS COCO validation set under the basic setting of 100 object queries, with 27.5 AP\({}_{s}\), 48.0 AP\({}_{m}\) and 61.3 AP\({}_{l}\) on small, medium and large object detection, respectively. When equipped with 500 queries, our detector performs better and achieves 46.9 AP. As depicted in Tab. 3, equipped with \(3\times\) training time, 300 queries and more data augmentation in line with other query-based object detectors, our model achieves 48.9, 49.9, 51.1, and 52.2 AP with ResNet-50, ResNet-101, ResNeXt-101-DCN [58, 69], and Swin-S [36] as backbones, under the setting of single-scale and single-model testing. Moreover, if we extend the number of queries to 900 and use tricks (_i.e_., adding more sampling points at each stage), our model achieves 50.8 AP with ResNet-50. More importantly, our designs can be applied to state-of-the-art DETR-like detectors such as DN-DETR and DINO. Different from the cross-stage label assignment, our cross-stage filter reuse is tailored for AdaMixer; thus, in order to achieve more model capacity, we opt to simply double the cross-attention, because this attention-like operation is more lightweight than the dynamic mixing. In Tab. 4, our designs yield more than +0.5 AP for all detectors. To find the reason for the lower performance on DETRs compared with AdaMixer, we plot the instability of assigned labels (_i.e_., the probability of a ground-truth object transferring from one query to another across stages) in Fig. 7.
This figure shows that Deformable-DETR is more stable, and we attribute this to the powerful transformer encoder, which provides high-quality features for the decoder layers.

### Ablation Studies

Because computational resources are limited, ResNet-50 is employed as our backbone network, the number of queries is set to 100, and the \(1\times\) training scheme is used for our ablation studies. For simplicity, we adopt AdaMixer as the baseline.

**The effectiveness of our proposed modules.** In Tab. 5, we conduct experiments to verify the effectiveness of our proposed modules. The results show that our modules lead to a +2.2 AP gain, and each of them boosts the performance.

**The components of our label assigner.** We verify how the components of our cross-stage label assigner influence the performance in Tab. 6. In the first two lines, the results show that directly adding more supervision based on the index constraint only brings marginal gains. However, if we select the appropriate labels via scores like IoU, the performance is much better.

**The number of reused spatial filters.** We explore the effectiveness of reusing spatial filters, and the results are in Tab. 7. We find that a certain number of reused filters can bring a slight performance gain. This may result from the easier optimization brought by the fewer model parameters.

\begin{table} \begin{tabular}{l|c c c c c c} \hline detector & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline FCOS [51] & 38.7 & 57.4 & 41.8 & 22.9 & 42.5 & 50.1 \\ Cascade R-CNN [3] & 40.4 & 58.9 & 44.1 & 22.8 & 43.7 & 54.0 \\ GFocalV2 [31] & 41.1 & 58.8 & 44.9 & 23.5 & 44.9 & 53.3 \\ BorderDet [40] & 41.4 & 59.4 & 44.5 & 23.6 & 45.1 & 54.6 \\ Dynamic Head [12] & 42.6 & 60.1 & 46.4 & 26.1 & 46.8 & 56.0 \\ DETR [4] & 20.0 & 36.2 & 19.3 & 6.0 & 20.5 & 32.2 \\ Deformable DETR [70] & 35.1 & 53.6 & 37.7 & 18.2 & 38.5 & 48.7 \\ Sparse R-CNN [48] & 37.9 & 56.0 & 40.5 & 20.7 & 40.0 & 53.5 \\ AdaMixer [17] & 42.7 & 61.5 & 45.9 & 24.7 & 45.4 & 59.2 \\ AdaMixer\({}^{\dagger}\) [17] & 45.0 & 64.2 & 48.6 & 27.9 & 47.8 & 61.1 \\ \hline **StageInteractor** & **44.8** & **63.0** & **48.4** & **27.5** & **48.0** & **61.3** \\ **StageInteractor\({}^{\dagger}\)** & **46.9** & **65.2** & **51.1** & **30.0** & **49.7** & **62.3** \\ \hline \end{tabular} \end{table} Table 2: \(\mathbf{1}\times\) **training scheme (12 epochs)** performance of various detectors on COCO minival set with ResNet-50. 100 object queries is the default setting in our method. \(\dagger\) denotes 500 queries.

**Modifying the localization supervision.** In our cross-stage label assigner, only classification labels are used for gathering, while the supervision for localization is unchanged. Thus, we explore its influence by consistently updating the supervision of classification and localization, _i.e_., selecting the ground-truth boxes that have the greatest IoU with the predicted boxes across stages (a minimal sketch is given below). The results are shown in the first line of Tab. 8, and we find that the performance under the two settings is very close, so we do not modify the localization part for simplicity.
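For illustration, the following is a minimal sketch of this IoU-based localization variant, which was only tested in the ablation; the function name and tensor shapes are our assumptions, not the released code.

```python
import torch
from torchvision.ops import box_iou

def best_gt_across_stages(pred_boxes_i, cand_gt_boxes):
    """For each query at stage i, pick the candidate ground-truth box (gathered
    across stages under the index constraint) with the greatest IoU.

    pred_boxes_i:  (N, 4) predicted boxes at stage i.
    cand_gt_boxes: (N, S, 4), one candidate box per query and stage.
    Returns (N, 4) localization targets.
    """
    n, s, _ = cand_gt_boxes.shape
    ious = box_iou(pred_boxes_i, cand_gt_boxes.reshape(-1, 4))      # (N, N*S)
    ious = ious.reshape(n, n, s)[torch.arange(n), torch.arange(n)]  # (N, S), query-wise
    best = ious.argmax(dim=1)                                       # best stage per query
    return cand_gt_boxes[torch.arange(n), best]
```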
**The number of additional channel mixing.** We conduct ablation studies on how many previous filters are required for the channel mixing, and report the performance in Tab. 9. We do experiments on \(\max\{\Delta\gamma_{i}\}\): this means there are at most \(\max\{\Delta\gamma_{i}\}\) filters reused for the \(i\)-th stage. We find that more filters bring more performance gain. The results also show the scalability of the decoder layers in this framework.

**Selection of filters on mixing.** We explore whether the previous filters with adapters are suitable for our new channel mixing. As shown in Tab. 10, we find that using the adapters to fuse previous filters with the current ones is most suitable, which ensures both the adaptability of each stage and the diversity [66] of filters. More importantly, the existence of the additional dynamic mixing is necessary.

**Inference speed and training memory use.** As depicted in Tab. 11, with one TITAN XP and batch size 1, the speed of our model is 11.6 img/s while that of the baseline is 13.5 img/s, _i.e_., only \(\mathbf{1.16}\times\) slower. When we set the batch size to 2 for training, the additional operations only cost about 0.3G GPU memory (about 5%).

**Application scope of the cross-stage label assigner.** For the \(i\)-th stage, the application scope of the cross-stage label assigner is \([\alpha_{i},\beta_{i}]\). In this part, we explore the appropriate values of \(\alpha_{i}\) and \(\beta_{i}\). As shown in Tab. 12, we find that the best setting is \([i-1,L]\). Yet if too many previous ground-truth labels are included, like \(\alpha_{i}=1\), this hinders the last decoder layer from learning how to remove duplicates. Moreover, we conduct experiments to verify whether the cross-stage label assigner needs to be applied on all stages, because existing works [4, 26] demonstrate that only early stages can be applied with one-to-many label assignment. On the contrary, as shown in the last line of Tab. 12, we find that our cross-stage label assigner can also be applied on the last two stages, and this brings a performance gain. We speculate that this module helps remove some inappropriate supervision caused by the vanilla bipartite matching, which falls short of constraining IoU, and that it provides some proper supervision from the previous stage.

\begin{table} \begin{tabular}{c c|c c c c c c} \hline \hline CSLA & Reuse & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline & & 42.6 & 61.4 & 45.7 & 24.4 & 45.7 & 58.2 \\ & ✓ & 43.0 & 61.5 & 46.1 & 25.0 & 45.7 & 59.1 \\ ✓ & & 44.1 & 62.3 & 47.6 & 25.5 & 47.5 & 60.3 \\ ✓ & ✓ & **44.8** & **63.0** & **48.4** & **27.5** & **48.0** & **61.3** \\ \hline \hline \end{tabular} \end{table} Table 5: The effectiveness of our proposed modules. CSLA: cross-stage label assignment.

\begin{table} \begin{tabular}{l|l|l|c|c c c c c c} \hline \hline Detector & Backbone & Encoder/FPN & Epochs & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline DETR [4] & ResNet-50-DC5 & TransformerEnc & 500 & 43.3 & 63.1 & 45.9 & 22.5 & 47.3 & 61.1 \\ SMCA [16] & ResNet-50 & TransformerEnc & 50 & 43.7 & 63.6 & 47.2 & 24.2 & 47.0 & 60.4 \\ Deformable DETR [70] & ResNet-50 & DeformTransEnc & 50 & 43.8 & 62.6 & 47.7 & 26.4 & 47.1 & 58.0 \\ Anchor DETR [57] & ResNet-50-DC5 & DecoupTransEnc & 50 & 44.2 & 64.7 & 47.5 & 24.7 & 48.2 & 60.6 \\ Efficient DETR [61] & ResNet-50 & DeformTransEnc & **36** & 45.1 & 63.1 & 49.1 & 28.3 & 48.4 & 59.0 \\ Conditional DETR [38] & ResNet-50-DC5 & TransformerEnc & 108 & 45.1 & 65.4 & 48.5 & 25.3 & 49.0 & 62.2 \\ Sparse R-CNN [48] & ResNet-50 & FPN & **36** & 45.0 & 63.4 & 48.2 & 26.9 & 47.2 & 59.5 \\ REGO [9] & ResNet-50 & DeformTransEnc & 50 & 47.6 & 66.8 & 51.6 & 29.6 & 50.6 & 62.3 \\ DAB-D-DETR [34] & ResNet-50 & DeformTransEnc & 50 & 46.8 & 66.0 & 50.4 & 29.1 & 49.8 & 62.3 \\ DN-DAB-D-DETR [30] & ResNet-50 & DeformTransEnc & **12** & 43.4 & 61.9 & 47.2 & 24.8 & 46.8 & 59.4 \\ DN-DAB-D-DETR [30] & ResNet-50 & DeformTransEnc & 50 & 48.6 & 67.4 & 52.7 & 31.0 & **52.0** & 63.7 \\ AdaMixer [17] & ResNet-50 & - & **12** & 44.1 & 63.1 & 47.8 & 29.5 & 47.0 & 58.8 \\ AdaMixer [17] & ResNet-50 & - & **24** & 46.7 & 65.9 & 50.5 & 29.7 & 49.7 & 61.5 \\ AdaMixer [17] & ResNet-50 & - & **36** & 47.0 & 66.0 & 51.1 & 30.1 & 50.2 & 61.8 \\ StageInteractor & ResNet-50 & - & **12** & 46.3 & 64.3 & 50.6 & 29.8 & 49.6 & 60.8 \\ StageInteractor & ResNet-50 & - & **24** & 48.3 & 66.6 & 52.9 & 31.7 & 51.4 & 63.3 \\ StageInteractor & ResNet-50 & - & **36** & **48.9** & **67.4** & **53.4** & **31.7** & 51.8 & **64.3** \\ StageInteractor* & ResNet-50 & - & **36** & **50.8** & **66.8** & **55.9** & **34.0** & **54.6** & **66.2** \\ \hline DETR [4] & ResNet-101-DC5 & TransformerEnc & 500 & 44.9 & 64.7 & 47.7 & 23.7 & 49.5 & 62.3 \\ SMCA [16] & ResNet-101 & TransformerEnc & 50 & 44.4 & 65.2 & 48.0 & 24.3 & 48.5 & 61.0 \\ Efficient DETR [61] & ResNet-101 & DeformTransEnc & **36** & 45.7 & 64.1 & 49.5 & 28.2 & 49.1 & 60.2 \\ Conditional DETR [38] & ResNet-101-DC5 & TransformerEnc & 108 & 45.9 & 66.8 & 49.5 & 27.2 & 50.3 & 63.3 \\ Sparse R-CNN [48] & ResNet-101 & FPN & **36** & 46.4 & 64.6 & 49.5 & 28.3 & 48.3 & 61.6 \\ REGO [9] & ResNet-101 & DeformTransEnc & 50 & 48.5 & 67.0 & 52.4 & 29.5 & 52.0 & 64.4 \\ AdaMixer [17] & ResNet-101 & - & **36** & 48.0 & 67.0 & 52.4 & 30.0 & 51.2 & 63.7 \\ StageInteractor & ResNet-101 & - & **36** & **49.9** & **68.6** & **54.6** & **33.0** & **53.6** & **65.4** \\ \hline REGO [9] & ResNeXt-101 & DeformTransEnc & 50 & 49.1 & 67.5 & 53.1 & 30.0 & 52.6 & 65.0 \\ AdaMixer [17] & ResNeXt-101-DCN & - & **36** & 49.5 & 68.9 & 53.9 & 31.3 & 52.3 & 66.3 \\ StageInteractor & ResNeXt-101-DCN & - & **36** & **51.1** & **70.1** & **55.9** & **33.1** & **54.2** & **66.8** \\ \hline AdaMixer [17] & Swin-S & - & **36** & 51.3 & **71.2** & 55.7 & 34.2 & 54.6 & 67.3 \\ StageInteractor & Swin-S & - & **36** & **52.2** & 70.9 & **57.1** & **35.7** & **55.6** & **67.9** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with state-of-the-art query-based detectors on the COCO minival set with various backbones and longer training schedules. \* denotes 900 queries with more sampling points at each stage.

Table 4: The performance of DETR-like detectors on MS COCO minival set with ResNet-50. CSLA: cross-stage label assignment. DCA: dual cross-attention.
## Appendix A Spatial Mixing

The process of applying filter reuse to the spatial mixing is depicted in Fig. 8. It is very similar to the filter reuse in channel mixing, but the reused filter is used in combination with the generated filter rather than in cascade mixing. This combination is performed along the output dimension. In the cascade mixing, the lightweight static linear layers are placed between the activation function and the dynamic mixing. Apart from static channel mixing, the linear layers can also achieve _efficient_ spatial mixing, as shown in Code 1. Specifically, we split the sampling points into \(K\) groups, and perform an affine transformation within and across groups, like [7]. The parameter cost of this operation is \((K^{2}+(\frac{P_{\mathrm{in}}^{(i)}}{K})^{2})\cdot D_{C}^{2}\). Since the number of sampling points is set as a power of 2 and the number of spatial blocks \(K\) is set close to the square root of the number of sampling points, we use the formula \(K=2^{\lfloor\log_{2}\sqrt{P_{\mathrm{in}}^{(i)}}\rfloor}\) for calculation. Thus, an upper bound on the parameter cost is \(O(3P_{\mathrm{in}}^{(i)}D_{C}^{2})\), and this module is still more lightweight than those related to dynamic filter generation.

## Appendix B Feature Sampling

For the feature sampling, according to [17, 70, 11], we first generate a set of sampling points via content vectors, and then use these points to capture the desired image features with bilinear interpolation. Since the sampling points are organized into \(K\) groups, the feature sampler is correspondingly designed to generate points in groups. The PyTorch-like pseudo-code is illustrated in Code 2. Specifically, we first use the content vectors to generate two sets of offsets to the positional vectors by linear layers. Then, the offsets are formed into the sampling points to extract features (a minimal sketch is given below).
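The released pseudo-code (Code 2) is not reproduced in this version of the text; the following is a minimal sketch of the grouped two-stage sampling described above. The class name, tensor shapes, the use of a single feature level, and the requirement that \(P\) be divisible by \(K\) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedTwoStageSampler(nn.Module):
    """Stage 1 places K group centers around the query position; stage 2 places
    P/K points around each group center. Features are read by bilinear
    interpolation (grid_sample) on one feature map for brevity."""

    def __init__(self, d_query, num_groups, num_points):
        super().__init__()
        self.k, self.p = num_groups, num_points
        self.off1 = nn.Linear(d_query, num_groups * 2)   # group-center offsets (dxy_1)
        self.off2 = nn.Linear(d_query, num_points * 2)   # per-point offsets (dxy_2)

    def forward(self, content, xy, feat):
        # content: (B, N, d_query); xy: (B, N, 2) in [-1, 1]; feat: (B, C, H, W)
        b, n, _ = content.shape
        centers = xy.unsqueeze(2) + self.off1(content).view(b, n, self.k, 2)
        pts = centers.repeat_interleave(self.p // self.k, dim=2) \
              + self.off2(content).view(b, n, self.p, 2)            # (B, N, P, 2)
        # grid_sample treats the (N, P) layout as the output grid
        return F.grid_sample(feat, pts, align_corners=False)        # (B, C, N, P)
```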
## Appendix C Additional Ablation Studies

**The modules in the cascade mixing.** Both the reused heavy dynamic filters and the lightweight static linear layers are crucial to our method. As shown in Tab. 13, only when these two mixing approaches are combined can the large performance gain be achieved. Moreover, as shown in Tab. 14, we find that inserting static channel-spatial aggregation into the lightweight linear layers is more beneficial than solely performing channel or spatial mixing.

\begin{table} \begin{tabular}{c c|c c c c c c} \hline \hline Dynamic & Static & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline & & 44.1 & 62.3 & 47.6 & 25.5 & 47.5 & 60.3 \\ & ✓ & 43.5 & 61.7 & 46.8 & 25.3 & 47.0 & 59.0 \\ ✓ & & 43.9 & 62.2 & 47.6 & 26.6 & 46.9 & 60.8 \\ ✓ & ✓ & **44.8** & **63.0** & **48.4** & **27.5** & **48.0** & **61.3** \\ \hline \hline \end{tabular} \end{table} Table 13: The modules in the cascade mixing.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline Static Mixing & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\ \hline & 43.9 & 62.2 & 47.6 & 26.6 & 46.9 & 60.8 \\ Channel & 44.0 & 62.3 & 47.8 & 26.2 & 47.0 & 61.1 \\ Spatial & 43.7 & 61.8 & 47.2 & 25.7 & 47.0 & 59.9 \\ Channel-spatial & **44.8** & **63.0** & **48.4** & **27.5** & **48.0** & **61.3** \\ \hline \hline \end{tabular} \end{table} Table 14: The type of static mixing in our detector.

Figure 8: The overview of our spatial dynamic mixing.

**Feature sampling.** Different from the feature sampling in [17], the sampler in our detector is required to generate points in groups. Therefore, in this part, we explore whether the original feature sampling method (_i.e_., directly generating all sampling points) is feasible. As shown in Tab. 15, we report the results of our detector with vanilla feature sampling in the first line. Compared to the first line, the results in the last line show that our sampling is more compatible with our detector than 3D feature sampling. To find out whether the weight initialization of the 3D feature sampler causes this phenomenon, we modify the initialization of the sampler so that its outputs at the first iteration are identical to those of our sampler, and report the corresponding performance in the second line. The results are still worse than our sampling. Therefore, we speculate that our two-stage sampler is consistent with our dynamic mixing, thereby boosting the performance.

**The threshold of IoU.** As shown in Tab. 16, we find that using a threshold of 0.5 to select proper labels based on IoU is the best choice.

## Appendix D Duplicate Removal

According to [47], the strict one-to-one label assignment can ensure that an object detector has the ability to remove duplicate predictions. However, our cross-stage label assigner does not strictly follow one-to-one matching even in the last few stages, _i.e_., it has the potential to assign one ground-truth object to multiple predicted boxes in the classification task. Therefore, we explore whether our label assigner influences the performance of duplicate removal in query-based object detectors. As shown in Tab. 17, the results show that the performance of our detector is relatively stable on AP with or without NMS. We believe this is because the coordinates of most predicted boxes in the last few stages change little, and the operation of _gathering-and-selecting_ labels in our assigner is performed adaptively.

## Appendix E MS COCO Test

As shown in Tab. 18, we report the performance of StageInteractor on the COCO test-dev set. Here, the performance is evaluated with the same models that are used for the comparison with other state-of-the-art query-based detectors. Because the labels of the COCO test-dev set are not publicly available, the evaluation is performed on the online server.

## Appendix F Analysis about dynamic channel mixing

In vanilla AdaMixer [17], the FLOPs for the channel mixing are \(B\times N\times G\times P_{\mathrm{in}}^{(i)}\times(2D_{C}-1)\times D_{C}\), whereas the FLOPs for generating a dynamic channel filter are \(B\times N\times G\times(2D-1)\times D_{C}\times D_{C}\). Therefore, the ratio between these two FLOP counts is: \[\frac{B\times N\times G\times(2D-1)\times D_{C}\times D_{C}}{B\times N\times G\times P_{\mathrm{in}}^{(i)}\times(2D_{C}-1)\times D_{C}}\approx 8. \tag{4}\] Therefore, generating channel filters consumes more computation than performing the channel mixing itself.
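As a sanity check of this ratio, the following back-of-the-envelope computation uses placeholder values for \(D\), \(D_{C}\) and \(P_{\mathrm{in}}^{(i)}\) in the spirit of AdaMixer-like settings; they are assumptions, not the paper's configuration.

```python
# Eq. (4): ratio of filter-generation FLOPs to channel-mixing FLOPs.
# D: content dimension, D_C: channel dimension, P_in: sampled points per group.
D, D_C, P_in = 256, 64, 32                     # placeholder values

filter_gen_flops = (2 * D - 1) * D_C * D_C     # generating one channel filter
mixing_flops = P_in * (2 * D_C - 1) * D_C      # applying it once

print(filter_gen_flops / mixing_flops)         # ~8.05, matching the "about 8" claim
```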
## Appendix G Qualitative Analysis

To verify the discriminability of our detector, we use t-SNE [53] visualization for the query features of various models. As depicted in Fig. 9, we select some representative categories with corresponding features to show the effectiveness of our structures. Compared with Fig. 9(a) and Fig. 9(b), the distance between each group of categories is wider in Fig. 9(c), and the points are more separate.

## Appendix H Future Work

The following topics could be explored in the future: (1) the optimal designs of each decoder layer in a query-based detector; (2) more elaborate and effective cross-stage interactions; (3) the theoretical properties and the essence of cascade structures.

## Appendix I Societal Impact

Object detection is a classical vision task and we adopt the open dataset MS COCO [33], so there is no negative societal impact if the method is used properly.

## Appendix J Model Implementation Details

**Hyper-parameters.** The cross-stage label assignment is performed on each stage, and its application scope is \([i-1,L]\). The threshold for selecting labels is set to 0.5. The reuse of dynamic filters is not performed on the first two decoder layers; in the other stages, all the generated filters for channel mixing are reused. The number of spatial blocks \(K\) is set close to the square root of the number of sampling points, _i.e_., we use the formula \(K=2^{\lfloor\log_{2}\sqrt{P_{\mathrm{in}}^{(i)}}\rfloor}\) for calculation. Other parameters of our model are in line with the vanilla AdaMixer [17] and the DETRs [13].

**Initialization.** Following [17], the initial weights of the linear layers generating dynamic filters are set to zero, and the biases of these linear layers are initialized as expected. The initial weights of the linear layers in the feature sampler are also set to zero, and their biases are initialized as follows: (1) the bias corresponding to `dxy_1` in Code 2 is uniformly initialized within \([-0.5,0.5]\); (2) the one corresponding to `dxy_2` is uniformly initialized within \([-\frac{0.5}{\sqrt{2}},\frac{0.5}{\sqrt{2}}]\); (3) the parts corresponding to `dz_1` and `dz_2` are initialized as zeros. The initialization of the other modules is set following [17, 13].

**Loss and optimization.** Focal loss [32] serves as the classification loss. GIoU loss [44] and the \(\ell_{1}\) loss serve as the localization loss. The weighted sum of these losses is used for training, and the loss weights are in line with those of AdaMixer in the mmdetection codebase [5] and those of DETRs in the DETREX codebase [13]. AdamW [37] is taken as the optimizer.
2308.12217
Funnel MPC for nonlinear systems with arbitrary relative degree
The Model Predictive Control (MPC) scheme Funnel MPC enables output tracking of smooth reference signals with prescribed error bounds for nonlinear multi-input multi-output systems with stable internal dynamics. Earlier works achieved the control objective for systems with relative degree restricted to one or incorporated additional feasibility constraints in the optimal control problem. Here we resolve these limitations by introducing a modified stage cost function relying on a weighted sum of the tracking error derivatives. The weights need to be sufficiently large and we state explicit lower bounds. Under these assumptions we are able to prove initial and recursive feasibility of the novel Funnel MPC scheme for systems with arbitrary relative degree - without requiring any terminal conditions, a sufficiently long prediction horizon or additional output constraints.
Thomas Berger, Dario Dennstädt
2023-08-23T15:55:37Z
http://arxiv.org/abs/2308.12217v2
# Funnel MPC for nonlinear systems with arbitrary relative degree

###### Abstract

The Model Predictive Control (MPC) scheme Funnel MPC enables output tracking of smooth reference signals with prescribed error bounds for nonlinear multi-input multi-output systems with stable internal dynamics. Earlier works achieved the control objective for systems with relative degree restricted to one or incorporated additional feasibility constraints in the optimal control problem. Here we resolve these limitations by introducing a modified stage cost function relying on a weighted sum of the tracking error derivatives. The weights need to be sufficiently large and we state explicit lower bounds. Under these assumptions we are able to prove initial and recursive feasibility of the novel Funnel MPC scheme for systems with arbitrary relative degree - without requiring any terminal conditions, a sufficiently long prediction horizon or additional output constraints.

keywords: model predictive control, funnel control, reference tracking, nonlinear systems, initial feasibility, recursive feasibility

## 1 Introduction

Model Predictive Control (MPC) is a nowadays widely used control technique which has seen various applications, see e.g. [20]. It is applicable to nonlinear multi-input multi-output systems and able to take state and control constraints directly into account. MPC relies on the iterative solution of finite horizon Optimal Control Problems (OCP), see e.g. [21; 14]. Solvability of the OCP at any particular time instance is essential for the successful application of MPC. Incorporating suitably designed terminal conditions (costs and constraints) in the optimization problem is an often used method to guarantee _initial and recursive feasibility_, meaning that solvability of the OCP at a particular instance in time automatically implies that the OCP can be solved at the successor time instance. However, the computational effort for solving the OCP and finding initially feasible control signals increases significantly with the introduction of such (artificial) terminal conditions. Moreover, the domain of admissible controls for MPC might shrink substantially, see e.g. [10; 13]. Alternative methods relying on controllability conditions, e.g. cost controllability [11], require a sufficiently long prediction horizon, see e.g. [8; 12]. Especially in the presence of time-varying state and output constraints these techniques are considerably more involved, see e.g. [19]. Funnel MPC (FMPC) was proposed in [5] to overcome these restrictions. It allows for output reference tracking such that the tracking error evolves within predefined (time-varying) performance bounds. While in [5] output constraints were incorporated in the OCP, it was shown in the successor work [2] that for a class of systems with relative degree one and, in a certain sense, input-to-state stable internal dynamics, these constraints are superfluous. Utilizing a "funnel-like" stage cost, which penalizes the tracking error and becomes infinite when approaching predefined boundaries, guarantees initial and recursive feasibility - without the necessity to impose additional terminal conditions or requirements on the length of the prediction horizon. FMPC is inspired by funnel control, which is an adaptive feedback control technique of high-gain type first proposed in [16]; see also the recent work [4] for a comprehensive literature overview.
The funnel controller is inherently robust and allows for output tracking with prescribed performance guarantees for a fairly large class of systems, solely invoking structural assumptions. In contrast to MPC, funnel control does not use a model of the system. The control input signal is solely determined by the instantaneous values of the system output. The controller therefore cannot "plan ahead". This often results in unnecessarily high control values and a rapidly changing control signal with peaks. Compared to this, by utilizing a system model, FMPC exhibits a significantly better controller performance in numerical simulations, see [2; 5]. A direct combination of both control techniques, which allows for the application of FMPC in the presence of disturbances and even a structural plant-model mismatch, was recently proposed in [3]. This approach was further extended in [17] by a learning component which realizes online learning of the model to allow for a steady improvement of the controller performance over time. Nevertheless, the results of [2; 3; 17] are still restricted to the case of systems with relative degree one. Utilizing so-called feasibility constraints in the optimization problem and restricting the class of admissible funnel functions, the case of arbitrary relative degree was considered in [1]. As in previous results, neither terminal conditions nor requirements on the length of the prediction horizon are imposed. But then again, these feasibility constraints lead to an increased computational effort, and they depend on a number of design parameters which are not easy to determine. Furthermore, the cost functional used in [1] is rather complex (using several auxiliary error variables). In the present paper, we resolve these problems and propose a novel cost functional to extend FMPC to systems with arbitrary relative degree. We further enlarge the system class considered in previous works to encompass systems with nonlinear time delays and potentially infinite-dimensional internal dynamics. Similar to FMPC for relative degree one systems, only the distance of one error variable to the funnel boundary is penalized, and no feasibility constraints are required.

### Nomenclature

\(\mathds{N}\) and \(\mathds{R}\) denote natural and real numbers, respectively. \(\mathds{N}_{0}:=\mathds{N}\cup\{0\}\) and \(\mathds{R}_{\geq 0}:=[0,\infty)\). \(\|x\|:=\sqrt{\langle x,x\rangle}\) denotes the Euclidean norm of \(x\in\mathds{R}^{n}\). \(\|A\|\) denotes the induced operator norm \(\|A\|:=\sup_{\|x\|=1}\|Ax\|\) for \(A\in\mathds{R}^{n\times n}\). \(\mathrm{GL}_{n}(\mathds{R})\) is the group of invertible \(\mathds{R}^{n\times n}\) matrices. \(\mathcal{C}^{p}(V,\mathds{R}^{n})\) is the linear space of \(p\)-times continuously differentiable functions \(f:V\to\mathds{R}^{n}\), where \(V\subset\mathds{R}^{m}\) and \(p\in\mathds{N}_{0}\cup\{\infty\}\). \(\mathcal{C}(V,\mathds{R}^{n}):=\mathcal{C}^{0}(V,\mathds{R}^{n})\). On an interval \(I\subset\mathds{R}\), \(L^{\infty}(I,\mathds{R}^{n})\) denotes the space of measurable and essentially bounded functions \(f:I\to\mathds{R}^{n}\) with norm \(\|f\|_{\infty}:=\operatorname*{ess\,sup}_{t\in I}\|f(t)\|\), \(L^{\infty}_{\mathrm{loc}}(I,\mathds{R}^{n})\) the set of measurable and locally essentially bounded functions, and \(L^{p}(I,\mathds{R}^{n})\) the space of measurable and \(p\)-integrable functions with norm \(\|\cdot\|_{L^{p}}\) and with \(p\geq 1\).
Furthermore, \(W^{k,\infty}(I,\mathds{R}^{n})\) is the Sobolev space of all \(k\)-times weakly differentiable functions \(f:I\to\mathds{R}^{n}\) such that \(f,\ldots,f^{(k)}\in L^{\infty}(I,\mathds{R}^{n})\).

### System class

We consider nonlinear control affine multi-input multi-output systems of the form
\[\begin{split} y^{(r)}(t)&=f\big(\mathbf{T}(y,\ldots,y^{(r-1)})(t)\big)+g\big(\mathbf{T}(y,\ldots,y^{(r-1)})(t)\big)u(t),\\ y|_{[t_{0}-\sigma,t_{0}]}&=y^{0}\in\mathcal{C}^{r-1}([t_{0}-\sigma,t_{0}],\mathds{R}^{m}),\quad\text{if }\sigma>0,\\ \big(y(t_{0}),\ldots,y^{(r-1)}(t_{0})\big)&=y^{0}\in\mathds{R}^{rm},\quad\text{if }\sigma=0,\end{split}\tag{1}\]
with \(t_{0}\geq 0\), "memory" \(\sigma\geq 0\), functions \(f\in\mathcal{C}(\mathds{R}^{q},\mathds{R}^{m})\), \(g\in\mathcal{C}(\mathds{R}^{q},\mathds{R}^{m\times m})\), and an operator \(\mathbf{T}\). The operator \(\mathbf{T}\) is causal, locally Lipschitz and satisfies a bounded-input bounded-output property; it is characterised in detail in the following definition.

**Definition 1.1**.: For \(n,q\in\mathds{N}\) and \(\sigma\geq 0\), the set \(\mathcal{T}^{n,q}_{\sigma}\) denotes the class of operators \(\mathbf{T}:\mathcal{C}([-\sigma,\infty),\mathds{R}^{n})\to L^{\infty}_{\mathrm{loc}}(\mathds{R}_{\geq 0},\mathds{R}^{q})\) for which the following properties hold:

* _Causality_: \(\forall\,y_{1},y_{2}\in\mathcal{C}([-\sigma,\infty),\mathds{R}^{n})\ \forall\,t\geq 0\): \[y_{1}|_{[-\sigma,t]}=y_{2}|_{[-\sigma,t]}\ \Longrightarrow\ \mathbf{T}(y_{1})|_{[0,t]}=\mathbf{T}(y_{2})|_{[0,t]}.\]
* _Local Lipschitz continuity_: \(\forall\,t\geq 0\ \forall\,y\in\mathcal{C}([-\sigma,t];\mathds{R}^{n})\ \exists\,\Delta,\delta,c>0\ \forall\,y_{1},y_{2}\in\mathcal{C}([-\sigma,\infty);\mathds{R}^{n})\) with \(y_{1}|_{[-\sigma,t]}=y_{2}|_{[-\sigma,t]}=y\) and \(\|y_{1}(s)-y(t)\|<\delta\), \(\|y_{2}(s)-y(t)\|<\delta\) for all \(s\in[t,t+\Delta]\): \[\operatorname*{ess\,sup}_{s\in[t,t+\Delta]}\|\mathbf{T}(y_{1})(s)-\mathbf{T}(y_{2})(s)\|\leq c\,\sup_{s\in[t,t+\Delta]}\|y_{1}(s)-y_{2}(s)\|.\]
* _Bounded-input bounded-output (BIBO)_: \(\forall\,c_{0}>0\ \exists\,c_{1}>0\ \forall\,y\in\mathcal{C}([-\sigma,\infty),\mathds{R}^{n})\): \[\sup_{t\in[-\sigma,\infty)}\|y(t)\|\leq c_{0}\ \Longrightarrow\ \operatorname*{ess\,sup}_{t\geq 0}\|\mathbf{T}(y)(t)\|\leq c_{1}.\]

To achieve the control objective, we introduce auxiliary error variables. Define, for parameters \(k_{1},\ldots,k_{r-1}\in\mathds{R}_{\geq 0}\), the functions \(e_{i}:\mathds{R}^{rm}\to\mathds{R}^{m}\), \(i=1,\ldots,r\), by
\[\begin{split} e_{1}:(\xi_{1},\ldots,\xi_{r})&\mapsto\xi_{1},\\ e_{i+1}:(\xi_{1},\ldots,\xi_{r})&\mapsto e_{i}(S(\xi))+k_{i}e_{i}(\xi),\end{split}\tag{2}\]
for \(i=1,\ldots,r-1\), where
\[S:\mathds{R}^{rm}\to\mathds{R}^{rm},\ (\xi_{1},\ldots,\xi_{r})\mapsto(\xi_{2},\ldots,\xi_{r},0)\]
is the left shift operator.
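To illustrate the recursion (2): for \(r=3\), evaluating the error variables along \((\zeta(t),\dot{\zeta}(t),\ddot{\zeta}(t))\) for a sufficiently smooth signal \(\zeta\) gives
\[e_{2}=\dot{\zeta}+k_{1}\zeta,\qquad e_{3}=\dot{e}_{2}+k_{2}e_{2}=\ddot{\zeta}+(k_{1}+k_{2})\dot{\zeta}+k_{1}k_{2}\zeta,\]
i.e., \(e_{3}\) is obtained by applying the polynomial \(p_{2}(s)=(s+k_{1})(s+k_{2})\), which is Hurwitz for \(k_{1},k_{2}>0\), to \(\zeta\); this is made precise in Remark 1.3 below.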
**Remark 1.3**.: Using the shorthand notation \[\chi(\zeta)(t):=(\zeta(t),\dot{\zeta}(t),\ldots,\zeta^{(r-1)}(t))\in\mathds{R}^{rm}\] for a function \(\zeta\in W^{r,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and \(t\in\mathds{R}_{\geq 0}\), we get \[\begin{split} e_{1}(\chi(\zeta)(t))&=\zeta(t),\\ e_{i+1}(\chi(\zeta)(t))&=\tfrac{\mathrm{d}}{\mathrm{d}t}e_{i}(\chi(\zeta)(t))+k_{i}e_{i}(\chi(\zeta)(t))\end{split}\tag{3}\] for \(i=1,\ldots,r-1\). Furthermore, using the polynomials \(p_{i}(s)=\prod_{j=1}^{i}(s+k_{j})\in\mathds{R}[s]\), the function \(e_{i+1}(\chi(\zeta)(t))\) can be represented as \[e_{i+1}(\chi(\zeta)(t))=p_{i}(\tfrac{\mathrm{d}}{\mathrm{d}t})\zeta(t)\] for \(i=1,\ldots,r-1\).

## 2 Funnel MPC

We propose for \(\theta\in\mathcal{G}\), design parameter \(\lambda_{u}\in\mathds{R}_{\geq 0}\), and functions \(e_{1},\ldots,e_{r}\) as defined in (2) with parameters \(k_{i}>0\) for \(i=1,\ldots,r-1\), the _stage cost function_ \(\ell_{\theta}:\mathds{R}_{\geq 0}\times\mathds{R}^{rm}\times\mathds{R}^{m}\to\mathds{R}\cup\{\infty\}\) defined by
\[\ell_{\theta}(t,\xi,u)=\begin{cases}\dfrac{\|e_{r}(\xi)\|^{2}}{\theta(t)^{2}-\|e_{r}(\xi)\|^{2}}+\lambda_{u}\|u\|^{2},&\|e_{r}(\xi)\|\neq\theta(t),\\ \infty,&\text{else.}\end{cases}\tag{4}\]

**Algorithm 2.1** (Funnel MPC).:

**Given:** System (1), reference signal \(y_{\mathrm{ref}}\in W^{r,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\), funnel function \(\theta\in\mathcal{G}\), input saturation level \(M>0\), initial data \(y^{0}\in\mathcal{C}^{r-1}([t_{0}-\sigma,t_{0}],\mathds{R}^{m})\) if \(\sigma>0\) or \(y^{0}\in\mathds{R}^{rm}\) if \(\sigma=0\), and stage cost function \(\ell_{\theta}\) as in (4).

**Set** the time shift \(\delta>0\), the prediction horizon \(T\geq\delta\), and initialize the current time \(\hat{t}:=t_{0}\).

**Steps:**

(i) Obtain a measurement of the output \(y\) of (1) on the interval \([\hat{t}-\sigma,\hat{t}]\) and set \(\hat{y}:=y|_{[\hat{t}-\sigma,\hat{t}]}\) if \(\sigma>0\) and \(\hat{y}:=\big(y(\hat{t}),\ldots,y^{(r-1)}(\hat{t})\big)\) if \(\sigma=0\).

(ii) Compute a solution \(u^{\star}\in L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m})\) of the optimal control problem
\[\operatorname*{minimize}_{\substack{u\in L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m}),\\ \|u\|_{\infty}\leq M}}\ \int_{\hat{t}}^{\hat{t}+T}\ell_{\theta}\big(t,x(t;\hat{t},\hat{y},u)-\chi(y_{\mathrm{ref}})(t),u(t)\big)\,\mathrm{d}t,\tag{5}\]
where \(x(\cdot;\hat{t},\hat{y},u)\) denotes the response of (1) to the control \(u\) with initial data \(\hat{y}\) at initial time \(\hat{t}\).

(iii) Apply the feedback
\[\mu(t,\hat{y}):=u^{\star}(t),\qquad t\in[\hat{t},\hat{t}+\delta),\tag{6}\]
to the system (1).

(iv) Set \(\hat{t}:=\hat{t}+\delta\) and go to (i).
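The following is a minimal numerical sketch of Algorithm 2.1 for a scalar system with relative degree \(r=1\) and no memory (\(\sigma=0\)), so that \(e_{r}=y-y_{\mathrm{ref}}\). The dynamics, the funnel function, and all numerical choices (piecewise-constant control, explicit Euler integration, a derivative-free optimizer) are illustrative assumptions, not the setting of the results below.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

f = lambda y: -y + np.sin(y)              # assumed scalar dynamics
g = lambda y: 1.0
y_ref = lambda t: np.sin(t)
theta = lambda t: 2.0 * np.exp(-t) + 0.2  # assumed funnel function
M, T, delta, lam_u, K = 5.0, 1.0, 0.1, 0.1, 10  # K control pieces on [t, t+T]

def stage_cost(t, e, u):
    gap = theta(t) ** 2 - e ** 2
    return np.inf if gap <= 0 else e ** 2 / gap + lam_u * u ** 2

def ocp_cost(u_grid, t0, y0):
    # explicit Euler with 4 substeps per control piece; crude but sufficient here
    ts = np.linspace(t0, t0 + T, K + 1)
    cost, y = 0.0, y0
    for k in range(K):
        h = (ts[k + 1] - ts[k]) / 4
        for t in np.linspace(ts[k], ts[k + 1], 5)[:-1]:
            c = stage_cost(t, y - y_ref(t), u_grid[k])
            if not np.isfinite(c):
                return 1e12               # error left the funnel: reject candidate
            cost += h * c
            y += h * (f(y) + g(y) * u_grid[k])
    return cost

t_hat, y_hat = 0.0, 1.0
for _ in range(5):                        # steps (i)-(iv): measure, solve, apply, shift
    res = minimize(ocp_cost, np.zeros(K), args=(t_hat, y_hat),
                   bounds=[(-M, M)] * K, method="Powell")
    sol = solve_ivp(lambda t, y: f(y[0]) + g(y[0]) * res.x[0],
                    (t_hat, t_hat + delta), [y_hat], max_step=1e-3)
    t_hat, y_hat = t_hat + delta, sol.y[0, -1]
    print(f"t={t_hat:.1f}, tracking error={y_hat - y_ref(t_hat):+.3f}")
```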
**Theorem 2.3**.: _Consider the system (1) with \((f,g,\mathbf{T})\in\mathcal{N}^{m,r}\) and a reference signal \(y_{\mathrm{ref}}\in W^{r,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\). Let \(\psi\in\mathcal{G}\) with associated constants \(\alpha,\beta>0\), let \(\gamma\in(0,1)\) be such that \(\|y^{0}(t_{0})-y_{\mathrm{ref}}(t_{0})\|\leq\gamma^{r}\psi(t_{0})\), and let the parameters \(k_{1},\ldots,k_{r-1}>0\) be sufficiently large in the sense that_
\[\tfrac{1}{2}(1-\gamma)k_{1}\geq\alpha+\frac{\|\dot{e}_{1}^{0}\|}{\gamma^{r-1}\psi(t_{0})}+\frac{1}{\gamma^{r-1}},\qquad\tfrac{1}{2}(1-\gamma)k_{i}\geq\alpha+\gamma\frac{\|\dot{e}_{i}^{0}\|}{\|e_{i}^{0}\|+\frac{\beta}{\alpha\gamma^{i-2}}}+1,\quad i=2,\ldots,r-1,\tag{8}\]
_where \(e_{i}^{0}\) and \(\dot{e}_{i}^{0}\) denote the error variables and their derivatives at the initial time, as defined at the beginning of Section 3. Then there exists \(M>0\) such that the FMPC Algorithm 2.1 with prediction horizon \(T>0\), time shift \(\delta>0\), and stage cost function \(\ell_{\theta}\) with_
\[\theta(t):=\frac{1}{\gamma}\big(\big\|\tfrac{\mathrm{d}}{\mathrm{d}t}e_{r-1}(\chi(y^{0}-y_{\mathrm{ref}})(t_{0}))\big\|+k_{r-1}\big\|e_{r-1}(\chi(y^{0}-y_{\mathrm{ref}})(t_{0}))\big\|\big)\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha\gamma^{r-1}}\tag{9}\]
_is initially and recursively feasible, i.e., at time \(\hat{t}=t_{0}\) and at each successor time \(\hat{t}\in t_{0}+\delta\mathds{N}\) the OCP (5) has a solution. In particular, the closed-loop system consisting of (1) and the FMPC feedback (6) has a (not necessarily unique) global solution \(x:[t_{0}-\sigma,\infty)\to\mathds{R}^{rm}\) with corresponding output \(y=x_{1}\), and the corresponding input is given by_
\[u_{\mathrm{FMPC}}(t)=\begin{cases}\mu(t,y|_{[\hat{t}-\sigma,\hat{t}]}),&\text{if }\sigma>0,\\ \mu\big(t,\big(y(\hat{t}),\ldots,y^{(r-1)}(\hat{t})\big)\big),&\text{if }\sigma=0,\end{cases}\]
_for \(t\in[\hat{t},\hat{t}+\delta)\) and \(\hat{t}\in t_{0}+\delta\mathds{N}\). Furthermore, each global solution \(x\) with corresponding output \(y\) and input \(u_{\mathrm{FMPC}}\) satisfies:_

1. _\(\forall\,t\geq t_{0}:\ \|u_{\mathrm{FMPC}}(t)\|\leq M\),_
2. _\(\forall\,t\geq t_{0}:\ \|y(t)-y_{\mathrm{ref}}(t)\|<\psi(t)\)._

## 3 Proof of the main result

Throughout this section, let the assumptions of Theorem 2.3 hold. Then set \(e_{i}^{0}:=e_{i}(\chi(y^{0}-y_{\mathrm{ref}})(t_{0}))\) and \(\dot{e}_{i}^{0}:=\frac{\mathrm{d}}{\mathrm{d}t}e_{i}(\chi(y^{0}-y_{\mathrm{ref}})(t_{0}))\) for \(i=1,\ldots,r-1\), and define \(\psi_{1}:=\psi\) and \(\psi_{2},\ldots,\psi_{r}\) as follows:
\[\psi_{i+1}(t):=\frac{1}{\gamma^{r-i}}\big(\|\dot{e}_{i}^{0}\|+k_{i}\|e_{i}^{0}\|\big)\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha\gamma^{r-1}}\tag{10}\]
for \(t\geq t_{0}\) and \(i=1,\ldots,r-1\). Note that \(\psi_{i}\in\mathcal{G}\) for all \(i=1,\ldots,r\). Further note that \(\psi_{r}=\theta\) as in (9). In order to achieve that the tracking error \(e=y-y_{\mathrm{ref}}\) evolves within the funnel \(\mathcal{F}_{\psi}\), we address the problem of ensuring that, for all \(t\geq t_{0}\), \(\chi(e)(t)\) is an element of the set
\[\mathcal{D}_{t}^{r}:=\{\,\xi\in\mathds{R}^{rm}\ \mid\ \|e_{i}(\xi)\|<\psi_{i}(t),\ i=1,\ldots,r\,\}\,.\tag{11}\]
By construction of \(\psi_{i}\) and (3) we have
\[\|e_{i}^{0}\|\leq\|\dot{e}_{i-1}^{0}\|+k_{i-1}\|e_{i-1}^{0}\|<\psi_{i}(t_{0})\]
for all \(i=2,\ldots,r\), and by assumption we have
\[\|e(t_{0})\|\leq\gamma^{r}\psi(t_{0})<\psi_{1}(t_{0}).\]
Therefore, \(\chi(y^{0}-y_{\mathrm{ref}})(t_{0})\in\mathcal{D}_{t_{0}}^{r}\). We define the set of all functions \(\zeta\in\mathcal{C}^{r-1}([t_{0}-\sigma,\infty),\mathds{R}^{m})\) which coincide with \(y^{0}\) on the interval \([t_{0}-\sigma,t_{0}]\) and for which \(\chi(\zeta-y_{\mathrm{ref}})(t)\in\mathcal{D}_{t}^{r}\) holds on the interval \([t_{0},\tau)\) for some \(\tau\in(t_{0},\infty]\) as follows; recall that if \(\sigma=0\), then we identify \((y^{0})^{(i-1)}(t_{0})=y_{i}^{0}\) for \(i=1,\ldots,r\):
\[\mathcal{Y}_{\tau}^{r}:=\left\{\zeta\in\mathcal{C}^{r-1}([t_{0}-\sigma,\infty),\mathds{R}^{m})\ \middle|\ \begin{array}{l}\chi(\zeta|_{[t_{0}-\sigma,t_{0}]})=\chi(y^{0}),\\ \forall\,t\in[t_{0},\tau):\ \chi(\zeta-y_{\mathrm{ref}})(t)\in\mathcal{D}_{t}^{r}\end{array}\right\}.\]
**Lemma 3.1**.: _Consider the system (1) with \((f,g,\mathbf{T})\in\mathcal{N}^{m,r}\). Let \(\psi_{i}\in\mathcal{G}\), for \(i=1,\ldots,r\), with parameters \(k_{i}>0\) for \(i=1,\ldots,r-1\). Further, let \(y_{\mathrm{ref}}\in W^{r,\infty}(\mathds{R}_{\geq 0},\mathds{R}^{m})\) and \(y^{0}\in\mathcal{C}^{r-1}([t_{0}-\sigma,t_{0}],\mathds{R}^{m})\) if \(\sigma>0\) or \(y^{0}\in\mathds{R}^{rm}\) if \(\sigma=0\), with \(\chi(y^{0}-y_{\mathrm{ref}})(t_{0})\in\mathcal{D}_{t_{0}}^{r}\), where we identify \((y^{0})^{(i-1)}(t_{0})=y_{i}^{0}\) for \(i=1,\ldots,r\) if \(\sigma=0\). Then, there exist constants \(f_{\mathrm{max}},g_{\mathrm{max}}>0\) such that for all \(\tau\in(t_{0},\infty]\), \(\zeta\in\mathcal{Y}_{\tau}^{r}\), and \(t\in[t_{0},\tau)\):_
\[f_{\mathrm{max}}\geq\big\|f(\mathbf{T}(\chi(\zeta))|_{[t_{0},\tau)})\big\|_{\infty},\qquad g_{\mathrm{max}}\geq\big\|g(\mathbf{T}(\chi(\zeta))|_{[t_{0},\tau)})^{-1}\big\|_{\infty}.\]

Proof.: We prove the lemma by adapting the proof of [18, Lem. 1.2] to the given setting. By definition of \(\mathcal{Y}_{\infty}^{r}\) and \(\mathcal{D}_{t}^{r}\), we have for all \(i=1,\ldots,r\)
\[\forall\,\zeta\in\mathcal{Y}_{\infty}^{r}\ \forall\,t\geq t_{0}:\quad\|e_{i}(\chi(\zeta-y_{\mathrm{ref}})(t))\|<\psi_{i}(t).\]
Due to the definition of the error variables \(e_{i}\) there exists an invertible matrix \(S\in\mathds{R}^{rm\times rm}\) such that
\[\begin{pmatrix}e_{1}(\chi(\zeta-y_{\mathrm{ref}}))\\ \vdots\\ e_{r}(\chi(\zeta-y_{\mathrm{ref}}))\end{pmatrix}=S\chi(\zeta-y_{\mathrm{ref}}).\tag{12}\]
Hence, by boundedness of \(\psi_{i}\) and \(y_{\mathrm{ref}}^{(i)}\) for all \(i=1,\ldots,r\), there exists a compact set \(K\subset\mathds{R}^{rm}\) with
\[\forall\,\zeta\in\mathcal{Y}_{\infty}^{r}\ \forall\,t\geq t_{0}:\quad\chi(\zeta)(t)\in K.\]
Invoking the BIBO property of the operator \(\mathbf{T}\), there exists a compact set \(K_{q}\subset\mathds{R}^{q}\) with \(\mathbf{T}(\xi)([t_{0},\infty))\subset K_{q}\) for all \(\xi\in\mathcal{C}([t_{0},\infty),\mathds{R}^{rm})\) with \(\xi([t_{0},\infty))\subset K\). For arbitrary \(\tau\in(t_{0},\infty)\) and \(\zeta\in\mathcal{Y}_{\tau}^{r}\), we have \(\chi(\zeta)(t)\in K\) for all \(t\in[t_{0},\tau)\). For every element \(\zeta\in\mathcal{Y}_{\tau}^{r}\) the function \(\chi(\zeta)|_{[t_{0}-\sigma,\tau)}\) can be smoothly extended to a function \(\tilde{\zeta}\in\mathcal{C}([t_{0}-\sigma,\infty),\mathds{R}^{m})^{r}\) with \(\tilde{\zeta}(t)\in K\) for all \(t\in[t_{0},\infty)\). We have \(\mathbf{T}(\tilde{\zeta})(t)\in K_{q}\) for all \(t\in\mathds{R}_{\geq 0}\) because of the BIBO property of the operator \(\mathbf{T}\). This implies that
\[f_{\mathrm{max}}:=\max_{z\in K_{q}}\|f(z)\|\quad\text{and}\quad g_{\mathrm{max}}:=\max_{z\in K_{q}}\|g(z)^{-1}\|\]
are the desired bounds.

**Lemma 3.2**.: _Under the assumptions of Theorem 2.3, consider the functions \(\psi_{1},\ldots,\psi_{r}\in\mathcal{G}\) defined in (10). Let \(\hat{t}\geq t_{0}\), \(\omega\in(\hat{t},\infty]\), and let \(\zeta\in\mathcal{C}^{r-1}([\hat{t}-\sigma,\omega),\mathds{R}^{m})\) satisfy \(\chi(\zeta-y_{\mathrm{ref}})(\hat{t})\in\mathcal{D}_{\hat{t}}^{r}\) and \(\|e_{r}(\chi(\zeta-y_{\mathrm{ref}})(t))\|<\psi_{r}(t)\) for all \(t\in[\hat{t},\omega)\). Then \(\chi(\zeta-y_{\mathrm{ref}})(t)\in\mathcal{D}_{t}^{r}\) for all \(t\in[\hat{t},\omega)\)._

Proof.: Abbreviate \(e_{i}(t):=e_{i}(\chi(\zeta-y_{\mathrm{ref}})(t))\). Seeking a contradiction, suppose there exists a minimal \(t^{\star}\in[\hat{t},\omega)\) with \(\chi(\zeta-y_{\mathrm{ref}})(t^{\star})\notin\mathcal{D}_{t^{\star}}^{r}\), and let \(i\in\{1,\ldots,r-1\}\) be the maximal index with \(\|e_{i}(t^{\star})\|=\psi_{i}(t^{\star})\). Set \(\varepsilon:=\sqrt{\frac{1}{2}(1+\gamma)}\in(0,1)\). Due to continuity there exists \(t_{\star}:=\max\left\{\,t\in[\hat{t},t^{\star})\ \middle|\ \left\|\frac{e_{i}(t)}{\psi_{i}(t)}\right\|=\varepsilon\,\right\}\), hence we have that \(\varepsilon\leq\left\|\frac{e_{i}(t)}{\psi_{i}(t)}\right\|\leq 1\) for all \(t\in[t_{\star},t^{\star}]\).
Utilizing (3) and omitting the dependency on \(t\), we calculate for \(t\in[t_{\star},t^{\star}]\):
\[\tfrac{1}{2}\tfrac{\mathrm{d}}{\mathrm{d}t}\left\|\frac{e_{i}}{\psi_{i}}\right\|^{2}=\left\langle\frac{e_{i}}{\psi_{i}},\frac{\dot{e}_{i}\psi_{i}-e_{i}\dot{\psi}_{i}}{\psi_{i}^{2}}\right\rangle=\left\langle\frac{e_{i}}{\psi_{i}},-\left(k_{i}+\frac{\dot{\psi}_{i}}{\psi_{i}}\right)\frac{e_{i}}{\psi_{i}}+\frac{e_{i+1}}{\psi_{i}}\right\rangle\leq-\left(k_{i}+\frac{\dot{\psi}_{i}}{\psi_{i}}\right)\left\|\frac{e_{i}}{\psi_{i}}\right\|^{2}+\left\|\frac{e_{i}}{\psi_{i}}\right\|\frac{\|e_{i+1}\|}{\psi_{i}}\leq-\left(k_{i}+\frac{\dot{\psi}_{i}}{\psi_{i}}\right)\varepsilon^{2}+\frac{\psi_{i+1}}{\psi_{i}},\]
where we used \(\|e_{i+1}(t)\|\leq\psi_{i+1}(t)\) due to the maximality of \(i\). Now we distinguish the two cases \(i=1\) and \(i>1\). For \(i=1\) we find that \(\psi_{1}=\psi\) and by the properties of \(\mathcal{G}\) it follows
\[-\frac{\dot{\psi}(t)}{\psi(t)}\leq\frac{\alpha\psi(t)-\beta}{\psi(t)}\leq\alpha.\]
Furthermore, we have that \(\psi(t)\geq\psi(t_{0})\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha}\) for all \(t\geq t_{0}\). Therefore,
\[\frac{\psi_{2}(t)}{\psi(t)}\leq\frac{1}{\gamma^{r-1}}\frac{\big(\|\dot{e}_{1}^{0}\|+k_{1}\|e_{1}^{0}\|\big)\mathrm{e}^{-\alpha(t-t_{0})}}{\psi(t_{0})\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha}}+\frac{\beta}{\alpha\gamma^{r-1}\big(\psi(t_{0})\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha}\big)}\leq\frac{1}{\gamma^{r-1}}\frac{\|\dot{e}_{1}^{0}\|+k_{1}\|e_{1}^{0}\|}{\psi(t_{0})}+\frac{1}{\gamma^{r-1}}\leq\gamma k_{1}+\frac{\|\dot{e}_{1}^{0}\|}{\gamma^{r-1}\psi(t_{0})}+\frac{1}{\gamma^{r-1}}\]
for all \(t\geq t_{0}\), where we have used that \(\|e(t_{0})\|\leq\gamma^{r}\psi(t_{0})\). Hence we obtain that
\[\tfrac{1}{2}\tfrac{\mathrm{d}}{\mathrm{d}t}\left\|\frac{e_{1}}{\psi}\right\|^{2}\leq-\tfrac{1}{2}(k_{1}-\alpha)(1+\gamma)+\gamma k_{1}+\frac{\|\dot{e}_{1}^{0}\|}{\gamma^{r-1}\psi(t_{0})}+\frac{1}{\gamma^{r-1}}\leq-\tfrac{1}{2}(1-\gamma)k_{1}+\alpha+\frac{\|\dot{e}_{1}^{0}\|}{\gamma^{r-1}\psi(t_{0})}+\frac{1}{\gamma^{r-1}}\leq 0\]
for all \(t\in[t_{\star},t^{\star}]\), where the last inequality follows from (8). Now consider the case \(i>1\). Then we have \(-\frac{\dot{\psi}_{i}(t)}{\psi_{i}(t)}\leq\alpha\) for all \(t\geq 0\) and, invoking that by (3)
\[\|e_{i}^{0}\|\leq\|\dot{e}_{i-1}^{0}\|+k_{i-1}\|e_{i-1}^{0}\|,\]
we find that
\[\frac{\psi_{i+1}(t)}{\psi_{i}(t)}=\frac{\frac{1}{\gamma^{r-i}}\big(\|\dot{e}_{i}^{0}\|+k_{i}\|e_{i}^{0}\|\big)\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha\gamma^{r-1}}}{\frac{1}{\gamma^{r-i+1}}\big(\|\dot{e}_{i-1}^{0}\|+k_{i-1}\|e_{i-1}^{0}\|\big)\mathrm{e}^{-\alpha(t-t_{0})}+\frac{\beta}{\alpha\gamma^{r-1}}}\leq\gamma\frac{\|\dot{e}_{i}^{0}\|+k_{i}\|e_{i}^{0}\|}{\|\dot{e}_{i-1}^{0}\|+k_{i-1}\|e_{i-1}^{0}\|+\frac{\beta}{\alpha\gamma^{i-2}}}+1\leq\gamma k_{i}+\gamma\frac{\|\dot{e}_{i}^{0}\|}{\|e_{i}^{0}\|+\frac{\beta}{\alpha\gamma^{i-2}}}+1\]
for all \(t\geq t_{0}\).
Hence we obtain that
\[\tfrac{1}{2}\tfrac{\mathrm{d}}{\mathrm{d}t}\left\|\frac{e_{i}}{\psi_{i}}\right\|^{2}\leq-\tfrac{1}{2}(k_{i}-\alpha)(1+\gamma)+\gamma k_{i}+\gamma\frac{\|\dot{e}_{i}^{0}\|}{\|e_{i}^{0}\|+\frac{\beta}{\alpha\gamma^{i-2}}}+1\leq-\tfrac{1}{2}(1-\gamma)k_{i}+\alpha+\gamma\frac{\|\dot{e}_{i}^{0}\|}{\|e_{i}^{0}\|+\frac{\beta}{\alpha\gamma^{i-2}}}+1\leq 0\]
for all \(t\in[t_{\star},t^{\star}]\), where the last inequality follows from (8). Summarizing, in each case the contradiction
\[1\leq\|e_{i}(t^{\star})/\psi_{i}(t^{\star})\|^{2}\leq\|e_{i}(t_{\star})/\psi_{i}(t_{\star})\|^{2}=\varepsilon^{2}<1\]
arises, which completes the proof.

For \(\hat{t}\geq t_{0}\), \(M>0\), \(T>0\) and \(\hat{y}\in\mathcal{C}^{r-1}([\hat{t}-\sigma,\hat{t}],\mathds{R}^{m})\) if \(\sigma>0\) or \(\hat{y}\in\mathds{R}^{rm}\) if \(\sigma=0\), we denote by \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\) the set
\[\left\{u\in L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m})\ \middle|\ \begin{array}{l}x(t;\hat{t},\hat{y},u)-\chi(y_{\mathrm{ref}})(t)\in\mathcal{D}_{t}^{r}\\ \text{for all }t\in[\hat{t},\hat{t}+T],\ \|u\|_{\infty}\leq M\end{array}\right\}.\tag{13}\]
This is the set of all \(L^{\infty}\)-controls \(u\) bounded by \(M\) which, if applied to the system (1), guarantee that the error signals \(e_{i}(x(t;\hat{t},\hat{y},u)-\chi(y_{\mathrm{ref}})(t))\) evolve within their respective funnels defined by \(\psi_{i}\) on the interval \([\hat{t},\hat{t}+T]\). We note that the conditions in (13) implicitly require the solution \(x(\cdot;\hat{t},\hat{y},u)\) to exist on the interval \([\hat{t},\hat{t}+T]\).

**Lemma 3.3**.: _Under the assumptions of Theorem 2.3, consider the functions \(\psi_{2},\ldots,\psi_{r}\in\mathcal{G}\) defined in (10). Further, let \(\hat{t}\geq t_{0}\) and \(\hat{y}\in\mathcal{C}^{r-1}([\hat{t}-\sigma,\hat{t}],\mathds{R}^{m})\) if \(\sigma>0\) or \(\hat{y}\in\mathds{R}^{rm}\) if \(\sigma=0\), with \(\chi(\hat{y}-y_{\mathrm{ref}})(\hat{t})\in\mathcal{D}_{\hat{t}}^{r}\). Then there exists \(M>0\) such that \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\neq\emptyset\) for all \(T>0\), and for all \(T_{1},T_{2}>0\), \(\hat{u}\in\mathcal{U}_{T_{1}}(M,\hat{t},\hat{y})\) and \(\tilde{t}\in[\hat{t},\hat{t}+T_{1}]\),_
\[\mathcal{U}_{T_{1}}(M,\hat{t},\hat{y})\neq\emptyset\ \Longrightarrow\ \mathcal{U}_{T_{2}}(M,\tilde{t},\tilde{y})\neq\emptyset,\tag{14}\]
_where \(\tilde{y}:=y(\cdot;\hat{t},\hat{y},\hat{u})|_{[\tilde{t}-\sigma,\tilde{t}]}\) if \(\sigma>0\) and \(\tilde{y}:=x(\tilde{t};\hat{t},\hat{y},\hat{u})\) if \(\sigma=0\)._
\tag{17}\] Omitting the dependency on \(t\), we calculate for \(t\in[\hat{t},\omega)\): \[\frac{\hat{e}_{r}\psi_{r}-e_{r}\dot{\psi}_{r}}{\psi_{r}}=e^{(r)}+ \sum_{j=1}^{r-1}k_{j}e_{j}^{(r-j)}-e_{r}\frac{\dot{\psi}_{r}}{\psi_{r}}\] \[= f(\mathbf{T}(\chi(y)))\!+\!g(\mathbf{T}(\chi(y)))u\!-\!y_{\text{ ref}}^{(r)}\!+\!\sum_{j=1}^{r-1}\!\!k_{j}e_{j}^{(r-j)}\!-\!e_{r}\frac{\dot{\psi}_{r }}{\psi_{r}}\!=\!0.\] Therefore, \[\tfrac{\mathrm{d}}{\mathrm{d}}\tfrac{1}{2}\left\|\frac{e_{r}}{\psi_{r}}\right\| ^{2}=\left\langle\frac{e_{r}}{\psi_{r}},\frac{\hat{e}_{r}\psi_{r}-e_{r}\dot{ \psi}_{r}}{\psi_{r}^{2}}\right\rangle=0.\] Since \(\left\|\frac{e_{r}(t)}{\psi_{r}(\hat{t})}\right\|<1\) by the assumption \(\chi(\hat{y}-y_{\text{ref}})(\hat{t})\in\mathcal{D}_{\hat{t}}^{r}\), this yields \(\left\|\frac{e_{r}(t)}{\psi_{r}(\hat{t})}\right\|<1\) for all \(t\in[\hat{t},\omega)\). This implies, according to Lemma 3.2, \(\chi(y-y_{\text{ref}})(t)\in\mathcal{D}_{t}^{r}\) for all \(t\in[\hat{t},\omega)\), i.e., \(\|e_{i}(t)\|<\psi_{i}(t)\) for all \(i=1,\ldots,r\). Thus, \(\|e_{i}(t)\|\leq\mu_{i}^{0}\) for all \(i=1,\ldots,r\). Invoking boundedness of \(y_{\text{ref}}^{(r)}\), \(i=0,\ldots,r\), and the relation in (12), we may infer that \(x=\chi(y)\) is bounded on \([\hat{t},\omega)\). Hence, \(\omega=\infty\). Furthermore, \(\|f(\mathbf{T}(\chi(y))(t))\|\leq f_{\text{max}}\) and \(\left\|g(\mathbf{T}(\chi(y))(t))^{-1}\right\|\leq g_{\text{max}}\) for all \(t\in[\hat{t},\hat{t}+T]\) according to Lemma 3.1. Finally, using (3) and the definition of \(\mu_{i}^{j}\) it follows that \[\left\|e_{i}^{(j+1)}(t)\right\|\!-\!\left\|e_{i+1}^{(j)}(t)\!-\!k_{i}e_{i}^{(j )}(t)\right\|\!\leq\!\mu_{i+1}^{j}\!+\!k_{i}\mu_{i}^{j}\!=\!\mu_{i}^{j+1} \tag{18}\] inductively for all \(i=1,\ldots,r\) and \(j=0,\ldots,r-i-1\). Thus, by definition of \(u\) and \(M\) we have \(\left\|u\right\|_{\infty}\leq M\) and hence \(u\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\). _Step 3:_ We show implication (14). If, for any \(T_{1}>0\) an arbitrary but fixed control \(\hat{u}\in\mathcal{U}_{T_{1}}(M,\hat{t},\hat{y})\) is applied to the system (1), then \(x(t;\hat{t},\hat{y},u)-\chi(y_{\text{ref}})(t)\in\mathcal{D}_{t}^{r}\) for all \(t\in[\hat{t},\hat{t}+T_{1}]\). If for any \(\tilde{t}\in[\hat{t},\hat{t}+T]\), the system is considered on the interval \([\tilde{t},\hat{t}+T_{2}]\) with \(T_{2}>0\) and initial data \(\tilde{y}:=y(\cdot;\hat{t},\hat{y},u)|_{[\hat{t}-\hat{x},\hat{t}]}\) if \(\sigma>0\) or \(\tilde{y}:=x(\tilde{t},\hat{t},\hat{y},u)\) if \(\sigma=0\), then one can show by a repetition of the arguments in Step 2 that the application of the control \(\tilde{u}\in L^{\infty}([\tilde{t},\hat{t}+T_{2}],\mathds{R}^{m})\) as in (16), _mutatis mutandals_, guarantees \(x(t;\tilde{t},\tilde{y},\tilde{u})-\chi(y_{\text{ref}})(t)\in\mathcal{D}_{t}^{r}\) for all \(t\in[\tilde{t},\tilde{t}+T_{2}]\). Since the prerequisites for Lemmata 3.1 and 3.2 are still satisfied, the control \(\tilde{u}\) is bounded by \(M\) as constructed in Step 1. Thus, \(\tilde{u}\in\mathcal{U}_{T_{2}}(M,\tilde{t},\tilde{y})\neq\emptyset\). **Lemma 3.4**.: _Under the assumptions of Theorem 2.3, consider the functions \(\psi_{2},\ldots,\psi_{r}\in\mathcal{G}\) defined in (10). Further, let \(T>0\ M>0\), \(\hat{t}\geq t_{0}\), and \(\hat{y}\in\mathcal{C}^{r-1}([\hat{t}-\sigma,\hat{t}],\mathds{R}^{m})\) if \(\sigma>0\) or \(\hat{y}\in\mathcal{R}^{r\prime\prime}\) if \(\sigma=0\) such that \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\neq\emptyset\). 
Then, \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\) is equal to the set \(\tilde{\mathcal{U}}_{T}(M,\hat{t},\hat{y})\) defined by_ \[\left\{u\in L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m})\,\middle|\,\begin{array}{l}x(t;\hat{t},\hat{y},u)\text{ exists for all }t\in[\hat{t},\hat{t}+T],\\ \int_{\hat{t}}^{\hat{t}+T}\ell_{\psi_{r}}(t,\zeta(t),u(t))\,\mathrm{d}t<\infty,\\ \zeta(t):=x(t;\hat{t},\hat{y},u)-\chi(y_{\mathrm{ref}})(t)\end{array}\right\}.\] Proof.: We adapt the proof of [2, Thm. 4.3] to the current setting. Given \(u\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\), it follows from the definition of \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\) that \(\zeta(t):=x(t;\hat{t},\hat{y},u)-\chi(y_{\text{ref}})(t)\in\mathcal{D}_{t}^{r}\) for all \(t\in[\hat{t},\hat{t}+T]\). Thus, \[\forall\,t\in[\hat{t},\hat{t}+T]:\ \|e_{r}(\zeta(t))\|<\psi_{r}(t).\] We use the shorthand notation \(e_{r}(t):=e_{r}(\zeta(t))\). Due to continuity of the involved functions, there exists \(\varepsilon\in(0,1)\) with \(\|e_{r}(t)\|^{2}\leq\psi_{r}(t)^{2}-\varepsilon\) for all \(t\in[\hat{t},\hat{t}+T]\). Then, \(\ell_{\psi_{r}}(t,\zeta(t),u(t))\geq 0\) for all \(t\in[\hat{t},\hat{t}+T]\) and \[\int_{\hat{t}}^{\hat{t}+T}|\ell_{\psi_{r}}(t,\zeta(t),u(t))|\,\mathrm{d}t = \int_{\hat{t}}^{\hat{t}+T}\left|\frac{\|e_{r}(t)\|^{2}}{\psi_{r}(t)^{2}-\|e_{r}(t)\|^{2}}+\lambda_{u}\left\|u(t)\right\|^{2}\right|\,\mathrm{d}t \leq \int_{\hat{t}}^{\hat{t}+T}\frac{\|\psi_{r}\|_{\infty}^{2}}{\varepsilon}+\lambda_{u}\left\|u\right\|_{\infty}^{2}\,\mathrm{d}t\leq\left(\frac{\|\psi_{r}\|_{\infty}^{2}}{\varepsilon}+\lambda_{u}M^{2}\right)T<\infty.\] Therefore, \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\) is contained in \(\tilde{\mathcal{U}}_{T}(M,\hat{t},\hat{y})\). Conversely, let \(u\in\tilde{\mathcal{U}}_{T}(M,\hat{t},\hat{y})\) and, seeking a contradiction, assume that \(\bar{t}:=\inf\{t\in[\hat{t},\hat{t}+T]:\|e_{r}(t)\|\geq\psi_{r}(t)\}\) exists. We show
Then, there exists \(u^{*}\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\) such that \(u^{*}\) solves the OCP (5) for \(\theta=\psi_{r}\)._ Proof.: As a consequence of Lemma 3.4, solving the OCP (5) is equivalent to minimizing the function \[J:L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m})\to\mathds{R} \cup\left\{\infty\right\},\] \[u\mapsto \begin{cases}\int_{\hat{t}}^{\hat{t}+T}\ell_{\psi_{r}}(t,\zeta(t),u(t))\,\mathrm{d}t,&u\in\mathcal{U}_{T}(M,\hat{t},\hat{y}),\\ \infty,&\text{else},\end{cases}\] where \(\zeta(t):=x(t;\hat{t},\hat{y},u)-\chi(y_{\mathrm{ref}})(t)\). For every \(u\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\) we have \(\|e_{r}(\zeta(t))\|<\psi_{r}(t)\) for all \(t\in[\hat{t},\hat{t}+T]\), thus \(J(u)\geq 0\). Hence, the infimum \(J^{*}:=\inf_{u\in\mathcal{U}_{T}(M,\hat{t},\hat{y})}J(u)\) exists. Let \((u_{k})\in\left(\mathcal{U}_{T}(M,\hat{t},\hat{y})\right)^{\mathds{N}}\) be a minimizing sequence, meaning \(J(u_{k})\to J^{*}\). By definition of \(\mathcal{U}_{T}(M,\hat{t},\hat{y})\), we have \(\|u_{k}\|\leq M\) for all \(k\in\mathds{N}\). Since \(L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m})\subset L^{2}([\hat{t},\hat{t} +T],\mathds{R}^{m})\), we conclude that \((u_{k})\) is a bounded sequence in the Hilbert space \(L^{2}\), thus \(u_{k}\) converges weakly, up to a subsequence, to a function \(u^{*}\in L^{2}([\hat{t},\hat{t}+T],\mathds{R}^{m})\). Let \((x_{k}):=(x(\cdot;\hat{t};\hat{y};u_{k}))\in\mathcal{C}([\hat{t}-\sigma,\hat{ t}+T],\mathds{R}^{m})^{\mathds{N}}\) be the sequence of associated responses. By \(u_{k}\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\) we have \(x_{k}(t)\in\mathcal{D}_{t}^{*}\) for all \(t\) in \([\hat{t},\hat{t}+T]\). Since the set \(\bigcup_{t\in[\hat{t},\hat{t}+T]}\mathcal{D}_{t}^{*}\) is compact and independent of \(k\in\mathds{N}\), the sequence \((x_{k})\) is uniformly bounded. A repetition of Steps 2-4 of the proof of (2, Thm. 4.6) yields that \((x_{k})\) has a subsequence (which we do not relabel) that converges uniformly to \(x^{*}=x(\cdot;\hat{t};\hat{y};u^{*})\) and that \(\|u^{*}\|_{\infty}\leq M\). Along the lines of Steps 5-7 of the proof of (2, Thm. 4.6) it follows that \(u^{*}\in\mathcal{U}_{T}(M,\hat{t},\hat{y})\) and \(J(u^{*})=J^{*}\). This completes the proof. ### Proof of Theorem 2.3 Choosing the bound \(M>0\) from Lemma 3.3 and utilizing Lemma 3.5 this can be shown by a straightforward adaption of the proof of (2, Thm. 2.10). ## 4 Simulations To demonstrate the application of the FMPC Algorithm 2.1 we consider the example of a mass-spring system mounted on a car from [23]. Consider a car with mass \(m_{1}\), on which a ramp is mounted and inclined by the angle \(\theta\in[0,\frac{\pi}{2})\). On this ramp a mass \(m_{2}\), which is coupled to the car by spring-damper component with spring constant \(k>0\) and damping coefficient \(d>0\), moves frictionless, see Figure 2. A control force \(F=u\) can be applied to the car. The dynamics of the system can be described by the equations \[\begin{bmatrix}m_{1}+m_{2}&m_{2}\cos(\theta)\\ m_{2}\cos(\theta)&m_{2}\end{bmatrix}\begin{pmatrix}\ddot{z}(t)\\ \ddot{s}(t)\end{pmatrix}+\begin{pmatrix}0\\ ks(t)+ds(t)\end{pmatrix}= \begin{pmatrix}u(t)\\ 0\end{pmatrix}, \tag{19}\] where \(z(t)\) is the horizontal position of the car and \(s(t)\) the relative position of the mass on the ramp at time \(t\). 
The output \(y\) of the system is the horizontal position of the mass on the ramp, given by \[y(t)=z(t)+s(t)\cos(\vartheta).\] For the simulation we choose the parameters \(m_{1}=4\), \(m_{2}=1\), \(k=2\), \(d=1\), \(\vartheta=\frac{\pi}{4}\), and initial values \(z(0)=s(0)=\dot{z}(0)=\dot{s}(0)=0\). As outlined in [4], for these parameters the system (19) belongs to the class \(\mathcal{N}^{1,2}\). The objective is tracking of the reference signal \(y_{\mathrm{ref}}(t)=\cos(t)\) within predefined boundaries described by a function \(\psi\in\mathcal{G}\). This means that the tracking error \(e(t)=y(t)-y_{\mathrm{ref}}(t)\) should satisfy \(\|e(t)\|<\psi(t)\) for all \(t\geq 0\). We compare the FMPC Algorithm 2.1 with the original FMPC scheme from [2], for which feasibility has so far been shown only for systems with relative degree one, and the FMPC scheme from [1], which uses feasibility constraints to ensure recursive feasibility for systems with higher relative degree. Since, in comparison to the set \(\mathcal{G}\), the set of admissible funnel functions for the control scheme from [1] is quite restrictive, the same funnel function \(\psi(t)=1/10+11\mathrm{e}^{-2\pi t/20}-7\mathrm{e}^{-3t/2}\) as in [1] was chosen. Straightforward calculations show that \(\alpha=1.5\), \(\beta=\frac{3}{20}\), \(\gamma=0.5\), and \(k_{1}=14\) satisfy the requirements of Theorem 2.3. With these parameters, the funnel function \(\theta\) in (9) is given by \[\theta(t)=28\mathrm{e}^{-3t/2}+\frac{1}{5}.\] For the stage cost function \(\ell_{\theta}\) as in (4) the parameter \(\lambda_{u}=\frac{1}{100}\) has been chosen. Further, the maximal input was limited to \(\left\|u\right\|_{\infty}\leq 20\), i.e., \(M=20\) was chosen. Figure 2: Mass-on-car system. Inserting the definition of the function \(e_{2}\) from (2) with parameter \(k_{1}\), the stage cost thus reads \[\ell_{\theta}(t,\xi_{1},\xi_{2},u)=\frac{\left\|\xi_{2}+k_{1}\xi_{1}\right\|^{2}}{\theta(t)^{2}-\left\|\xi_{2}+k_{1}\xi_{1}\right\|^{2}}+\lambda_{u}\left\|u\right\|^{2}\] for \(\left\|\xi_{2}+k_{1}\xi_{1}\right\|\neq\theta(t)\). With this and \(e(t)=y(t)-y_{\mathrm{ref}}(t)\) the OCP (5) becomes \[\underset{\begin{subarray}{c}u\in L^{\infty}([\hat{t},\hat{t}+T],\mathds{R}^{m}),\\ \left\|u\right\|_{\infty}\leq M\end{subarray}}{\text{minimize}}\int_{\hat{t}}^{\hat{t}+T}\!\!\ell_{\theta}(t,e(t),\dot{e}(t),u(t))\,\mathrm{d}t.\] As in [1] and [2], only step functions with constant step length \(0.04\) are considered in the OCP (5) due to discretisation. The prediction horizon and the time shift are chosen as \(T=0.6\) and \(\delta=0.04\). All simulations are performed with Matlab and the toolkit CasADi on the interval \([0,10]\) and are depicted in Figure 3. The tracking errors resulting from the application of the different FMPC schemes from [1], [2] and Algorithm 2.1 to system (19) are shown in Figure 3(a). The corresponding control signals are displayed in Figure 3(b). It is evident that all three control schemes achieve the control objective: the evolution of the tracking error within the performance boundaries given by \(\psi\). Overall, the performance of all three FMPC schemes is comparable. After \(t=4\) the computed control signals and the corresponding tracking errors of all three control schemes are almost identical. However, FMPC from [1] requires feasibility constraints in the OCP to achieve initial and recursive feasibility; together with the more complex stage cost, this severely increases the computational effort.
Furthermore, the parameters involved in the feasibility constraints are very hard to determine, and usually (as in the simulations performed here) conservative estimates must be used; but then initial and recursive feasibility cannot be guaranteed. Concerning the FMPC scheme from [2], it is still an open problem to show that it is initially and recursively feasible for systems with relative degree larger than one. ## 5 Conclusion In the present paper we proposed a new model predictive control algorithm for a class of nonlinear systems with arbitrary relative degree, which achieves tracking of a reference signal with prescribed performance. The new FMPC scheme resolves the drawbacks of earlier approaches in [2] (no proof of initial and recursive feasibility for relative degree larger than one) and [1] (requirement of feasibility constraints, design parameters difficult to determine, high computational effort). All advantages of these approaches (no terminal costs or conditions, no requirements on the prediction horizon) are retained. Essentially, this solves the open problems formulated in the conclusions of [1, 2]. Compared to previous works on FMPC, the class of nonlinear systems considered here includes systems with nonlinear delays and infinite-dimensional internal dynamics. An interesting question which remains for future research is whether the weighted sum of the tracking error derivatives \(e,\dot{e},\ldots,e^{(r-1)}\) used in the cost functional in (5) can be replaced by a sole error signal \(e\), when instead the prediction horizon \(T\) is chosen sufficiently long.
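To make the discretised OCP above concrete, the following is a minimal Python sketch of a single receding-horizon step for the mass-on-car system (19). It is illustrative only: the paper's simulations were carried out in Matlab with CasADi, whereas this sketch uses CasADi's Python `Opti` interface, replaces the internal integration with a plain explicit Euler scheme for brevity, and the names `rhs` and `fmpc_step` are ours, not the authors'.

```python
import casadi as ca
import numpy as np

# Mass-on-car parameters from Section 4 and stage-cost data.
m1, m2, kspr, damp, vth = 4.0, 1.0, 2.0, 1.0, np.pi / 4
k1, lam_u, M, h, N = 14.0, 1e-2, 20.0, 0.04, 15        # N*h = T = 0.6
theta = lambda t: 28.0 * np.exp(-1.5 * t) + 0.2        # funnel for e_2 from (9)
yref, dyref = lambda t: np.cos(t), lambda t: -np.sin(t)

def rhs(x, u):
    # x = (z, s, zdot, sdot); accelerations obtained from the mass matrix in (19)
    Mm = ca.vertcat(ca.horzcat(m1 + m2, m2 * np.cos(vth)),
                    ca.horzcat(m2 * np.cos(vth), m2))
    acc = ca.solve(Mm, ca.vertcat(u, -kspr * x[1] - damp * x[3]))
    return ca.vertcat(x[2], x[3], acc[0], acc[1])

def fmpc_step(t0, x0):
    # One solve of the discretised OCP (5) on [t0, t0 + T]; FMPC applies the
    # returned first input piece for delta = 0.04 before re-solving.
    opti = ca.Opti()
    u = opti.variable(N)                    # piecewise-constant control
    opti.subject_to(opti.bounded(-M, u, M))
    x, cost = ca.DM(x0), 0
    for i in range(N):
        t = t0 + (i + 1) * h
        x = x + h * rhs(x, u[i])            # explicit Euler, for brevity only
        e = x[0] + x[1] * np.cos(vth) - yref(t)
        ed = x[2] + x[3] * np.cos(vth) - dyref(t)
        w = ed + k1 * e                     # e_2 = edot + k1 * e
        cost += h * (w**2 / (theta(t)**2 - w**2) + lam_u * u[i]**2)
    opti.minimize(cost)
    opti.solver("ipopt")
    return float(opti.solve().value(u[0]))

print(fmpc_step(0.0, [0.0, 0.0, 0.0, 0.0]))
```

A full simulation would apply the returned input on \([t_{0},t_{0}+\delta]\) with \(\delta=0.04\), advance the plant, and re-solve on the shifted horizon.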
2310.07336
Numerical Simulation Study of Neutron-Proton Scattering using Phase Function Method
In this article, we propose a numerical approach to solve quantum mechanical scattering problems, using the phase function method, by considering the neutron-proton interaction as an example. The nonlinear phase equation, obtained from the time-independent Schrodinger equation, is solved using the Runge-Kutta method to obtain S-wave scattering phase shifts for the neutron-proton interaction modeled using the Yukawa and Malfliet-Tjon potentials. While the S-state scattering phase shifts from the Yukawa potential match experimental data only for lower energies, up to 50 MeV, the Malfliet-Tjon potential with a repulsive term gives very good accuracy for all available energies up to 350 MeV. Utilizing these S-wave scattering phase shifts, low energy scattering parameters and the total S-wave cross section have been calculated and found to be consistent with experimental results. This simulation methodology can be easily extended to study scattering phenomena using the phase wave analysis approach in the realms of atomic, molecular, and nuclear physics.
Shikha Awasthi, Anil Khachi, Lalit Kumar, O. S. K. S. Sastri
2023-10-11T09:29:19Z
http://arxiv.org/abs/2310.07336v1
# Numerical Simulation Study of Neutron-Proton Scattering using Phase Function Method ###### Abstract In this article, we propose a numerical approach to solve quantum mechanical scattering problems, using the phase function method, by considering the neutron-proton interaction as an example. The nonlinear phase equation, obtained from the time-independent Schrodinger equation, is solved using the Runge-Kutta method to obtain S-wave scattering phase shifts for the neutron-proton interaction modeled using the Yukawa and Malfliet-Tjon potentials. While the S-state scattering phase shifts from the Yukawa potential match experimental data only for lower energies, up to 50 MeV, the Malfliet-Tjon potential with a repulsive term gives very good accuracy for all available energies up to 350 MeV. Utilizing these S-wave scattering phase shifts, low energy scattering parameters and the total S-wave cross section have been calculated and found to be consistent with experimental results. This simulation methodology can be easily extended to study scattering phenomena using the phase wave analysis approach in the realms of atomic, molecular and nuclear physics. ## I Introduction The wavefunction obtained by solving the time-independent Schrodinger equation for various models of interaction potentials is central to understanding quantum mechanical systems. Problems involving penetration through a rectangular barrier, and bound and scattering states of the finite square well, are routinely performed in an undergraduate quantum mechanics course. Gamow's theory of alpha (\(\alpha\)) particle tunneling, based on an extension of barrier penetration, and the explanation of the bound state of the deuteron (\(d\)) and the neutron-proton (_np_) scattering cross section utilizing the square well, are part of a nuclear physics course at the undergraduate level. Even though Yukawa's meson exchange theory[1] to explain _np_-scattering is discussed at the undergraduate level, the corresponding time-independent Schrodinger equation for the Yukawa potential is not solved. This is mainly due to the non-availability of an analytical solution until recently[2]. Thus, most textbooks on nuclear physics[3; 4; 5] still rely on the square well potential for obtaining the binding energy of the deuteron as well as discussing the _np_ scattering cross section. So, there is a need to introduce a numerical technique to solve the Yukawa potential to provide a better understanding of _np_-scattering. This would equip students with the required skills to solve two-body scattering problems in atomic, nuclear, and particle physics. The main objective of this paper is to present a simple derivation of the phase equation for the S-wave (i.e., \(\ell=0\)) and solve it numerically using the Runge-Kutta method to obtain scattering phase shifts for the neutron-proton (_np_) interaction, by choosing the Yukawa potential and its modified form, the Malfliet-Tjon (MT)[6] potential. Theoretically, scattering cross section data are obtained from scattering phase shifts (SPS) by using either effective range theory or phase shift analysis. The latter approach involves determination of the scattering phase shifts that arise due to scattering of the incoming projectile with the interaction potential of the target nucleus. Theoretical modeling involves proposing a mathematical function to represent the interaction potential, based on an understanding of the underlying physical phenomena, and then solving the radial time-independent Schrodinger equation to obtain the wavefunction.
Mostly, scattering phase shifts are deduced from matching the wave function within the interaction region with that of the asymptotic region, in which the interaction ceases to exist[7; 8; 9]. This wave function approach to determining scattering phase shifts involves solving the radial time-independent Schrodinger equation numerically and is discussed in advanced computational physics books[10], which is beyond the reach of many undergraduate physics students. Experimentally, scattering cross section data are available at different lab energies of the incoming projectile, from which scattering phase shifts are deduced. The scattering phase shift data for nucleon-nucleon [neutron-proton (\(np\)), neutron-neutron (\(nn\)) and proton-proton (\(pp\))] [11], nucleon-nucleus [neutron-deuteron (\(nd\)), proton-deuteron (\(pd\)), neutron-alpha (\(n\alpha\)), proton-alpha (\(p\alpha\))] [12; 13] and nucleus-nucleus [\(\alpha\alpha\)] [14] systems are available in the literature. An alternative approach to determining scattering phase shifts is the variable phase approach, originally proposed by Morse [15], which later came to be known as the phase function method (PFM) [16; 17; 18]. This method has been utilized for obtaining scattering phase shifts for the _np_-interaction with reasonable success [19; 20; 21]. In this approach, the time-independent Schrodinger equation is transformed into a first-order non-linear Riccati-type equation that directly deals with phase shifts for different \(\ell\) values and different energies, without the need for the wavefunction required by other methods. Thus, the phase function method is an easy alternative to traditional methods like the R-matrix method [7], S-matrix method [8], or Jost function method [9]. In this work, we have solved the phase equation numerically by choosing the 5\({}^{th}\)-order Runge-Kutta (RK-5) method [22] to obtain scattering phase shifts for the \({}^{3}S_{1}\) and \({}^{1}S_{0}\) states in the _np_ interaction using the Yukawa and Malfliet-Tjon potentials. By providing the best model parameters for both these potentials, the RK-5 method can be easily implemented using a worksheet environment such as Gnumeric, Excel or LibreOffice-Calc for determining scattering phase shifts at different energies. This would be within the reach of undergraduate physics students. The obtained scattering phase shifts are then utilized to determine the total scattering cross section at various energies and the scattering parameters for both singlet and triplet states. The paper is structured based on the simulation methodology [23], consisting of the following four stages: 1. _Modeling physical system_[24]: in the next section, we describe the scattering process in detail and formulate the mathematical model in terms of the phase equation. Then, the numerical solution is developed in three steps. 2. _Preparation of system_ by choosing appropriate units, region of interest and numerical technique. 3. _Implementation of numerical method_ in a computer. These two stages are briefly touched upon in Section III. This is followed by 4. _Simulation of results and discussion_ in Section IV, and conclusions are given in the last section. ## II Modeling physical system: description and formulation The process of scattering of a neutron with energy \(E_{\ell ab}\) and a proton, which is at rest in the lab frame [25], is represented with position vectors \(\vec{r}_{n}\) and \(\vec{r}_{p}\). Their masses are \(m_{n}\) and \(m_{p}\) respectively. This two-body system is reduced to a one-body system by transformation to the center-of-mass frame.
In this process, the origin is shifted to the center-of-mass co-ordinate \(\vec{R}_{cm}\) and the two particles are replaced by a single particle with reduced mass \(\mu_{D}=(m_{n}m_{p})/(m_{n}+m_{p})\), which has a position vector \(\vec{r}\) that represents the relative distance between the neutron and proton. The projectile energy in the lab frame, \(E_{\ell ab}\), is related to the centre-of-mass energy \(E_{cm}\) by the standard relation [26; 27]: \[E_{cm}=\Big{(}\frac{m_{T}}{m_{P}+m_{T}}\Big{)}E_{\ell ab} = \Big{(}\frac{m_{p}}{m_{n}+m_{p}}\Big{)}E_{\ell ab} \tag{1}\] where \(m_{P}\) is the mass of the projectile (neutron) and \(m_{T}\) is the mass of the target (proton). For simplicity, we drop the subscript henceforth and write the center-of-mass energy as \(E\). The ideal choice for the reference system is spherical polar co-ordinates, \(\vec{r}=(r,\theta,\phi)\), as the potential has central force characteristics. The state of the system for \(\ell=0\) is described by the radial wavefunction \(u_{0}(r)\), which is obtained by solving the radial time-independent Schrodinger equation, given by \[\frac{d^{2}u_{0}(r)}{dr^{2}}+\frac{2\mu}{\hbar^{2}}\big{[}E-V(r)\big{]}u_{0}(r)=0 \tag{2}\] The wavefunction must satisfy \(u_{0}(0)=0\) at \(r=0\). Further, at a distance \(r_{0}\) beyond which \(V(r)\) is zero, the wavefunction and its derivative both need to be continuous. That is, choosing \(u_{a}(r)\) to be the asymptotic solution of Eq. 2 for \(r>r_{0}\), we must have \[u_{0}(r)\big{|}_{r=r_{0}}=u_{a}(r)\big{|}_{r=r_{0}} \tag{3}\] and similarly \[\frac{du_{0}(r)}{dr}\bigg{|}_{r=r_{0}}=\frac{du_{a}(r)}{dr}\bigg{|}_{r=r_{0}} \tag{4}\] These two conditions are combined into a single equation by considering the logarithmic derivative to satisfy the boundary condition, obtained as \[\left.\frac{1}{u_{0}(r)}\frac{du_{0}(r)}{dr}\right|_{r=r_{0}}=\left.\frac{1}{u_{a}(r)}\frac{du_{a}(r)}{dr}\right|_{r=r_{0}} \tag{5}\] ### Concept of Phase-shift: Phase shift techniques are incredibly helpful when analyzing scattering, including nucleon-nucleon scattering. This approach of obtaining the phase shift from the wavefunction is applied to \(np\)-scattering on a square-well potential in Krane[3]. Here, we include the spin-spin interaction[28] and numerically obtain the phase shift for the \({}^{1}S_{0}\) state of the deuteron. This lays the foundation for deriving the phase equation, which is central to the phase function method. The width of the square well for the deuteron system is known to be \(r_{0}=r_{D}=2.1\) fm[29], the radius of the \(np\) system. The depth of the square well can be determined to match the binding energy of the deuteron[3]. The triplet ground state has energy \(E=-2.2\) MeV and the virtual singlet state has energy \(E=77\) keV. Due to spin-dependence, the depths \(V_{T}\) of the triplet (\({}^{3}S_{1}\)) state and \(V_{S}\) of the singlet (\({}^{1}S_{0}\)) state are determined to be \(-32.5\) MeV and \(-10\) MeV respectively[28]. For the \(E>0\) singlet state, the solution within the well region is given by \[u_{0}(r)=A\ \sin(k_{0}r)+B\ \cos(k_{0}r)=A_{0}\sin(k_{0}r+\phi_{0}) \tag{6}\] where \(k_{0}=\sqrt{\frac{2\mu_{D}(E-V_{S})}{\hbar^{2}}}\). The boundary condition at \(r=0\) gives \(\phi_{0}=0\). Similarly, for the asymptotic solution outside the well, where there is no interaction, one obtains \[u_{a}(r)=A_{a}\ \sin(k_{a}r+\delta_{0}) \tag{7}\] where \(k_{a}=\sqrt{\frac{2\mu_{D}E}{\hbar^{2}}}\). The logarithmic derivatives of these two wavefunctions are matched at the boundary \(r=r_{D}\), to obtain \[k_{0}\ \cot(k_{0}r_{D})=k_{a}\ \cot(k_{a}r_{D}+\delta_{0}) \tag{8}\] This equation gives us the value of \(\delta_{0}\).
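As a concrete illustration, Eq. 8 can be evaluated numerically as follows (a hypothetical standalone sketch in Python rather than the authors' Scilab code; the nucleon masses and \(\hbar c\) are standard values, and \(E\) is taken here as the centre-of-mass energy):

```python
import numpy as np

hbarc = 197.329                       # MeV fm
mn, mp = 939.565, 938.272             # neutron and proton masses, MeV
mu = mn * mp / (mn + mp)              # reduced mass of the np system, MeV

def delta0_square_well(E, V=-10.0, rD=2.1):
    """S-wave phase shift of the square well from Eq. 8 (singlet depth V_S)."""
    k0 = np.sqrt(2 * mu * (E - V)) / hbarc    # wave number inside the well
    ka = np.sqrt(2 * mu * E) / hbarc          # asymptotic wave number
    # Solve k0*cot(k0*rD) = ka*cot(ka*rD + delta0) for delta0 and fold the
    # result into (0, pi); the branch of the inverse cotangent is ambiguous.
    return (np.arctan(ka * np.tan(k0 * rD) / k0) - ka * rD) % np.pi

print(np.degrees(delta0_square_well(50.0)))   # phase shift at E = 50 MeV
```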
These two solutions \(u_{0}(r)\) and \(u_{a}(r)\) are plotted in Fig. 1 for \(E=50\) MeV. The free particle solution \(u(r)=\sin(kr)\) is also plotted in the upper part to visually indicate the phase shift accrued due to interaction with the square-well potential. This way of numerically obtaining the phase shift associated with the scattering state of the deuteron is seldom done in any textbook; such an activity would enhance learning of this important concept. ### Model Interaction Potentials: The interaction between the neutron and the proton was originally modeled successfully by Yukawa[1] as \[V_{Y}(r)=-V_{A}\Big{(}\frac{e^{-\mu_{A}r}}{r}\Big{)} \tag{9}\] where \(V_{A}\) is the strength of the interaction in MeV and \(\mu_{A}\) (in fm\({}^{-1}\)) is the screening parameter, which reflects the range of the interaction. We initially solve the phase equation for this potential at various lab energies to show its relevance for low energies. Then, to include the role of higher energies, a repulsive part of similar form is added, as proposed by Malfliet and Tjon[6], given by: \[V_{MT}(r)=-V_{A}\Big{(}\frac{e^{-\mu_{A}r}}{r}\Big{)}+V_{R}\Big{(}\frac{e^{-\mu_{R}r}}{r}\Big{)} \tag{10}\] Figure 1: The square-well potential \(V\) (MeV) is plotted w.r.t. distance \(r\). The singlet state wavefunction \(u_{0}(r)\) within the well is matched with the asymptotic solution \(u_{a}(r)\) outside it at \(r_{D}\). where they chose \(\mu_{R}=2\mu_{A}\). We will refer to it as the Malfliet-Tjon (MT) potential; it has three parameters. Instead of numerically solving for the wavefunction, we introduce the phase function method, which yields scattering phase shifts directly from the interaction potential. ### Phase function method: **Derivation of phase equation** The transformation of the second-order time-independent Schrodinger equation into a Riccati-type first-order non-linear differential equation, i.e., the phase equation for \(\ell=0\), was initially given by Morse and Allis[30]. It was later generalised for higher partial waves by Babikov[16] and Calogero[17]. Here, we present the derivation for the \(\ell=0\) case[30] in a pedagogical manner. We now introduce the following visual explanation for the first time. Consider a general potential, as shown in Fig. 2, to be a combination of extremely narrow square wells of differing depths \(V_{i}(r_{i})\). The wavefunction for the \(i^{th}\) well between \(r_{i}\) and \(r_{i+1}\) would be \(u_{i}(r)=A_{i}\sin(k_{i}r+\phi_{i})\), where \(k_{i}=\sqrt{2\mu(E-V_{i}(r_{i}))/\hbar^{2}}\). Here, \(\phi_{i}\) is determined by matching the boundary conditions at \(r_{i}\), by considering the wavefunctions in the square wells from \(r_{i-1}\) to \(r_{i}\) and \(r_{i}\) to \(r_{i+1}\). Remember that \(\phi_{0}=0\) for the first well between \(r_{0}\) and \(r_{1}\). Then, \(\delta_{i}\) is obtained by matching the wavefunction at \(r_{i+1}\) to the asymptotic solution in Eq. 7. So, one would have \[\left.\frac{1}{u_{i}(r)}\frac{du_{i}(r)}{dr}\right|_{r=r_{i+1}}=\left.\frac{1}{u_{a}(r)}\frac{du_{a}(r)}{dr}\right|_{r=r_{i+1}} \tag{11}\] By substituting the asymptotic solution from Eq. 7 and defining a function \(Z_{i}(r_{i+1})\) as \[Z_{i}(r_{i+1})=\left.\frac{1}{u_{i}(r)}\frac{du_{i}(r)}{dr}\right|_{r=r_{i+1}}=k_{a}\cot(k_{a}r_{i+1}+\delta_{i}(r_{i+1})) \tag{12}\] As the width of the wells tends to 0, the approach moves from discrete to continuous.
Then \(u_{i}(r)\) is replaced by \(u(r)=A\sin(kr+\phi)\), where \(k=\sqrt{2\mu(E-V(r))/\hbar^{2}}\), and \(Z_{i}(r_{i+1})\) becomes \[Z(r)=\frac{1}{u(r)}\frac{du(r)}{dr}=k_{a}\cot(k_{a}r+\delta(r)) \tag{13}\] The derivative of \(Z(r)\), using Eq. 13, is \[\frac{dZ(r)}{dr}=-\frac{(k_{a}^{2}+k_{a}\frac{d\delta}{dr})}{\sin^{2}(k_{a}r+\delta)} \tag{14}\] Now, differentiating \(Z(r)\) using the first part of Eq. 13, i.e., within the potential region, one obtains \[\frac{dZ(r)}{dr}=\frac{d}{dr}\bigg{(}\frac{1}{u(r)}\frac{du(r)}{dr}\bigg{)}=\frac{1}{u(r)}\frac{d^{2}u(r)}{dr^{2}}-\frac{1}{u^{2}(r)}\bigg{(}\frac{du(r)}{dr}\bigg{)}^{2} \tag{15}\] from which we obtain \[\frac{1}{u(r)}\frac{d^{2}u(r)}{dr^{2}}=\frac{dZ}{dr}+Z^{2}(r) \tag{16}\] _Transforming the radial time-independent Schrodinger equation into the phase equation:_ Dividing Eq. 2 by \(u(r)\), we get \[\frac{1}{u(r)}\frac{d^{2}u(r)}{dr^{2}}+\frac{2\mu}{\hbar^{2}}(E-V(r))=0 \tag{17}\] In terms of \(Z(r)\), this is written as \[\frac{dZ(r)}{dr}+Z^{2}(r)=\frac{2\mu}{\hbar^{2}}\big{(}V(r)-E\big{)} \tag{18}\] On substituting from Eqs. 13 and 14, we have \[-\frac{(k_{a}^{2}+k_{a}\frac{d\delta}{dr})}{\sin^{2}(k_{a}r+\delta)}+k_{a}^{2}\frac{\cos^{2}(k_{a}r+\delta)}{\sin^{2}(k_{a}r+\delta)}=\frac{2\mu}{\hbar^{2}}(V(r)-E) \tag{19}\] Figure 2: The MT potential for the \({}^{3}S_{1}\) state, shown as a series of finite rectangular wells. The dotted line shows the build-up of the SPS \(\delta_{i}(r_{i})\) at lab energy \(E=20\) MeV with distance \(r\). Using the relation \(\cos^{2}\theta=1-\sin^{2}\theta\) and \(k_{a}^{2}=\frac{2\mu E}{\hbar^{2}}\), the equation simplifies to the following phase equation \[\boxed{\frac{d\delta(r)}{dr}=-\frac{2\mu}{\hbar^{2}}\frac{V(r)}{k_{a}}\ \sin^{2}(k_{a}r+\delta(r))} \tag{20}\] This is a non-linear first-order differential equation of Riccati type, with initial condition \(\delta(r=0)=0\). Eq. 20 cannot be solved using analytical techniques, and hence we resort to a numerical approach. One can obtain the wavefunctions from the phase shifts [17], but this is not necessary for the determination of experimental scattering parameters or cross sections, and hence is not attempted in this work. ## III Numerical solution ### Preparation of System: **Choice of units:** In nuclear physics, the scale of energies is in MeV and that of distances in fm. Converting J·m to MeV·fm, the value of \(\hbar c\) is 197.329 MeV-fm. **Region of Interest:** The potential has a certain range over which its influence is felt and dies down exponentially to zero. The limiting distance \(r_{f}\), at which \(V(r_{f})\) is effectively zero, is taken to be a little greater than the neutron-proton interaction radius. Typically, the nuclear force saturates within 4 fm, and hence we have chosen \(r_{f}=5\) fm. The interval [0, 5] for \(r\) is sampled uniformly with step-size \(h\) to obtain the best accuracy. **Choice of Numerical technique:** One must consider the three key characteristics of stability, accuracy and efficiency, in that order of importance, while choosing a numerical technique for solving any problem. Typically, \(2^{nd}\)- or \(4^{th}\)-order Runge-Kutta methods can be utilized for solving the phase equation. But, for the \(np\)-interaction, experimental SPS are known to three decimal places (see supplemental material, Table 1 in Appendix). The global error for the RK-4 method is of the order \(h^{4}\), and this could further add up due to propagation errors accrued with the number of iterations.
So, the \(5^{th}\)-order Runge-Kutta method (RK-5) is suggested for obtaining scattering phase shifts for the \(np\)-interaction [20], even though RK-4 should suffice for implementation at the undergraduate level. RK-5 is an interesting technique that involves the determination of 6 slopes, even though only 5 are utilised for updating the solution at the next step. The phase equation can be viewed as \[\frac{d\delta(r)}{dr}=f(r,k,V,\delta) \tag{21}\] where \[f(r,k,V,\delta)=-\frac{2\mu}{\hbar^{2}}\frac{V(r)}{k}\sin^{2}(kr+\delta(r)) \tag{22}\] The method involves calculating the value of \(\delta(r_{i+1})\) by utilizing the previous value at \(r_{i}\), for i = 0, 1,\(\ldots\), n-1. \[\delta(r_{i+1})=\delta(r_{i})+\frac{h}{90}(7F_{1}+32F_{2}+12F_{4}+32F_{5}+7F_{6}) \tag{23}\] where the \(F_{i}\)'s are slopes of the function \(f\) at different points in the interval \([0,h]\). These function evaluations become evident on looking at the algorithm presented during the implementation stage. ### Implementation of the numerical method in a computer This stage involves writing an algorithm or pseudo code as its first step. Broad blocks of the algorithm are identified as: **1.** Initialisation **2.** Potential Definition **3.** Function Definition **4.** RK-5 procedure **5.** Outputs. The Scilab code for determining scattering phase shifts using the RK-5 method has been given on GitHub [31], which clearly delineates all the steps given above. **Optimizing model parameters:** Experimental data for scattering phase shifts have been modified due to newer inputs at extended lab energies and also better accuracies at existing energies. So, one is left with the challenge of obtaining a new set of model parameters that matches the experimental data. There are many optimization algorithms [32] for obtaining the best model parameters for a chosen system. We have given our implementation [33; 34; 11], based on the variational Monte Carlo technique, on GitHub [31]. ### Experimental Observables: **Scattering Properties:** For low energy scattering, the scattering length '\(a\)' and effective range '\(r_{0}\)' can be calculated by using the relation [35] \[k\cot(\delta)=-\frac{1}{a}+0.5r_{0}k^{2} \tag{24}\] Using the scattering phase shifts obtained by numerically solving the phase equation, \(k\cot(\delta)\) was plotted as a function of \(0.5k^{2}\). This results in a straight line, and the scattering parameters \(r_{0}\) and \(a\) are obtained from its slope and intercept respectively. **Partial and Total Cross section:** The partial cross section for the S-wave (i.e., the \(\ell=0\) partial wave) is given by [3]: \[\sigma(k)=\frac{4\pi}{k^{2}}\sin^{2}\delta \tag{25}\] Using the numerically obtained scattering phase shifts for the triplet \({}^{3}S_{1}\) and singlet \({}^{1}S_{0}\) states, we compute the respective scattering cross sections \(\sigma_{t}\) and \(\sigma_{s}\). The total S-wave cross section of \(np\) scattering [3; 29] is calculated as \[\sigma(k)=\frac{3}{4}\sigma_{t}+\frac{1}{4}\sigma_{s} \tag{26}\] ## IV Results and discussions Experimental SPS from R. N. Perez _et al._ (Granada group) [36], along with the 0.1 MeV data point from the Nijmegen database [37], have been considered for both \({}^{3}S_{1}\) and \({}^{1}S_{0}\) states. Model parameters for both the Yukawa and MT interaction potentials have been obtained based on an optimization procedure [11] given on GitHub [31], and are given in Table 1. The corresponding mean absolute percentage errors (MAPE) are also given in Table 1. Utilizing these model parameters for the Yukawa and MT potentials, the phase equation Eq.
20 has been solved using the RK-2, RK-4 and RK-5 methods, for both triplet \({}^{3}S_{1}\) and singlet \({}^{1}S_{0}\) states. Even though the mean absolute percentage errors for the RK-5 and RK-4 methods are essentially identical, for two data points there is a change in the phase shift values at the 3\({}^{rd}\) decimal place. Hence, one can implement the RK-4 method in the lab, as it is a well-known algorithm (a minimal sketch is given after Table 1). The obtained scattering phase shifts are plotted in Fig. 3. The scattering phase shifts from the RK-4 method are similar to those from the RK-5 method. Even for the RK-2 method, the change in scattering phase shifts occurs in the second decimal place, so the differences would not be visible in the plots and hence are not included. It should be emphasised that while CoM energies are used for computing scattering phase shifts, plots are made with laboratory energies for ease of comparison with experimental data. It is seen that the scattering phase shifts for both \({}^{3}S_{1}\) and \({}^{1}S_{0}\) states obtained using the Yukawa potential match the empirical data for laboratory energies up to about 50 MeV. Then, they start to deviate and tend to saturate to values far above the expected ones. On the other hand, the scattering phase shifts obtained by solving the phase equation using the MT potential match the expected data all the way up to 350 MeV. The close match between computed and experimental scattering phase shifts for both \({}^{3}S_{1}\) and \({}^{1}S_{0}\) states using the MT potential with the RK-5, RK-4 and RK-2 methods can further be observed from the data compiled (see supplemental material, Table 1 in Appendix). Plots of the Yukawa and MT potentials using the model parameters are shown in Fig. 4. It is interesting to observe that, using the MT potential, the shapes of both the triplet ground state and the singlet scattering state are very similar except for their depths. In order to determine the low energy scattering parameters using Eq. 24, we have considered energies from \(0.1-10\) MeV, which results in \(k\) values from \(0.035-0.347\) fm\({}^{-1}\). Considering the scattering phase shifts obtained using the MT potential, we have plotted \(k\cot(\delta)\) w.r.t. \(0.5k^{2}\) for both singlet and triplet states. These are shown in Fig. 5. The slopes of their regression lines give the effective range '\(r_{0}\)', and the intercepts are utilized for determining the scattering length '\(a\)'. The obtained values are tabulated alongside experimental ones in Table 2. While there is a good overlap between the ranges of calculated and experimental values [35] for the scattering length '\(a\)', those for the effective range '\(r_{0}\)' are reasonably close. Adding more low energy data points below 0.1 MeV might further improve the determination of these parameters. \begin{table} \begin{tabular}{l c c c c c} Potential & States & \(V_{r}\) (MeV) & \(V_{a}\) (MeV) & \(\mu_{A}\) (\(fm^{-1}\)) & MAPE \% \\ \hline \multirow{3}{*}{Yukawa} & \({}^{3}S_{1}\) & — & 50.25 & 0.37 & 1.65 (up to 50 MeV) \\ & \({}^{1}S_{0}\) & — & 41.79 & 0.61 & 1.52 (up to 50 MeV) \\ \hline \multirow{3}{*}{Malfliet-Tjon} & \({}^{3}S_{1}\) & 9435.57 & 2134.88 & 2.54 & 0.61 (up to 350 MeV) \\ & \({}^{1}S_{0}\) & 6806.60 & 1522.42 & 2.42 & 1.91 (up to 350 MeV) \\ \end{tabular} \end{table} Table 1: Model parameters and mean absolute percentage error (MAPE) for triplet (\({}^{3}S_{1}\)) and singlet (\({}^{1}S_{0}\)) states of _np_ interaction for Yukawa and Malfliet-Tjon potentials. For the Yukawa potential, the MAPE is given up to 50 MeV, since beyond that it increases significantly.
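To reproduce these curves, the phase equation can be integrated in a few lines; the following is a minimal Python sketch (the authors' own implementation is the Scilab code on GitHub [31]; here classical RK-4 is used, which the text notes is sufficient, and the nucleon masses and \(\hbar c\) are standard values, not taken from the paper):

```python
import numpy as np

hbarc = 197.329                        # MeV fm
mn, mp = 939.565, 938.272              # MeV
mu = mn * mp / (mn + mp)               # reduced mass, MeV

def V_MT(r, VA, VR, muA):
    # Malfliet-Tjon potential, Eq. 10, with mu_R = 2*mu_A
    return -VA * np.exp(-muA * r) / r + VR * np.exp(-2 * muA * r) / r

def sps(Elab, VA, VR, muA, h=1e-4, rf=5.0):
    """Scattering phase shift (degrees) from Eq. 20 via classical RK-4."""
    E = (mp / (mn + mp)) * Elab        # lab -> centre-of-mass energy, Eq. 1
    k = np.sqrt(2 * mu * E) / hbarc    # fm^-1
    f = lambda r, d: -(2 * mu / hbarc**2) * V_MT(r, VA, VR, muA) / k \
        * np.sin(k * r + d)**2
    r, delta = 1e-6, 0.0               # initial condition delta(0) = 0
    while r < rf:
        F1 = f(r, delta)
        F2 = f(r + h / 2, delta + h / 2 * F1)
        F3 = f(r + h / 2, delta + h / 2 * F2)
        F4 = f(r + h, delta + h * F3)
        delta += h / 6 * (F1 + 2 * F2 + 2 * F3 + F4)
        r += h
    return np.degrees(delta)

# Triplet (3S1) parameters from Table 1
print(sps(50.0, VA=2134.88, VR=9435.57, muA=2.54))
```

The small step size \(h=10^{-4}\) fm is our choice, reflecting the accuracy discussion above; it is not prescribed by the paper.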
Finally, the partial and total scattering cross sections were calculated from the obtained scattering phase shifts using Eqns. 25 and 26 respectively, for experimental energies ranging from \(0.1-350\) MeV. Figure 3: Scattering phase shifts for (a) the triplet \({}^{3}S_{1}\) _(left)_ and (b) the singlet \({}^{1}S_{0}\) _(right)_ state of _np_ scattering obtained using the Yukawa and Malfliet-Tjon potentials, along with experimental data[36], for lab energies up to 350 MeV. Figure 4: Plots of the Yukawa and MT potentials responsible for the scattering phase shifts observed due to scattering from \({}^{3}S_{1}\) and \({}^{1}S_{0}\) states in the _np_ interaction. The obtained total cross section is plotted along with experimental data[38] in Fig. 6. On extrapolating to \(E=0.000132\) MeV, the total cross section for the S-wave is calculated to be 20.641 b. This is in good agreement with the experimental total deuteron cross section value of 20.491 b [3]. Generally, interaction potentials that explain the experimental scattering phase shifts of \(np\)-scattering are utilized to determine properties of the deuteron, which is a weakly bound nucleus consisting of one neutron and one proton. By solving the radial time-independent Schrodinger equation using a numerical technique [39], we have determined the deuteron binding energy (BE) = \(-\)2.026 MeV for the obtained \({}^{3}S_{1}\) potential. The experimental binding energy of the deuteron is \(-\)2.225 MeV [3]. Similarly, utilizing the potential for \({}^{1}S_{0}\), an energy value of 76 keV has been obtained for the unbound state, which is very close to the expected value of 77 keV. This type of determination of a certain property of a system consisting of projectile and target particles is called an off-shell calculation, and it acts as a cross-confirmation that the obtained interaction potential is consistent with other expected data. \begin{table} \begin{tabular}{c c c c c} States & \(a(fm)\) (exp.) [35] & \(a(fm)\) (calc.) & \(r_{0}(fm)\) (exp.) [35] & \(r_{0}(fm)\) (calc.) \\ \hline \({}^{3}S_{1}\) & 5.397 \(\pm\) 0.011 & 5.534 \(\pm\) 0.032 & 1.727 \(\pm\) 0.013 & 1.705 \(\pm\) 0.012 \\ \({}^{1}S_{0}\) & -23.678 \(\pm\) 0.028 & -24.038 \(\pm\) 0.045 & 2.44 \(\pm\) 0.11 & 2.307 \(\pm\) 0.025 \\ \end{tabular} \end{table} Table 2: Comparison of obtained scattering length ‘\(a\)’ and effective range ‘\(r_{0}\)’ with experimental values. Figure 5: Plots of \(k\cot\delta\) vs \(0.5k^{2}\) for triplet and singlet states, along with regression lines. The proposed method for the determination of scattering phase shifts for \(np\) scattering is also applicable to the study of neutron-deuteron (\(nd\)) [12] and neutron-alpha (\(n\alpha\)) [13] scattering systems. One has to redetermine the model parameters for the chosen interaction potential by fitting simulated scattering phase shifts to the available experimental data for these systems, since changing the interacting particles changes a few physical quantities, i.e., the reduced mass of the system, the interaction potential between the particles, and the scattering phase shifts. Other two-body scattering systems involving charged particles, such as proton-proton (\(pp\)) [11] and \(\alpha\)-\(\alpha\) [14], require introducing a screened Coulomb potential such as the Hulthen potential, a modified form of the Yukawa potential.
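Continuing the sketch above (the same hedges apply; `sps`, `mu`, `mn`, `mp`, and `hbarc` are taken from the previous snippet), the low-energy parameters of Eq. 24 and the cross sections of Eqs. 25 and 26 can be extracted as follows:

```python
import numpy as np

def k_of(Elab):
    # centre-of-mass wave number in fm^-1 for a given lab energy
    return np.sqrt(2 * mu * (mp / (mn + mp)) * Elab) / hbarc

pars = {"3S1": dict(VA=2134.88, VR=9435.57, muA=2.54),   # Table 1
        "1S0": dict(VA=1522.42, VR=6806.60, muA=2.42)}

# Scattering length a and effective range r0 from the straight line of Eq. 24
Elabs = np.linspace(0.1, 10.0, 25)
for state, p in pars.items():
    k = k_of(Elabs)
    d = np.radians([sps(E, **p) for E in Elabs])
    slope, intercept = np.polyfit(0.5 * k**2, k / np.tan(d), 1)
    print(state, "a =", -1.0 / intercept, "fm, r0 =", slope, "fm")

# Total S-wave cross section at one energy (Eqs. 25 and 26); 1 b = 100 fm^2
E = 10.0
sigma = {s: 4 * np.pi / k_of(E)**2 * np.sin(np.radians(sps(E, **p)))**2
         for s, p in pars.items()}
print("sigma_total =", (3 * sigma["3S1"] + sigma["1S0"]) / 4 / 100, "b")
```

Note that \(k\cot\delta\) is computed as `k / np.tan(d)`, and the linear fit returns the slope (\(r_{0}\)) and intercept (\(-1/a\)) of Eq. 24 directly.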
Figure 6: Neutron-proton (\(np\)) experimental[38] and calculated cross section up to \(E=350\) MeV. ## V Conclusion The advantage of the phase function method (PFM), which allows direct determination of scattering phase shifts without recourse to the wavefunction, has been utilized to introduce a phase wave analysis procedure to obtain the scattering cross section. The phase equation for \(\ell=0\), derived from the radial time-independent Schrodinger equation, has been numerically solved using the \(5^{th}\)-order Runge-Kutta (RK-5) method for determining the scattering phase shifts of both triplet and singlet states of the \(np\) interaction. While the attractive Yukawa potential was able to explain the experimental scattering phase shift data for lab energies up to about 50 MeV, the MT potential, consisting of an extra repulsive part, performed well even at higher energies up to 350 MeV. These scattering phase shifts were utilized to obtain the scattering length and effective range for both S-waves, which are found to be in good agreement with experimental values. Finally, the total scattering cross section at various energies has been obtained from the partial scattering cross sections of both S-waves to very good accuracy. The methodology detailed in this paper could easily be extended to study other two-body scattering phenomena in atomic, nuclear and particle physics, and is hopefully within the reach of undergraduate physics students undertaking interesting projects. **Acknowledgement:** We are very thankful to Prof. C. Rangacharyulu, from the University of Saskatchewan, for his inputs in improving the presentation of this paper. **Conflict of Interest:** The authors have no conflicts to disclose.
2303.00561
A method for determining Cartan geometries from the local behavior of automorphisms
For the purpose of determining global properties of Cartan geometries from local information about automorphisms, we introduce a construction for a Cartan geometry that captures the local behavior of a given automorphism near a distinguished element. The result of this construction, which we call the sprawl generated by the automorphism from the distinguished element, is uniquely characterized by a kind of "universal property" that allows us to compare Cartan geometries admitting automorphisms with equivalent local behavior. As example applications, we prove that the only affine structure on a connected manifold admitting an affine transformation with scaling isotropy is the standard one on affine space, and describe how to construct non-flat real projective structures admitting nontrivial automorphisms with higher-order fixed points.
Jacob W. Erickson
2023-03-01T15:05:21Z
http://arxiv.org/abs/2303.00561v3
# A method for determining Cartan geometries from the local behavior of automorphisms ###### Abstract. For the purpose of determining global properties of Cartan geometries from local information about automorphisms, we introduce a construction for a Cartan geometry that captures the local behavior of a given automorphism near a distinguished element. The result of this construction, which we call the sprawl generated by the automorphism from the distinguished element, is uniquely characterized by a kind of "universal property" that allows us to compare Cartan geometries admitting automorphisms with equivalent local behavior. To demonstrate the remarkable effectiveness of the techniques derived from this construction, we use them to completely characterize all almost c-projective structures and all almost quaternionic structures admitting nontrivial automorphisms with higher-order fixed points, as well as all non-degenerate partially integrable almost CR-structures admitting a higher-order fixed point with non-null isotropy. Partially supported by the Brin Graduate Fellowship at the University of Maryland. ## 1. Introduction Up to conformal isomorphism in each dimension greater than two, a celebrated theorem of Ferrand and Obata tells us that there are only two examples of conformal structures of Riemannian signature admitting automorphism groups that act non-properly: the standard conformal sphere and Euclidean space. This result was extended in several cases, until finally Frances generalized it in [6] to all parabolic geometries of real rank one, assuming the curvature satisfies a standard regularity condition. A convenient and concise history of this result can be found in [10]. While there are several obstructions to meaningfully extending this theorem to parabolic geometries of arbitrary real rank, it is still worth asking: how much can we say about the global structure of a Cartan geometry just from the behavior of its automorphisms? For example, there has been recent progress in [7] and [12] on the so-called Lorentzian Lichnerowicz Conjecture, which asks whether a conformal structure of Lorentzian signature on a compact manifold must be conformally flat if its automorphism group does not preserve an underlying Lorentzian metric. Another question of considerable interest is how much the existence of an automorphism with a higher-order fixed point determines the structure of a given parabolic geometry. Local results in this direction are obtained in [3], [8], and [11], suggesting that in certain cases, the existence of a higher-order fixed point guarantees the vanishing of the curvature in some neighborhood. In these papers, corresponding results for global flatness were anticipated, but not obtained without assuming strong conditions on the structure of the geometry like real-analyticity, or in the case of [9], metrizability. Our goal for this paper is to begin establishing general tools with which we can answer these global questions. In order to demonstrate the effectiveness of the tools developed below, we have shown that the existence of higher-order fixed points in certain parabolic geometries almost completely determines the global structure. **Theorem A**.: _Let \((\mathrm{PGL}_{m+1}\,\mathbb{C},P)\) be the model for almost c-projective structures of real dimension \(2m\). 
If \((\mathscr{G},\omega)\) is a Cartan geometry of type \((\mathrm{PGL}_{m+1}\,\mathbb{C},P)\) over a connected smooth manifold \(M\) with a nontrivial automorphism \(\alpha\in\mathrm{Aut}(\mathscr{G},\omega)\) such that \(\alpha(\mathscr{e})\in\mathscr{e}P_{+}\) for some \(\mathscr{e}\in\mathscr{G}\), then \((\mathscr{G},\omega)\) geometrically embeds onto a dense open subset of the Klein geometry \((\mathrm{PGL}_{m+1}\,\mathbb{C},\omega_{\mathrm{PGL}_{m+1}\,\mathbb{C}})\) over \(\mathbb{CP}^{m}\)._ **Theorem B**.: _Let \((\mathrm{PGL}_{m+1}\,\mathbb{H},P)\) be the model for almost quaternionic structures with real dimension \(4m\). If \((\mathscr{G},\omega)\) is a normal Cartan geometry of type \((\mathrm{PGL}_{m+1}\,\mathbb{H},P)\) over a connected smooth manifold \(M\) with a nontrivial automorphism \(\alpha\in\mathrm{Aut}(\mathscr{G},\omega)\) such that \(\alpha(\mathscr{e})\in\mathscr{e}P_{+}\) for some \(\mathscr{e}\in\mathscr{G}\), then \((\mathscr{G},\omega)\) geometrically embeds onto a dense open subset of the Klein geometry \((\mathrm{PGL}_{m+1}\,\mathbb{H},\omega_{\mathrm{PGL}_{m+1}\,\mathbb{H}})\) over \(\mathbb{HP}^{m}\)._ **Theorem C**.: _Let \((\mathrm{PU}(\mathrm{h}_{p,q}),P)\) be the model for partially integrable almost CR-structures with Levi form of signature \((p,q)\). Suppose \((\mathscr{G},\omega)\) is a regular Cartan geometry of type \((\mathrm{PU}(\mathrm{h}_{p,q}),P)\) over a connected smooth manifold \(M\) with an automorphism \(\alpha\in\mathrm{Aut}(\mathscr{G},\omega)\) such that \(\alpha(\mathscr{e})=\mathscr{e}a\) for some \(\mathscr{e}\in\mathscr{G}\) and \(a\in P_{+}\). If \(a\) is non-null, then \((\mathscr{G},\omega)\) geometrically embeds onto a dense open subset of the Klein geometry \((\mathrm{PU}(\mathrm{h}_{p,q}),\omega_{\mathrm{PU}(\mathrm{h}_{p,q})})\)._ The main tool leading to these results is a Cartan geometry that completely captures the local behavior of a given automorphism. We detail the construction of these geometries, called _sprawls_, in Section 4, after recalling some preliminary details in Section 2 and providing modest improvements to certain local results in Section 3. In Section 5, we then prove Theorems A, B, and C as corollaries of a more general theorem. For convenience of later works, we also compute the holonomy group of a sprawl in an appendix. We should remark that this paper will not include the "stitching theorem" mentioned by the author at the 2022 Geometric Structures conference in Strasbourg; this other result, used in conjunction with sprawls, allows us to prove strong results for _real_ projective structures with higher-order fixed points as well. While writing this paper, we realized that a few seemingly innocuous details had been overlooked, and though there is little reason to believe the result is not ultimately still true, the author still needs to verify some key intermediate aspects. Regardless, we felt that the results on sprawls merited their own paper anyway. ## Acknowledgements We would like to specifically thank Karin Melnick for several helpful suggestions and conversations. Additionally, we would like to thank Charles Frances, both for inspiring this paper and for tolerating the author's enthusiasm. ## 2. Preliminaries This section is a way of establishing terminology and notation; it is probably not a good introduction to the topic. We recommend [13] for an actual introduction to Cartan geometries, [4] for an introduction to parabolic geometries, and [5] for an overview of the results on holonomy.
### Relevant model geometries Cartan geometries are modeled on homogeneous geometries in the sense of Klein, though we present these geometries in a way that emphasizes the role of the Lie group as a principal bundle over the homogeneous space. **Definition 2.1**.: A _model_ (or _model geometry_) is a pair \((G,H)\), where \(G\) is a Lie group and \(H\) is a closed subgroup of \(G\) such that \(G/H\) is connected. In this case, \(G\) is called the _model group_ and \(H\) is called the _isotropy_ or _stabilizer subgroup_. Writing \(\mathrm{I}(m):=\mathbb{R}^{m}\rtimes\mathrm{O}(m)\) and \(\mathrm{Aff}(m):=\mathbb{R}^{m}\rtimes\mathrm{GL}_{m}\,\mathbb{R}\), two standard examples of model geometries are \((\mathrm{I}(m),\mathrm{O}(m))\), corresponding to \(m\)-dimensional Euclidean geometry on \(\mathbb{R}^{m}\cong\mathrm{I}(m)/\,\mathrm{O}(m)\), and \((\mathrm{Aff}(m),\mathrm{GL}_{m}\,\mathbb{R})\), corresponding to \(m\)-dimensional affine geometry on \(\mathbb{R}^{m}\cong\mathrm{Aff}(m)/\,\mathrm{GL}_{m}\,\mathbb{R}\). Cartan geometries modeled on these correspond to Riemannian structures and affine structures, respectively. The models relevant to Theorems A, B, and C are all parabolic. **Definition 2.2**.: A model \((G,P)\) is _parabolic_ whenever the model group \(G\) is semisimple and the isotropy \(P\) is a parabolic subgroup. For parabolic models, we get an \(\operatorname{Ad}_{P}\)-invariant filtration of \(\mathfrak{g}\) given by \[\mathfrak{g}=\mathfrak{g}^{-k}\supset\mathfrak{g}^{-k+1}\supset\cdots\supset\mathfrak{g}^{k}\supset\{0\},\] with \(\mathfrak{g}^{0}:=\mathfrak{p}\) and \(\mathfrak{g}^{1}:=\mathfrak{p}_{+}\), where \(\mathfrak{p}_{+}\) is the nilradical of \(\mathfrak{p}\). From this filtration, we can also get a grading \(\sum_{i=-k}^{k}\mathfrak{g}_{i}\) on \(\mathfrak{g}\) with \(\mathfrak{g}_{i}\approx\mathfrak{g}^{i}/\mathfrak{g}^{i+1}\); this grading corresponds to a choice of Cartan involution, but we will keep this choice implicit, since we will not need the Cartan involution here. Notably, the filtration and grading satisfy \([\mathfrak{g}^{i},\mathfrak{g}^{j}]\subseteq\mathfrak{g}^{i+j}\) and \([\mathfrak{g}_{i},\mathfrak{g}_{j}]\subseteq\mathfrak{g}_{i+j}\) for all \(i\) and \(j\), so \(\mathfrak{g}_{-}:=\sum_{i<0}\mathfrak{g}_{i}\) and \(\mathfrak{g}_{0}\) are subalgebras. We denote by \(G_{-}\) and \(P_{+}\) the connected nilpotent subgroups generated by \(\mathfrak{g}_{-}\) and \(\mathfrak{p}_{+}\), respectively, and by \(G_{0}\) the closed subgroup of \(P\) such that \(\operatorname{Ad}_{G_{0}}(\mathfrak{g}_{i})=\mathfrak{g}_{i}\) for every grading component \(\mathfrak{g}_{i}\).
In Theorem A, we will focus on the model \((\operatorname{PGL}_{m+1}\mathbb{C},P)\), where \[P:=\left\{\begin{pmatrix}r&p\\ 0&A\end{pmatrix}:r\in\mathbb{C}^{\times},p^{\top}\in\mathbb{C}^{m},A\in \operatorname{GL}_{m}\mathbb{C}\right\}.\] This model geometry is parabolic, with grading \[\mathfrak{g}_{-}=\mathfrak{g}_{-1}:=\left\{\begin{pmatrix}0&0\\ v&0\end{pmatrix}\in\mathfrak{p}\mathfrak{gl}_{m+1}\mathbb{C}:v\in\mathbb{C}^{m }\right\},\] \[\mathfrak{g}_{0}:=\left\{\begin{pmatrix}r&0\\ 0&R\end{pmatrix}\in\mathfrak{p}\mathfrak{gl}_{m+1}\mathbb{C}:r\in\mathbb{C},R \in\mathfrak{gl}_{m}\mathbb{C}\right\},\] and \[\mathfrak{p}_{+}=\mathfrak{g}_{1}:=\left\{\begin{pmatrix}0&p\\ 0&0\end{pmatrix}\in\mathfrak{p}\mathfrak{gl}_{m+1}\mathbb{C}:p^{\top}\in \mathbb{C}^{m}\right\},\] and associated subgroups \[G_{-}:=\left\{\begin{pmatrix}1&0\\ v&\mathds{1}\end{pmatrix}\in\operatorname{PGL}_{m+1}\mathbb{C}:v\in\mathbb{C}^{ m}\right\},\] \[G_{0}:=\left\{\begin{pmatrix}r&0\\ 0&A\end{pmatrix}\in\operatorname{PGL}_{m+1}\mathbb{C}:r\in\mathbb{C}^{\times},A \in\operatorname{GL}_{m}\mathbb{C}\right\},\] and \[P_{+}:=\left\{\begin{pmatrix}1&p\\ 0&\mathds{1}\end{pmatrix}\in\operatorname{PGL}_{m+1}\mathbb{C}:p^{\top}\in \mathbb{C}^{m}\right\}.\] In this case, the model encodes complex projective geometry over \(\operatorname{PGL}_{m+1}\mathbb{C}/P\cong\mathbb{CP}^{m}\), and Cartan geometries of type \((\operatorname{PGL}_{m+1}\mathbb{C},P)\) (with certain curvature restrictions) correspond to almost c-projective structures. An overview of such structures can be found in [2]. In Theorem B, we will focus on the model \((\operatorname{PGL}_{m+1}\mathbb{H},P)\). Here, \(\operatorname{PGL}_{m+1}\mathbb{H}\) is the quotient of the quaternionic general linear group \(\operatorname{GL}_{m+1}\mathbb{H}\) of right \(\mathbb{H}\)-module automorphisms of \(\mathbb{H}^{m+1}\) by its center, which corresponds to those automorphisms which left-multiply every element of \(\mathbb{H}^{m+1}\) by a nonzero real number, and \[P:=\left\{\begin{pmatrix}r&p\\ 0&A\end{pmatrix}\in\operatorname{PGL}_{m+1}\mathbb{H}:r\in\mathbb{H}^{\times},p ^{\top}\in\mathbb{H}^{m},A\in\operatorname{GL}_{m}\mathbb{H}\right\}.\] This model geometry is also parabolic, with grading, subgroups, and subalgebras basically the same as those for \((\mathrm{PGL}_{m+1}\,\mathbb{C},P)\) but with \(\mathbb{C}\) replaced by \(\mathbb{H}\) everywhere: \[\mathfrak{g}_{-}=\mathfrak{g}_{-1}:=\left\{\begin{pmatrix}0&0\\ v&0\end{pmatrix}\in\mathfrak{pgl}_{m+1}\mathbb{H}:v\in\mathbb{H}^{m}\right\},\] \[\mathfrak{g}_{0}:=\left\{\begin{pmatrix}r&0\\ 0&R\end{pmatrix}\in\mathfrak{pgl}_{m+1}\mathbb{H}:r\in\mathbb{H},R\in\mathfrak{ gl}_{m}\mathbb{H}\right\},\] and \[\mathfrak{p}_{+}=\mathfrak{g}_{1}:=\left\{\begin{pmatrix}0&p\\ 0&0\end{pmatrix}\in\mathfrak{pgl}_{m+1}\mathbb{H}:p^{\top}\in\mathbb{H}^{m} \right\},\] with associated subgroups \[G_{-}:=\left\{\begin{pmatrix}1&0\\ v&\mathds{1}\end{pmatrix}\in\mathrm{PGL}_{m+1}\,\mathbb{H}:v\in\mathbb{H}^{m} \right\},\] \[G_{0}:=\left\{\begin{pmatrix}r&0\\ 0&A\end{pmatrix}\in\mathrm{PGL}_{m+1}\,\mathbb{H}:r\in\mathbb{H}^{\times},A\in \mathrm{GL}_{m}\,\mathbb{H}\right\},\] and \[P_{+}:=\left\{\begin{pmatrix}1&p\\ 0&\mathds{1}\end{pmatrix}\in\mathrm{PGL}_{m+1}\,\mathbb{H}:p^{\top}\in\mathbb{H }^{m}\right\}.\] This model basically encodes the quaternionic analogue of projective geometry over \(\mathrm{PGL}_{m+1}\,\mathbb{H}/P\cong\mathbb{HP}^{m}\), and Cartan geometries of type \((\mathrm{PGL}_{m+1}\,\mathbb{H},P)\) (with certain curvature 
restrictions) correspond to almost quaternionic structures, as described in 4.1.8 of [4]. Finally, in Theorem C, we will focus on the model \((\mathrm{PU}(\mathrm{h}_{p,q}),P)\), where \(\mathrm{h}_{p,q}\) is the Hermitian form on \(\mathbb{C}^{p+q+2}\) with quadratic form given by \[\begin{bmatrix}z_{0}\\ \vdots\\ z_{p+q+1}\end{bmatrix}\mapsto 2\mathrm{Re}(\bar{z}_{0}z_{p+q+1})+\sum_{j=1}^{p}|z_{j}|^{2}-\sum_{j=p+1}^{p+q}|z_{j}|^{2},\] and \(\mathrm{PU}(\mathrm{h}_{p,q})\) is the quotient of the group of unitary transformations for \(\mathrm{h}_{p,q}\) by its center, consisting of multiples of the identity matrix by elements of \(\mathrm{U}(1)\). Denoting by \(I_{p,q}\) the \((p+q)\times(p+q)\) diagonal matrix with the first \(p\) diagonal entries equal to \(+1\) and the last \(q\) diagonal entries equal to \(-1\), the Lie group \(\mathrm{PU}(\mathrm{h}_{p,q})\) has Lie algebra of the form \[\mathfrak{pu}(\mathrm{h}_{p,q}):=\left\{\begin{pmatrix}r&\beta&\mathrm{i}s\\ v&R&-I_{p,q}\bar{\beta}^{\top}\\ \mathrm{i}t&-\bar{v}^{\top}I_{p,q}&-\bar{r}\end{pmatrix}:\begin{array}{l}s,t\in\mathbb{R},\,v,\beta^{\top}\in\mathbb{C}^{p+q},\\ r\in\mathbb{C},\text{ and }R\in\mathfrak{u}(p,q)\end{array}\right\},\] where elements of the Lie algebra are considered equivalent if their difference is an imaginary multiple of the identity matrix. The parabolic subgroup \(P\), then, is \[P:=\left\{\begin{pmatrix}r&r\beta&r(\mathrm{i}s-\frac{1}{2}\beta I_{p,q}\bar{\beta}^{\top})\\ 0&A&-AI_{p,q}\bar{\beta}^{\top}\\ 0&0&\bar{r}^{-1}\end{pmatrix}:\begin{array}{l}s\in\mathbb{R},\,\beta^{\top}\in\mathbb{C}^{p+q},\\ r\in\mathbb{C}^{\times},\text{ and }A\in\mathrm{U}(p,q)\end{array}\right\},\] with grading given by \[\mathfrak{g}_{-2}:=\left\langle\left(\begin{matrix}0&0&0\\ 0&0&0\\ \mathrm{i}&0&0\end{matrix}\right)\right\rangle,\] \[\mathfrak{g}_{-1}:=\left\{\begin{pmatrix}0&0&0\\ v&0&0\\ 0&-\bar{v}^{\top}I_{p,q}&0\end{pmatrix}\in\mathfrak{pu}(\mathrm{h}_{p,q}):v\in\mathbb{C}^{p+q}\right\},\] \[\mathfrak{g}_{0}:=\left\{\begin{pmatrix}r&0&0\\ 0&R&0\\ 0&0&-\bar{r}\end{pmatrix}\in\mathfrak{pu}(\mathrm{h}_{p,q}):r\in\mathbb{C},R\in\mathfrak{u}(p,q)\right\},\] \[\mathfrak{g}_{1}:=\left\{\begin{pmatrix}0&\beta&0\\ 0&0&-I_{p,q}\bar{\beta}^{\top}\\ 0&0&0\end{pmatrix}\in\mathfrak{pu}(\mathrm{h}_{p,q}):\beta^{\top}\in\mathbb{C}^{p+q}\right\},\] and \[\mathfrak{g}_{2}:=\left\langle\left(\begin{matrix}0&0&\mathrm{i}\\ 0&0&0\\ 0&0&0\end{matrix}\right)\right\rangle,\] with corresponding subgroups \[G_{-}:=\left\{\begin{pmatrix}1&0&0\\ v&\mathds{1}&0\\ \mathrm{i}t-\frac{1}{2}\bar{v}^{\top}I_{p,q}v&-\bar{v}^{\top}I_{p,q}&1\end{pmatrix}\in\mathrm{PU}(\mathrm{h}_{p,q}):v\in\mathbb{C}^{p+q},t\in\mathbb{R}\right\},\] \[G_{0}:=\left\{\begin{pmatrix}r&0&0\\ 0&A&0\\ 0&0&\bar{r}^{-1}\end{pmatrix}\in\mathrm{PU}(\mathrm{h}_{p,q}):r\in\mathbb{C}^{\times},A\in\mathrm{U}(p,q)\right\},\] and \[P_{+}:=\left\{\begin{pmatrix}1&\beta&\mathrm{i}s-\frac{1}{2}\beta I_{p,q}\bar{\beta}^{\top}\\ 0&\mathds{1}&-I_{p,q}\bar{\beta}^{\top}\\ 0&0&1\end{pmatrix}\in\mathrm{PU}(\mathrm{h}_{p,q}):\beta^{\top}\in\mathbb{C}^{p+q},s\in\mathbb{R}\right\}.\] Within \(P_{+}\), we will also say that \[a=\begin{pmatrix}1&\beta&\mathrm{i}s-\frac{1}{2}\beta I_{p,q}\bar{\beta}^{\top}\\ 0&\mathds{1}&-I_{p,q}\bar{\beta}^{\top}\\ 0&0&1\end{pmatrix}\in P_{+}\] is _non-null_ if and only if \[\mathrm{i}s-\frac{1}{2}\beta I_{p,q}\bar{\beta}^{\top}\neq 0.\] By analogy with pseudo-Riemannian structures, we occasionally also refer to a non-null element
\(a\in\exp(\mathfrak{g}_{1})\subset P_{+}\) as being "timelike" when \(\beta I_{p,q}\bar{\beta}^{\top}>0\) and "spacelike" when \(\beta I_{p,q}\bar{\beta}^{\top}<0\). The homogeneous space \(\mathrm{PU}(\mathrm{h}_{p,q})/P\) is naturally diffeomorphic to the null-cone \[\mathrm{Null}(\mathrm{h}_{p,q}):=\left\{\mathbb{C}^{\times}u\in\mathbb{C}\mathbb{P}^{p+q+1}:\mathrm{h}_{p,q}(u,u)=0\right\}\] for \(\mathrm{h}_{p,q}\) in \(\mathbb{C}\mathbb{P}^{p+q+1}\). This null-cone is a compact simply connected smooth manifold of (real) dimension \(2(p+q)+1\); for \(q=0\) or \(p=0\), \(\mathrm{Null}(\mathrm{h}_{p,q})\) is diffeomorphic to the sphere. Cartan geometries of type \((\mathrm{PU}(\mathrm{h}_{p,q}),P)\) (with certain curvature restrictions) correspond to non-degenerate partially integrable almost CR-structures with Levi form of signature \((p,q)\), as detailed in 4.2.4 of [4].

### Generalities for Cartan geometries

In essence, the idea of a Cartan geometry of type \((G,H)\) is to specify a \(\mathfrak{g}\)-valued one-form \(\omega\) on a given principal bundle \(\mathscr{G}\) so that \(\omega\) behaves like the Maurer-Cartan form \(\omega_{{}_{G}}:X_{g}\in T_{g}G\mapsto\operatorname{L}_{g^{-1}*}X_{g}\in\mathfrak{g}\) does on \(G\), where \(\operatorname{L}_{a}:g\mapsto ag\) denotes left-translation by \(a\).

**Definition 2.3**.: Let \((G,H)\) be a model. A _Cartan geometry of type \((G,H)\) over a (smooth) manifold \(M\)_ is a pair \((\mathscr{G},\omega)\), where \(\mathscr{G}\) is a principal \(H\)-bundle over \(M\) with quotient map \(q_{{}_{H}}:\mathscr{G}\to M\) and \(\omega\) is a \(\mathfrak{g}\)-valued one-form on \(\mathscr{G}\) such that

Footnote 2: In an effort to declutter notation, we will always denote the quotient map of a principal \(H\)-bundle by \(q_{{}_{H}}\), even if there are multiple relevant principal bundles. The meaning of the map should always be clear from context. Similarly, we will always denote the right-action of \(h\in H\) on a principal bundle by \(\operatorname{R}_{h}:\boldsymbol{g}\mapsto\boldsymbol{g}h\).

* for every \(\boldsymbol{g}\in\mathscr{G}\), \(\omega_{\boldsymbol{g}}:T_{\boldsymbol{g}}\mathscr{G}\to\mathfrak{g}\) is a linear isomorphism;
* for every \(h\in H\), \(\operatorname{R}_{h}^{*}\omega=\operatorname{Ad}_{h^{-1}}\omega\);
* for every \(Y\in\mathfrak{h}\), the flow of the vector field \(\omega^{-1}(Y)\) is given by \(\exp(t\omega^{-1}(Y))=\operatorname{R}_{\exp(tY)}\) for all \(t\in\mathbb{R}\).

A natural example of a Cartan geometry of type \((G,H)\) is always the _Klein geometry_ of that type, which encodes the geometric structure of the model geometry as a Cartan geometry.

**Definition 2.4**.: The _Klein geometry of type \((G,H)\)_ is the Cartan geometry \((G,\omega_{{}_{G}})\) over \(G/H\), where \(G\) is the model group and \(\omega_{{}_{G}}\) is the Maurer-Cartan form on \(G\).

Throughout, we will want to compare different Cartan geometries of the same type. To do this, we will use local diffeomorphisms that preserve the Cartan-geometric structure, called _geometric maps_.

**Definition 2.5**.: Given two Cartan geometries \((\mathscr{G},\omega)\) and \((\mathscr{Q},\upsilon)\) of type \((G,H)\), a _geometric map_ \(\varphi:(\mathscr{G},\omega)\to(\mathscr{Q},\upsilon)\) is an \(H\)-equivariant smooth map \(\varphi:\mathscr{G}\to\mathscr{Q}\) such that \(\varphi^{*}\upsilon=\omega\).
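As a quick illustration of Definition 2.5 (a standard observation, recorded here only for orientation): every left translation \(\operatorname{L}_{a}\) with \(a\in G\) is a bijective geometric map from the Klein geometry \((G,\omega_{{}_{G}})\) to itself. Indeed, \(\operatorname{L}_{a}\) commutes with right multiplication by elements of \(H\), hence is \(H\)-equivariant, and since \(\operatorname{L}_{(ag)^{-1}}\circ\operatorname{L}_{a}=\operatorname{L}_{g^{-1}}\), \[(\operatorname{L}_{a}^{*}\omega_{{}_{G}})(X_{g})=\operatorname{L}_{(ag)^{-1}*}\operatorname{L}_{a*}X_{g}=\operatorname{L}_{g^{-1}*}X_{g}=\omega_{{}_{G}}(X_{g})\] for every \(X_{g}\in T_{g}G\), so \(\operatorname{L}_{a}^{*}\omega_{{}_{G}}=\omega_{{}_{G}}\). In this way, all of \(G\) acts on the Klein geometry by bijective geometric maps.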
A geometric map \(\varphi:(\mathscr{G},\omega)\to(\mathscr{Q},\upsilon)\) always induces a corresponding local diffeomorphism between the base manifolds of \(\mathscr{G}\) and \(\mathscr{Q}\), given by \(q_{{}_{H}}(\boldsymbol{g})\mapsto q_{{}_{H}}(\varphi(\boldsymbol{g}))\) for each \(\boldsymbol{g}\in\mathscr{G}\). We find it convenient and natural to not waste symbols to distinguish between these maps; throughout, whenever we have a geometric map \(\varphi\), we will also denote its induced map on the base manifolds by the same symbol \(\varphi\). The meaning should always be clear from context.

Of course, some geometric maps tell us more than others. We say that a geometric map \(\varphi:(\mathscr{G},\omega)\to(\mathscr{Q},\upsilon)\) is a _geometric embedding_ when \(\varphi\) is injective, and when \(\varphi\) is bijective, we further say that it is a _(geometric) isomorphism_. A geometric isomorphism from \((\mathscr{G},\omega)\) to itself is then called a _(geometric) automorphism_.

Automorphisms of Cartan geometries tend to be fairly rigid. Given an automorphism \(\alpha\) of \((\mathscr{G},\omega)\) and an element \(\boldsymbol{e}\in\mathscr{G}\), the image \(\alpha(\boldsymbol{e})\) uniquely determines \(\alpha\) when the base manifold is connected. The group \(\operatorname{Aut}(\mathscr{G},\omega)\) of all automorphisms of \((\mathscr{G},\omega)\) therefore acts freely on \(\mathscr{G}\), and we can induce a Lie group structure on it by looking at the smooth structure inherited from orbits of \(\operatorname{Aut}(\mathscr{G},\omega)\) in \(\mathscr{G}\). Following our convention of not distinguishing between geometric maps and the corresponding induced maps on the base manifolds, when we talk about fixed points of an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\), we will mean fixed points of the induced map on the base manifold. For the Klein geometry of type \((G,H)\) and \(a\in G\), we will write \[\operatorname{Fix}_{G/H}(a):=\{q_{{}_{H}}(g)\in G/H:a(q_{{}_{H}}(g))=q_{{}_{H}}(g)\}.\] Since \(\operatorname{Aut}(\mathscr{G},\omega)\) acts freely, automorphisms will not have actual fixed points in the bundle \(\mathscr{G}\). When the induced map \(\alpha\) on the base manifold fixes a point \(q_{{}_{H}}(\boldsymbol{g})\in M\), the overlying \(H\)-equivariant map sends \(\boldsymbol{g}\) to \(\alpha(\boldsymbol{g})=\boldsymbol{g}a\) for some \(a\in H\). This element \(a\in H\) is called the _isotropy_ of \(\alpha\) at \(\boldsymbol{g}\).

**Definition 2.6**.: For \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) and \(\boldsymbol{e}\in\mathscr{G}\) such that \(\alpha(\boldsymbol{e})\in\boldsymbol{e}H\), the _isotropy_ of \(\alpha\) at \(\boldsymbol{e}\) is the unique element \(a\in H\) such that \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\).

For the applications considered in the paper, we will primarily focus on automorphisms \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) of parabolic geometries \((\mathscr{G},\omega)\) of type \((G,P)\) with isotropy \(a\in P_{+}\) at some \(\boldsymbol{e}\in\mathscr{G}\). In that case, we say that \(\alpha\) has a _higher-order fixed point_ at \(q_{{}_{P}}(\boldsymbol{e})\). Another core idea for Cartan geometries is that of _curvature_, which tells us when the geometry locally differs from the Klein geometry.

**Definition 2.7**.: Given a Cartan geometry \((\mathscr{G},\omega)\) of type \((G,H)\), its _curvature_ is the \(\mathfrak{g}\)-valued two-form given by \(\Omega:=\mathrm{d}\omega+\frac{1}{2}[\omega,\omega]\).
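As a first sanity check on this definition (again a standard computation, included only for convenience): the Klein geometry has \(\Omega=0\). When \(G\) is a matrix group, \(\omega_{{}_{G}}=g^{-1}\mathrm{d}g\), so \[\mathrm{d}\omega_{{}_{G}}=-g^{-1}\mathrm{d}g\wedge g^{-1}\mathrm{d}g=-\omega_{{}_{G}}\wedge\omega_{{}_{G}}=-\tfrac{1}{2}[\omega_{{}_{G}},\omega_{{}_{G}}],\] which is the Maurer-Cartan equation; the same identity holds for arbitrary Lie groups, so the curvature of \((G,\omega_{{}_{G}})\) vanishes identically.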
When the curvature vanishes in a neighborhood of a point, the geometry is locally equivalent to that of the Klein geometry near that point. In other words, when \(\Omega\) vanishes on some neighborhood of an element \(\boldsymbol{e}\in\mathscr{G}\), we can find a geometric embedding \[\psi:(q_{{}_{H}}^{-1}(U),\omega_{{}_{G}})\hookrightarrow(\mathscr{G},\omega)\] from a neighborhood \(q_{{}_{H}}^{-1}(U)\) of the identity \(e\in G\) in the Klein geometry such that \(\psi(e)=\boldsymbol{e}\). When the curvature vanishes everywhere, we say that the geometry is _flat_.

Throughout, we will make use of the fact that \[\Omega^{\omega}(X\wedge Y):=\Omega(\omega^{-1}(X)\wedge\omega^{-1}(Y))\] for \(X,Y\in\mathfrak{g}\) determines an \(H\)-equivariant map \(\Omega^{\omega}\) from \(\mathscr{G}\) to the vector space \(\Lambda^{2}(\mathfrak{g}/\mathfrak{h})^{\vee}\otimes\mathfrak{g}\) that completely characterizes the curvature. Furthermore, when our model is parabolic, the Killing form gives us a natural isomorphism of \(P\)-representations between \((\mathfrak{g}/\mathfrak{p})^{\vee}\) and \(\mathfrak{p}_{+}\), hence an isomorphism between \(\Lambda^{2}(\mathfrak{g}/\mathfrak{p})^{\vee}\otimes\mathfrak{g}\) and \(\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{g}\).

For parabolic geometries, there are two standard assumptions placed on the curvature. The first condition, called _regularity_, asks that \(\Omega^{\omega}\) have positive homogeneity for the filtration of \(\mathfrak{g}\), in the sense that \(\Omega^{\omega}(\mathfrak{g}^{i}\wedge\mathfrak{g}^{j})\subseteq\mathfrak{g}^{i+j+1}\) for all \(i\) and \(j\). This is a natural, geometrically straightforward assumption that we will use throughout. The second, called _normality_, requires that \(\Omega^{\omega}\) vanish under the Kostant codifferential; see 3.1.12 of [4] for details. We find this condition difficult to justify intrinsically; thankfully, it generally seems to not be required in this context, so we have removed the assumption wherever it was even remotely convenient to do so.

### Development and holonomy

Again, we recommend [5] for an overview of our techniques involving holonomy, as well as Chapter 3, Section 7 of [13] for a review of basic results on developments of paths. We would like to pretend that Cartan geometries \((\mathscr{G},\omega)\) of type \((G,H)\) "are" their model geometries. The notions of _development_ and _holonomy_ allow us to do this somewhat judiciously.

**Definition 2.8**.: Given a (piecewise smooth) path \(\gamma:[0,1]\to\mathscr{G}\) in a Cartan geometry \((\mathscr{G},\omega)\) of type \((G,H)\), the _development_ \(\gamma_{G}\) of \(\gamma\) is the unique (piecewise smooth) path \(\gamma_{G}:[0,1]\to G\) such that \(\gamma_{G}(0)=e\) and \(\omega(\dot{\gamma})=\omega_{{}_{G}}(\dot{\gamma}_{G})\).

Footnote 3: Throughout, whenever we refer to a "path", we will always mean a piecewise smooth path.

The idea here is that the tangent vectors \(\dot{\gamma}\) tell us how to move along \(\gamma\) at each point in time, and \(\gamma_{G}\) is the path we get by trying to follow these same instructions in the model group \(G\), starting at the identity. Crucially, it follows that if we have two paths with the same development and starting point in a Cartan geometry, then they must be the same path. Development allows us to identify paths in a Cartan geometry with paths in the model, and if we fix a pretend "identity element" \(\boldsymbol{e}\in\mathscr{G}\) we can even give a kind of correspondence between elements of \(\mathscr{G}\) and elements of \(G\).
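In the Klein geometry itself, development is simply left translation back to the identity: for a path \(\gamma:[0,1]\to G\), the path \(t\mapsto\gamma(0)^{-1}\gamma(t)\) starts at \(e\), and left-invariance of \(\omega_{{}_{G}}\) gives \[\omega_{{}_{G}}\left(\tfrac{\mathrm{d}}{\mathrm{d}t}\,\gamma(0)^{-1}\gamma(t)\right)=\omega_{{}_{G}}(\dot{\gamma}(t)),\] so \(\gamma_{G}=\operatorname{L}_{\gamma(0)^{-1}}\circ\gamma\) by the uniqueness in Definition 2.8. We record this standard observation here because it makes the developments appearing in later computations in the model easy to read off.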
**Definition 2.9**.: For a Cartan geometry \((\mathscr{G},\omega)\) of type \((G,H)\) and points \(\boldsymbol{e},\boldsymbol{g}\in\mathscr{G}\), we say that \(g\in G\) is a _development of \(\boldsymbol{g}\) from \(\boldsymbol{e}\)_ if and only if there exists a path \(\gamma:[0,1]\to\mathscr{G}\) and \(h\in H\) such that \(\gamma(0)=\boldsymbol{e}\), \(\gamma(1)h=\boldsymbol{g}\), and \(\gamma_{G}(1)h=g\).

Developments of elements in \(\mathscr{G}\) are usually not unique. Thankfully, the _holonomy group_ \(\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega)\) tells us precisely how this happens: if \(g\) is a development of \(\boldsymbol{g}\) from \(\boldsymbol{e}\), then the set of all possible developments from \(\boldsymbol{e}\) to \(\boldsymbol{g}\) is precisely \(\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega)g\).

**Definition 2.10**.: For a Cartan geometry \((\mathscr{G},\omega)\) of type \((G,H)\), the _holonomy group_ of \((\mathscr{G},\omega)\) at \(\boldsymbol{e}\in\mathscr{G}\) is the subgroup of \(G\) given by \[\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega):=\left\{\gamma_{G}(1)h_{\gamma}^{-1}\in G:\begin{array}{l}\gamma:[0,1]\to\mathscr{G}\text{ is a path such that,}\\ \text{for }h_{\gamma}\in H,\,\gamma(0)=\boldsymbol{e}=\gamma(1)h_{\gamma}^{-1}\end{array}\right\}.\]

Because the holonomy group \(\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega)\) completely describes the ambiguity in taking developments from \(\boldsymbol{e}\), we get a geometric map \(\delta:(\mathscr{G},\omega)\to(G,\omega_{{}_{G}})\), which we could reasonably call the _developing map_, given by \(\gamma(1)h\mapsto\gamma_{G}(1)h\) when \(\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega)=\{e\}\). Indeed, when the geometry is flat and the base manifold is simply connected, the holonomy group is always trivial, and the induced map of \(\delta\) on the base manifold is precisely the developing map in the usual sense for locally homogeneous geometric structures.

## 3. Ballast sequences

To prove Theorems A, B, and C, we will need a way to guarantee that the curvature vanishes in some neighborhood of a particular point. For this purpose, we will use sequences of elements in the isotropy group that we call _ballast sequences_.

**Definition 3.1**.: Consider a Cartan geometry \((\mathscr{G},\omega)\) of type \((G,H)\) and an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\). A sequence \((b_{k})\) in \(H\) is a _ballast sequence for \(\alpha\) at \(\boldsymbol{g}\in\mathscr{G}\) with attractor \(\boldsymbol{e}\in\mathscr{G}\)_ if and only if there exists a sequence \((\boldsymbol{g}_{k})\) in \(\mathscr{G}\) such that \(\boldsymbol{g}_{k}\to\boldsymbol{g}\) and \(\alpha^{k}(\boldsymbol{g}_{k})b_{k}^{-1}\to\boldsymbol{e}\).

The term "ballast" here alludes to weight placed in a ship to help stabilize it; on its own, the sequence \((\alpha^{k}(\boldsymbol{g}_{k}))\) in the Cartan geometry might "capsize" off to infinity, but if we add some additional "weight" by right-translating by a sequence in the isotropy group, then the result can still converge in the Cartan geometry. Furthermore, because the behavior of these sequences is often characterized by their interactions with the representation-theoretic weights of the model Lie algebra, the comparison to something specifically used for its weight seems justified.
That being said, our main reason for introducing this terminology is to start moving away from the term "holonomy sequence", which is used for these objects in, among several other places, [3], [6], and [11]; ballast sequences generally do not take values in the holonomy group, and since we anticipate that techniques involving the actual holonomy of a Cartan geometry will see significant growth in the near future, it is only a matter of time until the term "holonomy sequence" becomes detrimentally cumbersome and confusing.4

Footnote 4: As an example, imagine we want to keep track of certain developments from \(\boldsymbol{e}\) of the sequence \((\alpha^{k}(\boldsymbol{g})b_{k}^{-1})\). If \(a\in G\) is a development of \(\alpha(\boldsymbol{e})\) from \(\boldsymbol{e}\) and \(g\in G\) is a development of \(\boldsymbol{g}\) from \(\boldsymbol{e}\), then every development of \(\alpha^{k}(\boldsymbol{g})b_{k}^{-1}\) is of the form \(\eta_{k}a^{k}gb_{k}^{-1}\) for some \(\eta_{k}\in\operatorname{Hol}_{\boldsymbol{e}}(\mathscr{G},\omega)\). If we look at a sequence of such developments, then referring to \((b_{k})\) and not \((\eta_{k})\) as the "holonomy sequence" becomes confusing quite quickly.

The key to using these sequences is recognizing that, for a ballast sequence \((b_{k})\) at \(\boldsymbol{g}\) with attractor \(\boldsymbol{e}\), if \(V\) is a representation of \(H\) and \(\sigma:\mathscr{G}\to V\) is an \(H\)-equivariant map (hence corresponding to a section of \(\mathscr{G}\times_{H}V\) over \(M\)) such that \(\sigma\circ\alpha=\sigma\), then \[b_{k}\cdot\sigma(\boldsymbol{g}_{k})=\sigma(\alpha^{k}(\boldsymbol{g}_{k})b_{k}^{-1})\to\sigma(\boldsymbol{e}).\] Therefore, if \(b_{k}\cdot\sigma(\boldsymbol{g}_{k})\) cannot converge to \(\sigma(\boldsymbol{e})\) unless \(\sigma(\boldsymbol{g}_{k})\) converges to \(0\), then we must have \(\sigma(\boldsymbol{g})=0\).

For example, if \((\mathscr{G},\omega)\) is a Cartan geometry of type \((\operatorname{Aff}(m),\operatorname{GL}_{m}\mathbb{R})\) and \(\alpha(\boldsymbol{e})=\boldsymbol{e}(\lambda\mathds{1})\), where \(0<\lambda<1\) and \(\lambda\mathds{1}\) is the linear transformation of \(\mathbb{R}^{m}\) that rescales everything by \(\lambda\), then for every \(v\in\mathbb{R}^{m}<\mathfrak{aff}(m)\) such that \(\exp(\omega^{-1}(v))\boldsymbol{e}\) is well-defined, \[\alpha^{k}(\exp(\omega^{-1}(v))\boldsymbol{e})=\exp(\omega^{-1}(v))\alpha^{k}(\boldsymbol{e})=\exp(\omega^{-1}(v))(\boldsymbol{e}(\lambda^{k}\mathds{1}))=(\exp(\omega^{-1}(\operatorname{Ad}_{\lambda^{k}\mathds{1}}v))\boldsymbol{e})(\lambda^{k}\mathds{1})=(\exp(\omega^{-1}(\lambda^{k}v))\boldsymbol{e})(\lambda^{k}\mathds{1}),\] so the sequence \((\lambda^{k}\mathds{1})\) in \(\operatorname{GL}_{m}\mathbb{R}\) is a ballast sequence for \(\alpha\) at \(\exp(\omega^{-1}(v))\boldsymbol{e}\) with attractor \(\boldsymbol{e}\). The curvature \(\Omega^{\omega}\) takes values in the \(\operatorname{GL}_{m}\mathbb{R}\)-representation \(\Lambda^{2}(\mathbb{R}^{m})^{\vee}\otimes\mathfrak{aff}(m)\), which decomposes into eigenspaces for \(\lambda\mathds{1}\) as \(\Lambda^{2}(\mathbb{R}^{m})^{\vee}\otimes\mathbb{R}^{m}\), with eigenvalue \(\lambda^{-2}\lambda=\lambda^{-1}\), and \(\Lambda^{2}(\mathbb{R}^{m})^{\vee}\otimes\mathfrak{gl}_{m}\mathbb{R}\), with eigenvalue \(\lambda^{-2}\). These eigenvalues are all greater than \(1\), so \(\lambda^{k}\mathds{1}\cdot w\) cannot converge for nonzero \(w\in\Lambda^{2}(\mathbb{R}^{m})^{\vee}\otimes\mathfrak{aff}(m)\).
Because \[\lambda^{k}\mathds{1}\cdot\Omega^{\omega}_{\exp(\omega^{-1}(v))\boldsymbol{e}}=\Omega^{\omega}_{\alpha^{k}(\exp(\omega^{-1}(v))\boldsymbol{e})(\lambda^{k}\mathds{1})^{-1}}\to\Omega^{\omega}_{\boldsymbol{e}},\] this means we must have \(\Omega^{\omega}_{\exp(\omega^{-1}(v))\boldsymbol{e}}=0\), hence \(\Omega^{\omega}\) vanishes in a neighborhood of \(\boldsymbol{e}\).

For many parabolic geometries, there is a well-developed strategy for constructing ballast sequences in order to prove that the curvature vanishes in a neighborhood of a higher-order fixed point. Essentially, the idea is to construct Jacobson-Morozov triples for the isotropy of the automorphism, allowing us to restrict to one-dimensional subspaces on which the automorphism behaves like a translation along the real projective line. Using this, we can construct ballast sequences and check whether these ballast sequences force the curvature to vanish along a given one-dimensional subspace; if we can cover a dense subset of a neighborhood of the higher-order fixed point with such subspaces, then we must have a flat neighborhood.

These results on higher-order fixed points are often stated in terms of infinitesimal automorphisms, since this is convenient for getting an element of the Lie algebra from the isotropy in order to construct the one-dimensional subspaces along which the automorphisms behave as translations on the real projective line. However, since \(P_{+}\) is nilpotent, its exponential map is necessarily bijective, so we can always get a corresponding element of the subalgebra \(\mathfrak{p}_{+}\) for a given element of \(P_{+}\). Thus, all of these results on higher-order fixed points for infinitesimal automorphisms in \(\mathfrak{aut}(\mathscr{G},\omega)\) also work for automorphisms in \(\operatorname{Aut}(\mathscr{G},\omega)\).

On the other hand, many of these results also use normality. While this condition is often useful, we do not think normality plays a role in the vanishing of curvature in a neighborhood of a higher-order fixed point, at least in the cases of interest at present, so we have endeavored to remove the assumption wherever possible. For example, the following result in the complex case is essentially the same as Theorem 1.2 of [11], except we have removed the reliance on normality.

**Proposition 3.2**.: _Let \(\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}\), so that the model \((\operatorname{PGL}_{m+1}\mathbb{K},P)\) corresponds to \(\mathbb{K}\)-projective geometry over \(\mathbb{K}\mathbb{P}^{m}\). If \((\mathscr{G},\omega)\) is a (not necessarily normal) Cartan geometry of type \((\operatorname{PGL}_{m+1}\mathbb{K},P)\) with an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) such that \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\) for some nontrivial \(a\in P_{+}\), then \(\Omega^{\omega}\) vanishes in a neighborhood of \(\boldsymbol{e}\)._

Proof. After conjugation by an element of \(P\), we may assume that \[a=\begin{pmatrix}1&1&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix}\in\operatorname{PGL}_{m+1}\mathbb{K},\] where here and throughout this proof the block sizes along each row are \(1\times 1\), \(1\times 1\), and \(1\times(m-1)\) for the first two rows and \((m-1)\times 1\), \((m-1)\times 1\), and \((m-1)\times(m-1)\) for the last row. We want to show that, for \(x\in\mathbb{K}\setminus\{0\}\), \(y\in\mathbb{K}^{m-1}\), and \[X=\begin{pmatrix}0&0&0\\ x&0&0\\ y&0&0\end{pmatrix}\in\mathfrak{pgl}_{m+1}\mathbb{K},\] \(\Omega^{\omega}_{\exp(\omega^{-1}(X))\boldsymbol{e}}=0\) whenever \(\exp(\omega^{-1}(X))\boldsymbol{e}\) is well-defined.
Then, \(\Omega^{\omega}\) vanishes on a dense subset of a neighborhood of \(\boldsymbol{e}\), hence it vanishes on a neighborhood of \(\boldsymbol{e}\) by continuity.

Because \[a^{k}\exp(tX)=\begin{pmatrix}1&k&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix}\begin{pmatrix}1&0&0\\ tx&1&0\\ ty&0&\mathds{1}\end{pmatrix}=\begin{pmatrix}1+ktx&k&0\\ tx&1&0\\ ty&0&\mathds{1}\end{pmatrix}=\exp\left(\frac{t}{1+ktx}X\right)\begin{pmatrix}1+ktx&k&0\\ 0&\frac{1}{1+ktx}&0\\ 0&\frac{-kt}{1+ktx}y&\mathds{1}\end{pmatrix}\] in \(\operatorname{PGL}_{m+1}\mathbb{K}\) for all \(t\in[0,1]\) whenever \(\frac{-1}{kx}\not\in[0,1]\subset\mathbb{R}\) (so that \(1+ktx\) never vanishes), it follows that, whenever \(\frac{-1}{kx}\not\in[0,1]\), \[\exp(\omega^{-1}(tX))(\boldsymbol{e}a^{k})=\left(\exp\left(\omega^{-1}\left(\tfrac{t}{1+ktx}X\right)\right)\boldsymbol{e}\right)\begin{pmatrix}1+ktx&k&0\\ 0&\frac{1}{1+ktx}&0\\ 0&\frac{-kt}{1+ktx}y&\mathds{1}\end{pmatrix}\] for all \(t\in[0,1]\) such that \(\exp(\omega^{-1}(tX))(\boldsymbol{e}a^{k})\) is well-defined, since then the left- and right-hand sides of the above equation determine paths starting at \(\boldsymbol{e}a^{k}\) with the same development. Since, for positive \(k\), we can only have \(\frac{-1}{kx}\in[0,1]\) if \(x\) is a negative real number, in which case we can correct by having \(k\to-\infty\) instead of \(k\to+\infty\), we get a ballast sequence for \(\alpha\) at \(\exp(\omega^{-1}(X))\boldsymbol{e}\) with attractor \(\boldsymbol{e}\) given by \[b_{k}:=\begin{pmatrix}1+kx&k&0\\ 0&\frac{1}{1+kx}&0\\ 0&\frac{-k}{1+kx}y&\mathds{1}\end{pmatrix}\in P\] whenever \(\exp(\omega^{-1}(X))\boldsymbol{e}\) is well-defined.

Now, since \(\Omega^{\omega}\) takes values in the \(P\)-representation \(\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{pgl}_{m+1}\mathbb{K}\), it suffices to show that whenever \(w\in\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{pgl}_{m+1}\mathbb{K}\) is nonzero, \(b_{k}\cdot w\) cannot converge. To this end, note that \(b_{k}\) acts diagonalizably (over \(\mathbb{K}\)) on both \(\mathfrak{p}_{+}\) and \(\mathfrak{pgl}_{m+1}\mathbb{K}\): for \(\mathfrak{p}_{+}\), we can decompose into eigenspaces \[\left\{\begin{pmatrix}0&-\frac{\beta(y)}{x}&\beta\\ 0&0&0\\ 0&0&0\end{pmatrix}:\beta^{\top}\in\mathbb{K}^{m-1}\right\}\text{ and }\left\langle\begin{pmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\right\rangle\] with eigenvalues \(1+kx\) and \((1+kx)^{2}\), respectively, and for \(\mathfrak{pgl}_{m+1}\mathbb{K}\), we can decompose into eigenspaces \[\left\langle\begin{pmatrix}-x\frac{1+kx}{2+kx}&-(\frac{1+kx}{2+kx})^{2}&0\\ x^{2}&x\frac{1+kx}{2+kx}&0\\ xy&\frac{1+kx}{2+kx}y&0\end{pmatrix}\right\rangle\] with eigenvalue \((1+kx)^{-2}\), \[\left\{\begin{pmatrix}0&\frac{(1+kx)\beta(y)}{x(2+kx)}&-\frac{1+kx}{2+kx}\beta\\ 0&-\beta(y)&x\beta\\ xv&\frac{1+kx}{2+kx}v-\frac{\beta(y)}{x}y&y\beta\end{pmatrix}:v,\beta^{\top}\in\mathbb{K}^{m-1}\right\}\] with eigenvalue \((1+kx)^{-1}\), \[\left\{\begin{pmatrix}r_{1}x&(r_{1}-r_{2})\frac{1+kx}{2+kx}&0\\ 0&r_{2}x&0\\ 0&r_{2}y&R\end{pmatrix}:r_{1},r_{2}\in\mathbb{K},R\in\mathfrak{gl}_{m-1}\mathbb{K}\right\}\] with eigenvalue \(1\), \[\left\{\begin{pmatrix}0&-\frac{\beta(y)}{x}&\beta\\ 0&0&0\\ 0&v&0\end{pmatrix}:v,\beta^{\top}\in\mathbb{K}^{m-1}\right\}\] with eigenvalue \(1+kx\), and \[\left\langle\begin{pmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\right\rangle\] with eigenvalue \((1+kx)^{2}\).
In particular, \(b_{k}\) acts diagonalizably on \(\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{pgl}_{m+1}\mathbb{K}\), with eigenvalues given by nonnegative integer powers of \(1+kx\). Moreover, these eigenspaces change with \(k\), so that no constant element can stay in a non-expanding eigenspace for all \(k\). Thus, \(b_{k}\cdot w\) cannot converge for nonzero \(w\), hence \(\Omega^{\omega}\) must vanish at every well-defined \(\exp(\omega^{-1}(X))\boldsymbol{e}\) with \(x\neq 0\).

We strongly suspect that a similar proof works for the quaternionic case \((\operatorname{PGL}_{m+1}\mathbb{H},P)\). Unfortunately, the author lacks the patience for dealing with quaternionic eigenspaces, so we will instead settle for using Theorem 5.4 of [11], which gives the same result under the assumption that the curvature is normal. Again, the original result is stated in terms of infinitesimal automorphisms, but using the fact that \(P_{+}\) is nilpotent, the same proof applies for elements of \(\operatorname{Aut}(\mathscr{G},\omega)\).

**Proposition 3.3** (Theorem 5.4 of [11], rephrased).: _Given a **normal** Cartan geometry \((\mathscr{G},\omega)\) of type \((\operatorname{PGL}_{m+1}\mathbb{H},P)\), if there exists a nontrivial automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) such that \(\alpha(\boldsymbol{e})\in\boldsymbol{e}P_{+}\), then \(\Omega^{\omega}\) vanishes in a neighborhood of \(\boldsymbol{e}\)._

This just leaves the CR case, which itself splits into two cases: for \(a\in P_{+}\) non-null, either \(a\) is in the center of \(P_{+}\), corresponding to the highest grading component \(\mathfrak{g}_{2}\), or \(a\) is conjugate to the exponential of a non-null element in \(\mathfrak{g}_{1}\). Luckily, for \(a\) in the center of \(P_{+}\), our answer is already given by Theorem 3.9 of [3]; while the original statement of the result assumes normality, its proof only uses the fact that the curvature lies in the subspace generated by components of positive homogeneity, which is precisely the requirement imposed by regularity.

**Proposition 3.4** (Theorem 3.9 of [3], rephrased).: _If \((\mathscr{G},\omega)\) is a regular Cartan geometry of type \((G,P)\), where \((G,P)\) is a parabolic contact geometry, and \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) is a nontrivial automorphism such that \(\alpha(\boldsymbol{e})\in\boldsymbol{e}\exp(\mathfrak{g}_{2})=\boldsymbol{e}\mathrm{Z}(P_{+})\), then \(\Omega^{\omega}\) vanishes in a neighborhood of \(\boldsymbol{e}\)._

For CR automorphisms with non-null isotropy outside of the highest filtration component of the stabilizer, things are a bit more complicated. Theorem 3.10 in [3] shows that, in this case, we can find a flat open set with the higher-order fixed point in its closure, but for our machinery below, it is far more convenient to have a flat neighborhood of our fixed point. For this, we will need something a bit more flexible.

**Definition 3.5**.: Suppose \(\gamma:[0,1]\to G\) is a path starting at the identity element \(e\in G\). We will say that a sequence \((\beta_{k})\) of paths \(\beta_{k}:[0,1]\to H\) is _\(\gamma\)-shrinking for \(a\in G\)_ if and only if \(\beta_{k}(0)=a^{k}\) and the length of the path \(a^{k}\gamma\beta_{k}^{-1}:t\mapsto a^{k}\gamma(t)\beta_{k}(t)^{-1}\) with respect to some left-invariant Riemannian metric on \(G\) converges to \(0\) as \(k\to+\infty\).

The idea with these sequences of paths is that, as one might guess from the name, they shrink the paths \(a^{k}\gamma\beta_{k}^{-1}\) back to \(e\).
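Before moving on, it may help to see Definition 3.5 in the simplest possible case; what follows is a sketch in the same affine setup as the example at the start of this section, with all notation as there. Take \(a=\lambda\mathds{1}\in\operatorname{GL}_{m}\mathbb{R}\) with \(0<\lambda<1\), let \(\gamma_{G}:t\mapsto\exp(tv)\) for some \(v\in\mathbb{R}^{m}<\mathfrak{aff}(m)\), and consider the constant-in-\(t\) paths \(\beta_{k}(t):=\lambda^{k}\mathds{1}\), so that \(\beta_{k}(0)=a^{k}\). Writing elements of \(\operatorname{Aff}(m)\) as pairs \((p,A)\in\mathbb{R}^{m}\rtimes\operatorname{GL}_{m}\mathbb{R}\), \[a^{k}\gamma_{G}(t)\beta_{k}(t)^{-1}=(0,\lambda^{k}\mathds{1})(tv,\mathds{1})(0,\lambda^{-k}\mathds{1})=(\lambda^{k}tv,\mathds{1}),\] a path whose image under the Maurer-Cartan form is the constant \(\lambda^{k}v\), so its length with respect to any left-invariant metric is \(\lambda^{k}\|v\|\to 0\). Thus \((\beta_{k})\) is \(\gamma_{G}\)-shrinking for \(a\), and Lemma 3.6 below recovers exactly the ballast sequences \((\lambda^{k}\mathds{1})\) from the earlier example.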
Because we can take left-invariant notions from the model group and put them on the Cartan geometries modeled on them, this allows us to guarantee certain paths in Cartan geometries always shrink to a point just by checking their developments.

**Lemma 3.6**.: _Suppose \((\mathscr{G},\omega)\) is a Cartan geometry of type \((G,H)\) with an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) and an element \(\boldsymbol{e}\in\mathscr{G}\) such that \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\) for some \(a\in H\). Let \(\gamma:[0,1]\to\mathscr{G}\) be a path starting at \(\boldsymbol{e}\). If a sequence of paths \((\beta_{k})\) is \(\gamma_{G}\)-shrinking for \(a\), then for each \(t\in[0,1]\), \((\beta_{k}(t))\) is a ballast sequence for \(\alpha\) at \(\gamma(t)\) with attractor \(\boldsymbol{e}\)._

Proof. Let \(\mathrm{g}\) be the inner product on \(\mathfrak{g}\) determining the left-invariant Riemannian metric on \(G\) for which \((\beta_{k})\) is \(\gamma_{G}\)-shrinking for \(a\). We get a corresponding Riemannian metric \(\mathrm{g}_{\omega}\) given by \(\mathrm{g}_{\omega}(\xi,\eta):=\mathrm{g}(\omega(\xi),\omega(\eta))\) on \(\mathscr{G}\), and by construction, the length of \(\alpha^{k}(\gamma)\beta_{k}^{-1}:t\mapsto\alpha^{k}(\gamma(t))\beta_{k}(t)^{-1}\) with respect to \(\mathrm{g}_{\omega}\) is equal to the length of \(a^{k}\gamma_{G}\beta_{k}^{-1}\) with respect to \(\mathrm{g}\). Thus, for an arbitrarily small open \(\mathrm{g}_{\omega}\)-ball around \(\boldsymbol{e}\), \(\alpha^{k}(\gamma(t))\beta_{k}(t)^{-1}\) is in that open ball for all sufficiently large \(k\).

In particular, at the expense of having to keep track of arclength, our motion is no longer restricted by Jacobson-Morozov triples.

**Proposition 3.7**.: _Suppose \((\mathscr{G},\omega)\) is a regular Cartan geometry of type \((\operatorname{PU}(\mathrm{h}_{p,q}),P)\), \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\), and \(\boldsymbol{e}\in\mathscr{G}\) such that \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\) for some non-null \(a\in P_{+}\) not in the center of \(P_{+}\). Then, \(\Omega^{\omega}\) vanishes in some neighborhood of \(\boldsymbol{e}\)._

Proof. Since the cases where \(a\) is "timelike" and "spacelike" are similar, we will just do the "timelike" case. Thus, after conjugating by an element of \(P\), we may assume that \(a\) is of the form \[a=\begin{pmatrix}1&1&0&-1/2\\ 0&1&0&-1\\ 0&0&\mathds{1}&0\\ 0&0&0&1\end{pmatrix},\] where here and throughout this proof, the block sizes for the matrix are, going across each displayed row from left to right, \(1\times 1\), \(1\times 1\), \(1\times(p+q-1)\), and \(1\times 1\) for the top two rows and the bottom row, and \((p+q-1)\times 1\), \((p+q-1)\times 1\), \((p+q-1)\times(p+q-1)\), and \((p+q-1)\times 1\) for the third row. Our goal is to show that, for \(x\in\mathbb{C}\), \(y\in\mathbb{C}^{p+q-1}\), \(\tau\in\mathbb{R}\setminus\{0\}\), and \[X=\begin{pmatrix}0&0&0&0\\ x&0&0&0\\ y&0&0&0\\ \tau\mathrm{i}&-\bar{x}&-\bar{y}^{\top}I_{p-1,q}&0\end{pmatrix}\in\mathfrak{g}_{-},\] the curvature vanishes at \(\exp(\omega^{-1}(X))\boldsymbol{e}\) whenever \(\exp(\omega^{-1}(X))\boldsymbol{e}\) is well-defined. Then, we will have proven that \(\Omega^{\omega}\) vanishes on a dense subset of a neighborhood of \(\boldsymbol{e}\), so the desired result follows by continuity. To show that the curvature vanishes at \(\exp(\omega^{-1}(X))\boldsymbol{e}\), we consider the path \(\gamma:t\mapsto\exp(t\omega^{-1}(X))\boldsymbol{e}\).
Its development \(\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\) is the restriction of the one-parameter subgroup \(t\mapsto\exp(tX)\) to the unit interval, and this will give us a convenient opportunity to apply Lemma 3.6. Writing \[z=z_{k}(t,X):=ktx+\frac{k^{2}t^{2}(|x|^{2}+\bar{y}^{\top}I_{p-1,q}y)}{4}-\frac{k^{2}t\tau}{2}\mathrm{i},\] we define \[\beta_{k}(t)_{0}:=\begin{pmatrix}1+z&0&0&0\\ 0&\frac{1+\bar{z}}{1+z}-\frac{k^{2}t^{2}(\bar{y}^{\top}I_{p-1,q}y)}{2(1+z)}&\frac{kt(ktx+2)}{2(1+z)}\bar{y}^{\top}I_{p-1,q}&0\\ 0&\frac{-kt(kt\bar{x}+2)}{2(1+z)}y&\mathds{1}-\frac{k^{2}t^{2}}{2(1+z)}y\bar{y}^{\top}I_{p-1,q}&0\\ 0&0&0&\frac{1}{1+\bar{z}}\end{pmatrix},\] \[\beta_{k}(t)_{+}:=\begin{pmatrix}1&\frac{k(kt\bar{x}+2)}{2(1+z)}&\frac{k^{2}t}{2(1+z)}\bar{y}^{\top}I_{p-1,q}&-\frac{k^{2}}{2(1+z)}\\ 0&1&0&\frac{-k(ktx+2)}{2(1+\bar{z})}\\ 0&0&\mathds{1}&-\frac{k^{2}t}{2(1+\bar{z})}y\\ 0&0&0&1\end{pmatrix},\] and finally \(\beta_{k}(t):=\beta_{k}(t)_{0}\beta_{k}(t)_{+}\). The paths \(\beta_{k}:[0,1]\to P\) come from the model, chosen so that the paths \(a^{k}\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\beta_{k}^{-1}\) stay inside of the horospherical subgroup \(G_{-}\), with \(\beta_{k}(t)_{0}\) the part of \(\beta_{k}(t)\) in \(G_{0}\) and \(\beta_{k}(t)_{+}\) the part of \(\beta_{k}(t)\) in \(P_{+}\). The submatrix \[\begin{bmatrix}\frac{1+\bar{z}}{1+z}-\frac{k^{2}t^{2}(\bar{y}^{\top}I_{p-1,q}y)}{2(1+z)}&\frac{kt(ktx+2)}{2(1+z)}\bar{y}^{\top}I_{p-1,q}\\ \frac{-kt(kt\bar{x}+2)}{2(1+z)}y&\mathds{1}-\frac{k^{2}t^{2}}{2(1+z)}y\bar{y}^{\top}I_{p-1,q}\end{bmatrix}\] has characteristic polynomial \[(\lambda-1)^{p+q-2}\left(\left(\lambda-\frac{1+\bar{z}}{1+z}\right)(\lambda-1)+\frac{k^{2}t^{2}(\bar{y}^{\top}I_{p-1,q}y)}{1+z}\lambda\right),\] from which we learn that it is diagonalizable--the eigenvectors with eigenvalue \(1\) are already visible from the form of the matrix--with all eigenvalues of absolute value exactly \(1=\left|\frac{1+\bar{z}}{1+z}\right|\). Consequently, the adjoint action of \(\beta_{k}(t)_{0}\) on \(\mathfrak{pu}(\mathrm{h}_{p,q})\) is diagonalizable, with eigenvalues of absolute value \(|1+z_{k}(t,X)|^{j}\) on each grading component \(\mathfrak{g}_{j}\), which also means \(\beta_{k}(t)_{0}\) acts diagonalizably on \(\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{pu}(\mathrm{h}_{p,q})\) with eigenvalues of absolute value \(|1+z_{k}(t,X)|^{j+2}\) on each component \(\Lambda^{2}\mathfrak{g}_{1}\otimes\mathfrak{g}_{j}\) and \(|1+z_{k}(t,X)|^{j+3}\) on each component \(\mathfrak{g}_{1}\wedge\mathfrak{g}_{2}\otimes\mathfrak{g}_{j}\). For fixed \(t\in(0,1]\), \(|1+z_{k}(t,X)|\) always grows unboundedly in \(k\) since we assume \(\tau\neq 0\). In particular, for each \(t\in(0,1]\) and nonzero \(w\in\Lambda^{2}\mathfrak{p}_{+}\otimes\mathfrak{pu}(\mathrm{h}_{p,q})\), \(\beta_{k}(t)\cdot w\) cannot converge unless it has a nontrivial component in \(\Lambda^{2}\mathfrak{g}_{1}\otimes\mathfrak{g}_{-2}\), since \(\beta_{k}(t)_{+}\) is unipotent and \(\beta_{k}(t)_{0}\) acts by eigenvalues of absolute value at least \(|1+z_{k}(t,X)|\) on each component \(\mathfrak{g}_{i_{1}}\wedge\mathfrak{g}_{i_{2}}\otimes\mathfrak{g}_{j}\) except \(\Lambda^{2}\mathfrak{g}_{1}\otimes\mathfrak{g}_{-2}\).
Since regularity precisely guarantees that the component of \(\Omega^{\omega}\) in \(\Lambda^{2}\mathfrak{g}_{1}\otimes\mathfrak{g}_{-2}\) vanishes, it follows that, if \((\beta_{k})\) is \(\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\)-shrinking for \(a\), then the curvature must vanish at \(\exp(\omega^{-1}(X))\boldsymbol{e}\) whenever it is well-defined. It remains to show that \((\beta_{k})\) is \(\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\)-shrinking for \(a\).

Because \(\beta_{k}\) is chosen to trap \(a^{k}\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\beta_{k}^{-1}\) inside of \(G_{-}\), the image \(Y_{k,t}\) of the tangent vector to \(a^{k}\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\beta_{k}^{-1}\) at \(t\) under the Maurer-Cartan form is precisely the projection of \(\mathrm{Ad}_{\beta_{k}(t)}(X)\) to \(\mathfrak{g}_{-}\). Writing \(\mu:=|x|^{2}+\bar{y}^{\top}I_{p-1,q}y\), we can see by direct computation that this projection is given by \[\begin{pmatrix}0&0&0&0\\ \frac{(1+\frac{k^{2}t^{2}\mu}{4})x+kt\mu-k\tau\mathrm{i}}{(1+z)^{2}}&0&0&0\\ \frac{1+\frac{k^{2}t^{2}\mu}{4}}{(1+z)^{2}}y&0&0&0\\ \frac{\tau\mathrm{i}}{|1+z|^{2}}&-\frac{(1+\frac{k^{2}t^{2}\mu}{4})\bar{x}+kt\mu+k\tau\mathrm{i}}{(1+\bar{z})^{2}}&-\frac{1+\frac{k^{2}t^{2}\mu}{4}}{(1+\bar{z})^{2}}\bar{y}^{\top}I_{p-1,q}&0\end{pmatrix}.\] Therefore, if we choose an inner product \(\mathrm{g}\) on \(\mathfrak{pu}(\mathrm{h}_{p,q})\) such that, on \(\mathfrak{g}_{-}\), it is given by \(\mathrm{g}(X,X):=|x|^{2}+\bar{y}^{\top}y+\tau^{2}\), we see that \[\mathrm{g}(Y_{k,t},Y_{k,t})=\frac{\left|(1+\frac{k^{2}t^{2}\mu}{4})x+kt\mu-k\tau\mathrm{i}\right|^{2}+(1+\frac{k^{2}t^{2}\mu}{4})^{2}\bar{y}^{\top}y+\tau^{2}}{|1+z|^{4}}=\frac{1}{|1+z|^{4}}\bigg{(}\left(\mathrm{g}(X,X)-2k\tau(\mathrm{Im}(x)-\tfrac{k\tau}{2})\right)+(2k\mu\mathrm{Re}(x))t+\frac{k^{2}\mu}{2}(|x|^{2}+\bar{y}^{\top}y-k\tau\mathrm{Im}(x)+2\mu)t^{2}+\frac{k^{3}\mu^{2}\mathrm{Re}(x)}{2}t^{3}+\frac{k^{4}\mu^{2}}{16}(|x|^{2}+\bar{y}^{\top}y)t^{4}\bigg{)}.\] Since \(\sqrt{|a+b|}\leq\sqrt{|a|}+\sqrt{|b|}\), it follows that \[\int_{0}^{1}\sqrt{\mathrm{g}(Y_{k,t},Y_{k,t})}\,\mathrm{d}t\leq\sqrt{\mathrm{g}(X,X)-2k\tau(\mathrm{Im}(x)-\tfrac{k\tau}{2})}\int_{0}^{1}\frac{1}{|1+z|^{2}}\mathrm{d}t+\sqrt{|2k\mu\mathrm{Re}(x)|}\int_{0}^{1}\frac{\sqrt{t}}{|1+z|^{2}}\mathrm{d}t+k\sqrt{|\tfrac{\mu}{2}(|x|^{2}+\bar{y}^{\top}y-k\tau\mathrm{Im}(x)+2\mu)|}\int_{0}^{1}\frac{t}{|1+z|^{2}}\mathrm{d}t+k|\mu|\sqrt{\tfrac{k|\mathrm{Re}(x)|}{2}}\int_{0}^{1}\frac{\sqrt{t}^{3}}{|1+z|^{2}}\mathrm{d}t+\frac{k^{2}|\mu|}{4}\sqrt{|x|^{2}+\bar{y}^{\top}y}\int_{0}^{1}\frac{t^{2}}{|1+z|^{2}}\mathrm{d}t\leq f_{k}\int_{0}^{1}\frac{1}{|1+z|^{2}}\mathrm{d}t+\frac{k^{2}|\mu|}{4}\sqrt{|x|^{2}+\bar{y}^{\top}y}\int_{0}^{1}\frac{t^{2}}{|1+z|^{2}}\mathrm{d}t,\] where \[f_{k}:=\sqrt{\mathrm{g}(X,X)-2k\tau(\mathrm{Im}(x)-\tfrac{k\tau}{2})}+\sqrt{|2k\mu\mathrm{Re}(x)|}+k\sqrt{|\tfrac{\mu}{2}(|x|^{2}+\bar{y}^{\top}y-k\tau\mathrm{Im}(x)+2\mu)|}+k|\mu|\sqrt{\tfrac{k|\mathrm{Re}(x)|}{2}},\] so to show that \((\beta_{k})\) is \(\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\)-shrinking, it suffices to show that both \(f_{k}\int_{0}^{1}\frac{1}{|1+z|^{2}}\mathrm{d}t\) and \(\frac{k^{2}|\mu|}{4}\sqrt{|x|^{2}+\bar{y}^{\top}y}\int_{0}^{1}\frac{t^{2}}{|1+z|^{2}}\mathrm{d}t\) go to \(0\) as \(k\to+\infty\).
Writing \(c_{k}:=\frac{\mu}{2}+(\mathrm{Im}(x)-\frac{k\tau}{2})^{2}\), we have that \[|1+z|^{2}=1+2\mathrm{Re}(z)+\mathrm{Re}(z)^{2}+\mathrm{Im}(z)^{2}\geq 1+2\mathrm{Re}(z)+\mathrm{Im}(z)^{2}=1+2kt(\mathrm{Re}(x)+\tfrac{kt\mu}{4})+k^{2}t^{2}(\mathrm{Im}(x)-\tfrac{k\tau}{2})^{2}=1+2kt\mathrm{Re}(x)+k^{2}t^{2}\left(\tfrac{\mu}{2}+(\mathrm{Im}(x)-\tfrac{k\tau}{2})^{2}\right)=1+2kt\mathrm{Re}(x)+k^{2}t^{2}c_{k},\] so if we assume \(k\) is sufficiently large so that \(c_{k}-\mathrm{Re}(x)^{2}>0\), then \(|1+z_{k}(t,X)|>0\) for all \(t\in[0,1]\). Thus, \[\int_{0}^{1}\frac{1}{|1+z_{k}(t,X)|^{2}}\mathrm{d}t\leq\int_{0}^{1}\frac{1}{1+2kt\mathrm{Re}(x)+k^{2}t^{2}c_{k}}\mathrm{d}t,\] which computes5 to \[\frac{1}{k\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\left(\arctan\left(\frac{kc_{k}+\mathrm{Re}(x)}{\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\right)-\arctan\left(\frac{\mathrm{Re}(x)}{\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\right)\right).\]

Footnote 5: Both integral computations were done with assistance from Maple.

This shrinks at a rate commensurate to \(k^{-2}\), since \(c_{k}\) is quadratic in \(k\), and \(f_{k}\) grows slower than \(k^{2}\) in \(k\), so we must have \(f_{k}\int_{0}^{1}\frac{1}{|1+z|^{2}}\mathrm{d}t\to 0\) as \(k\to+\infty\) whenever \(\tau\neq 0\). Similarly, we have \[\int_{0}^{1}\frac{t^{2}}{|1+z_{k}(t,X)|^{2}}\mathrm{d}t\leq\int_{0}^{1}\frac{t^{2}}{1+2kt\mathrm{Re}(x)+k^{2}t^{2}c_{k}}\mathrm{d}t,\] which computes to \[\frac{1}{k^{2}c_{k}}\Bigg{(}1-\frac{\mathrm{Re}(x)}{kc_{k}}\log(k^{2}c_{k}+2k\mathrm{Re}(x)+1)-\frac{c_{k}-2\mathrm{Re}(x)^{2}}{kc_{k}\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\left(\arctan\left(\frac{kc_{k}+\mathrm{Re}(x)}{\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\right)-\arctan\left(\frac{\mathrm{Re}(x)}{\sqrt{c_{k}-\mathrm{Re}(x)^{2}}}\right)\right)\Bigg{)},\] each term of which shrinks at a rate at least commensurate to \(k^{-4}\). In particular, since \(\frac{k^{2}|\mu|}{4}\sqrt{|x|^{2}+\bar{y}^{\top}y}\) is quadratic in \(k\), we see that \(\frac{k^{2}|\mu|}{4}\sqrt{|x|^{2}+\bar{y}^{\top}y}\int_{0}^{1}\frac{t^{2}}{|1+z|^{2}}\mathrm{d}t\to 0\) as \(k\to+\infty\) whenever \(\tau\neq 0\). In summary, we have shown that the arclength \(\int_{0}^{1}\sqrt{\mathrm{g}(Y_{k,t},Y_{k,t})}\mathrm{d}t\) of \(a^{k}\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\beta_{k}^{-1}\) goes to \(0\) as \(k\to+\infty\), so \((\beta_{k})\) is \(\gamma_{\mathrm{PU}(\mathrm{h}_{p,q})}\)-shrinking for \(a\) and the result follows.

## 4. Sprawls

We would like to construct Cartan geometries that are generated "as freely as possible" by the local behavior of an automorphism. We call such geometries _sprawls_, a term chosen both to evoke the idea of something extending as lazily as possible, and to sound like the word _span_, which plays a vaguely similar role for vector spaces. To explain the ideas involved effectively, we start by giving the set-up of the construction and describing a naive approach to achieving what we want. While this naive approach ultimately does not work, it serves to motivate the somewhat more complicated definition of the sprawl, which does exactly what we want it to do. After giving the appropriate definitions and verifying that they make sense, we will finally state and prove the key result of the section, Theorem 4.12, which gives a kind of "universal property" for sprawls that will allow us to compare Cartan geometries admitting automorphisms with similar local behavior.
### The set-up and a naive approach

Throughout this section, let \((\mathscr{G},\omega)\) be a Cartan geometry of type \((G,H)\) over a connected smooth manifold \(M\) with a distinguished element \(\boldsymbol{e}\in\mathscr{G}\) and an automorphism \(\alpha\in\mathrm{Aut}(\mathscr{G},\omega)\). Furthermore, we fix a connected open subset \(U\) of \(M\) containing both \(q_{{}_{H}}(\boldsymbol{e})\) and \(q_{{}_{H}}(\alpha(\boldsymbol{e}))\); this allows \(U\) to capture the local behavior of \(\alpha\) near \(\boldsymbol{e}\), in the sense that sufficiently small open neighborhoods of \(q_{{}_{H}}(\boldsymbol{e})\) will be mapped back into \(U\) by \(\alpha\).

Because \(\alpha\) is an automorphism, all of the iterates of \(q_{{}_{H}}^{-1}(U)\) under \(\alpha\) are geometrically equivalent, but inside \(\mathscr{G}\), they might glue together in ways that are unnecessary to still admit an automorphism that behaves like \(\alpha\) near \(\boldsymbol{e}\). As a simple example, consider the case where \((\mathscr{G},\omega)\) is the Riemannian geometry over a Euclidean torus, \(\alpha\) is a translation, and \(U\) is a small neighborhood of some point \(q_{{}_{H}}(\boldsymbol{e})\): while successive iterates of \(\alpha\) will push \(U\) back around to itself, as in Figure 1, lifting to the Euclidean plane demonstrates a situation with an automorphism exhibiting the same local behavior as \(\alpha\), but which does not push (the geometrically identical copy of) \(U\) back onto itself.

Our goal is, in essence, to construct a geometry that is generated "as freely as possible" by the local behavior of \(\alpha\). In other words, we would like to construct a geometry by taking iterates of \(U\) under \(\alpha\) and gluing them together as little as possible to still retain an automorphism with the same local behavior as \(\alpha\) near the distinguished point \(\boldsymbol{e}\). To specify these iterates in a way that avoids implicitly gluing them inside \(\mathscr{G}\), we define, for each \(i\in\mathbb{Z}\), a relabeling map \[\tilde{\alpha}^{i}:q_{{}_{H}}^{-1}(U)\to\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U)),\] where \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) is a diffeomorphic copy of \(q_{{}_{H}}^{-1}(U)\) with all of its points \(\boldsymbol{g}\) rewritten as \(\tilde{\alpha}^{i}(\boldsymbol{g})\). There is a natural right \(H\)-action on \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) given by, for each \(h\in H\), \(\tilde{\alpha}^{i}(\boldsymbol{g})h:=\tilde{\alpha}^{i}(\boldsymbol{g}h)\), which makes \(\tilde{\alpha}^{i}\) an \(H\)-equivariant map and, therefore, an isomorphism of principal \(H\)-bundles.

With this notation, we can specify what we are doing a bit more concretely. We will take the disjoint union \(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) and apply some minimal gluing (via an equivalence relation \(\sim\)) to obtain a new Cartan geometry for which \(\tilde{\alpha}:\tilde{\alpha}^{i}(\boldsymbol{g})\mapsto\tilde{\alpha}^{i+1}(\boldsymbol{g})\) is an automorphism with the same local behavior as \(\alpha\) near \(\boldsymbol{e}\).

Figure 1. The region \(U\) (highlighted in darker gray) is pushed back to itself in the torus by iterates of the translation \(\alpha\), but lifting the situation to the plane gives a situation with identical local behavior such that \(U\) never returns to itself after leaving.
Identifying \(\tilde{\alpha}^{0}(q_{{}_{H}}^{-1}(U))\) with \(q_{{}_{H}}^{-1}(U)\), so that we may think of \(\tilde{\alpha}^{0}(\boldsymbol{e})\in\tilde{\alpha}^{0}(q_{{}_{H}}^{-1}(U))\) as \(\boldsymbol{e}\in q_{{}_{H}}^{-1}(U)\), this amounts to requiring \(\tilde{\alpha}(\boldsymbol{e})=\alpha(\boldsymbol{e})\), since automorphisms of Cartan geometries over a connected base manifold are uniquely determined by their image on a single element. If \(\tilde{\alpha}^{i+1}(\boldsymbol{e})=\tilde{\alpha}^{i}(\tilde{\alpha}( \boldsymbol{e}))=\tilde{\alpha}^{i}(\alpha(\boldsymbol{e}))\), then for every (piecewise smooth) path \(\gamma:[0,1]\to q_{{}_{H}}^{-1}(U\cap\alpha(U))\) starting with \(\alpha(\boldsymbol{e})\), we must also have \(\tilde{\alpha}^{i+1}(\alpha^{-1}(\gamma(t)))=\tilde{\alpha}^{i}(\gamma(t))\) for all \(t\in[0,1]\), since \(\tilde{\alpha}^{i+1}(\alpha^{-1}(\gamma))\) and \(\tilde{\alpha}^{i}(\gamma)\) are paths with the same development and starting point. In other words, whatever this new Cartan geometry is, we must have adjacent iterates \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) and \(\tilde{\alpha}^{i+1}(q_{{}_{H}}^{-1}(U))\) glued together by identifying \(\tilde{\alpha}^{i}(\boldsymbol{g})\) with \(\tilde{\alpha}^{i+1}(\alpha^{-1}(\boldsymbol{g}))\) whenever \(q_{{}_{H}}(\boldsymbol{g})\) lies in the connected component of \(U\cap\alpha(U)\) containing \(\alpha(q_{{}_{H}}(\boldsymbol{e}))\). With this in mind, it is tempting to imagine that the minimal equivalence relation on \(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) that accomplishes these identifications between adjacent iterates is sufficient as well. Indeed, we can see that this gluing gives precisely the right answer in the torus example given above. Unfortunately, this naive gluing will not work in general. To see this, consider the Klein geometry \((\mathrm{I}(2),\omega_{{}_{\mathrm{I}(2)}})\) of type \((\mathrm{I}(2),\mathrm{O}(2))\) over \(\mathbb{R}^{2}\), corresponding to the Euclidean plane. Within this geometry, we choose a rotation \(\alpha\) with infinite order that fixes \(0\) and an open set \(U\) given by the union of a small open ball centered on \(0\) and an open sector of the plane that is disjoint from its image under \(\alpha\), as depicted in Figure 2. The identity element \((0,\mathds{1})\), which we take to be our distinguished element, is contained in \(q_{{}_{\mathrm{O}(2)}}^{-1}(U)\), as is \(\alpha(0,\mathds{1})\), since \(\alpha\) fixes \(0\). Under the naive gluing above, the iterates \(\tilde{\alpha}^{i}(q_{{}_{\mathrm{O}(2)}}^{-1}(U))\) all coincide over the small open ball around \(0\), but nowhere else. This becomes a problem whenever \(U\cap\alpha^{i}(U)\) has points that lie outside of that small open ball: if \(x\in U\cap\alpha^{i}(U)\) lies on the boundary of the open ball, then every neighborhood of \(\tilde{\alpha}^{i}(\alpha^{-i}(x))\) must intersect every neighborhood of \(\tilde{\alpha}^{0}(x)\cong x\) inside the open ball, so since \(\tilde{\alpha}^{i}(\alpha^{-i}(x))\) is not identified with \(x\) under the naive gluing, the resulting space is not even Hausdorff. We can, fortunately, salvage this idea with some slightly intricate modifications. Consider a path \(\gamma:[0,1]\to U\cap\alpha^{i}(U)\) that starts outside of the open ball and ends inside of it. 
Then, we get corresponding paths \(\tilde{\alpha}^{0}(\gamma)\cong\gamma\) and \(\tilde{\alpha}^{i}(\alpha^{-i}(\gamma))\) in \(\tilde{\alpha}^{0}(U)\) and \(\tilde{\alpha}^{i}(U)\), respectively, and we can lift these to paths \(\hat{\gamma}_{0}\) in \(\tilde{\alpha}^{0}(q_{{}_{\mathrm{O}(2)}}^{-1}(U))\) and \(\hat{\gamma}_{1}\) in \(\tilde{\alpha}^{i}(q_{{}_{\mathrm{O}(2)}}^{-1}(U))\) with the same development and endpoint. In particular, \(\hat{\gamma}_{0}\) and \(\hat{\gamma}_{1}\) must coincide inside the new Cartan geometry, if it exists, so that the concatenation \(\hat{\gamma}_{0}\star\overline{\hat{\gamma}_{1}}\) of \(\hat{\gamma}_{0}\) with the reverse of \(\hat{\gamma}_{1}\) is a loop that "backtracks" over itself. The new strategy, therefore, is to identify elements \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\) and \(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\) whenever we can find a path starting at \(\alpha^{i_{1}}(\boldsymbol{g}_{1})\) that only crosses between iterates at points identified under the naive gluing and which "backtracks" over itself to end up at \(\alpha^{i_{2}}(\boldsymbol{g}_{2})\). In the next subsection, we will formalize this correction to the naive gluing, which we will use to define the sprawl.

### The definition of the sprawl

To start, we provide a way of describing paths that only cross between iterates at the points identified under the naive gluing.

**Definition 4.1**.: A _\((U,\alpha,\boldsymbol{e})\)-incrementation6_ for \(\gamma:[0,1]\to M\) is a finite partition \(0=t_{0}<\cdots<t_{\ell}=1\) of \([0,1]\) together with a finite sequence of integers \(k_{0},\ldots,k_{\ell-1}\in\mathbb{Z}\) such that, for each \(0\leq j<\ell\), \(\gamma([t_{j},t_{j+1}])\subseteq\alpha^{k_{j}}(U)\), and for each \(0\leq j<\ell-1\), \(|k_{j}-k_{j+1}|=1\) and \(\gamma(t_{j+1})\) is in the connected component of \(\alpha^{k_{j}}(U)\cap\alpha^{k_{j+1}}(U)\) containing \(q_{{}_{H}}(\alpha^{\max(k_{j},k_{j+1})}(\boldsymbol{e}))\). The integers \(k_{0}\) and \(k_{\ell-1}\) are called the _initial label_ and _terminal label_, respectively.

Footnote 6: We will consistently drop reference to \(U\), \(\alpha\), and \(\boldsymbol{e}\) when they are to be understood from context. For example, we will typically just refer to an _incrementation_, rather than a \((U,\alpha,\boldsymbol{e})\)-incrementation.

**Definition 4.2**.: We say a path \(\gamma:[0,1]\to M\) is _\((U,\alpha,\boldsymbol{e})\)-incremented from \(i_{1}\) to \(i_{2}\)_ if and only if there is a \((U,\alpha,\boldsymbol{e})\)-incrementation for \(\gamma\) with initial label \(i_{1}\) and terminal label \(i_{2}\).

The basic idea for an incrementation of a path \(\gamma\) is to break it into segments \(\gamma([t_{j},t_{j+1}])\), and then label each such segment with a specific integer \(k_{j}\) such that \(\gamma([t_{j},t_{j+1}])\subseteq\alpha^{k_{j}}(U)\). This labeling is further required to only move up or down by \(1\) between adjacent segments, with the intersections occurring only in places which must be identified under the naive gluing. We have attempted to illustrate the concept in Figures 3 and 4.

Figure 2. The region \(U\) (highlighted in lighter gray) given by the union of an open ball and an open sector that is disjoint from its image under the rotation \(\alpha\), as well as a depiction of its intersection (highlighted in darker gray) with an iterate under \(\alpha\) where the overlap escapes the open ball.
Recall that a null-homotopy based at a point \(q_{{}_{H}}(\boldsymbol{g})\in M\) is a map \(c:[0,1]^{2}\to M\), given as \((s,t)\mapsto c_{s}(t)\), such that \[c_{s}(0)=c_{s}(1)=c_{1}(s)=q_{{}_{H}}(\boldsymbol{g})\] for all \(s\in[0,1]\). We will need to use a specific type of homotopy, called a _thin_ homotopy.

**Definition 4.3**.: A null-homotopy \(c:[0,1]^{2}\to M\) is said to be _thin_ if and only if \(c([0,1]^{2})=c_{0}([0,1])\). Consequently, a loop \(\gamma:[0,1]\to M\) is _thinly null-homotopic_ if and only if there exists a thin null-homotopy \(c\) based at \(\gamma(0)=\gamma(1)\) such that \(c_{0}=\gamma\).

Figure 3. A path \(\gamma\), highlighted in darker gray, in the manifold \(M\), as well as the region \(U\), highlighted in lighter gray, and its iterates under an automorphism \(\alpha\).

Figure 4. An incrementation for the path \(\gamma\) depicted in Figure 3.

A thin null-homotopy from a loop \(\gamma:[0,1]\to M\) to the constant loop at \(\gamma(0)=\gamma(1)\) deforms \(\gamma\) to a point while staying within its own image. The archetypical example of a thinly null-homotopic loop is the concatenation of a path with its reverse, so that the resulting loop "backtracks" over itself. Thin homotopies are geometrically useful in many contexts because thinly homotopic loops always have the same holonomy (see, for example, [1]). In particular, while we do not make explicit use of this outside of the appendix in the current version of the paper, it is worth noting that thinly null-homotopic loops always have trivial holonomy.

With incrementations and thin null-homotopies in hand, we can now define sprawl-equivalence.

**Definition 4.4**.: Two points \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) in the disjoint union \(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(U)\) are _sprawl-equivalent (in the base sense)_ if and only if \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=\alpha^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) and there exists a thinly null-homotopic loop \(\gamma:[0,1]\to M\) based at \(\gamma(0)=q_{{}_{H}}(\alpha^{i_{1}}(\boldsymbol{g}_{1}))=q_{{}_{H}}(\alpha^{i_{2}}(\boldsymbol{g}_{2}))=\gamma(1)\) incremented from \(i_{1}\) to \(i_{2}\).

**Definition 4.5**.: Two elements \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\) and \(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\) of the disjoint union \(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) are _sprawl-equivalent (in the bundle sense)_ if and only if \(\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})\) and \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) is sprawl-equivalent in the base sense to \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\).

Naturally, these notions are related: if \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\) is sprawl-equivalent in the bundle sense to \(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\), then \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) is sprawl-equivalent in the base sense to \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\). In other words, sprawl-equivalence in the bundle sense induces sprawl-equivalence in the base sense; with this justification, we will denote both of them by the same symbol \(\sim\). We should, of course, first verify that they are equivalence relations.
**Proposition 4.6**.: _Both notions of sprawl-equivalence are equivalence relations._

Proof. Let us start with sprawl-equivalence in the base sense, proving that the relation is reflexive, symmetric, and transitive. After this, the proof for the bundle version will follow easily.

For each \(i\in\mathbb{Z}\) and \(q_{{}_{H}}(\boldsymbol{g})\in U\), choosing \(\gamma\) to be the constant path at \(\alpha^{i}(q_{{}_{H}}(\boldsymbol{g}))\), our partition to be the trivial partition \(0=t_{0}<t_{1}=1\), and \(k_{0}=i\) shows us that \(\tilde{\alpha}^{i}(q_{{}_{H}}(\boldsymbol{g}))\sim\tilde{\alpha}^{i}(q_{{}_{H}}(\boldsymbol{g}))\).

By definition, if \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\), then there must be a thinly null-homotopic loop \(\gamma:[0,1]\to M\) based at \[\gamma(0)=q_{{}_{H}}(\alpha^{i_{1}}(\boldsymbol{g}_{1}))=q_{{}_{H}}(\alpha^{i_{2}}(\boldsymbol{g}_{2}))=\gamma(1)\] with an incrementation given by a partition \(0=t_{0}<\cdots<t_{\ell}=1\) and a sequence of integers \(i_{1}=k_{0},\ldots,k_{\ell-1}=i_{2}\in\mathbb{Z}\). Consider the reverse loop \(\bar{\gamma}:[0,1]\to M\) defined by \(t\mapsto\gamma(1-t)\); setting \(\bar{t}_{j}=1-t_{\ell-j}\) and \(\bar{k}_{j}=k_{\ell-1-j}\) for each \(j\), we get a reversed incrementation from \(i_{2}\) to \(i_{1}\) for the thinly null-homotopic loop \(\bar{\gamma}\), so \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\sim\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\).

Similarly, if we have sprawl-equivalences \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) and \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\sim\tilde{\alpha}^{i_{3}}(q_{{}_{H}}(\boldsymbol{g}_{3}))\), then there exist corresponding thinly null-homotopic loops \(\gamma\) and \(\gamma^{\prime}\), together with incrementations given by \(0=t_{0}<\cdots<t_{\ell}=1\) and \(i_{1}=k_{0},\ldots,k_{\ell-1}=i_{2}\in\mathbb{Z}\) for \(\gamma\), and \(0=t^{\prime}_{0}<\cdots<t^{\prime}_{\ell^{\prime}}=1\) and \(i_{2}=k^{\prime}_{0},\ldots,k^{\prime}_{\ell^{\prime}-1}=i_{3}\in\mathbb{Z}\) for \(\gamma^{\prime}\). To show that \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{3}}(q_{{}_{H}}(\boldsymbol{g}_{3}))\), consider the concatenated loop \(\gamma\star\gamma^{\prime}\). This is still thinly null-homotopic, and setting \[\tau_{j}=\begin{cases}\frac{t_{j}}{2}&\text{if $j<\ell$},\\ \frac{1+t^{\prime}_{j-\ell}}{2}&\text{if $j\geq\ell$}\end{cases}\] and \[r_{j}=\begin{cases}k_{j}&\text{if $j<\ell$},\\ k^{\prime}_{j-\ell}&\text{if $j\geq\ell$}\end{cases}\] for each \(j\), we get an incrementation for the concatenation \(\gamma\star\gamma^{\prime}\) comprised of the partition \(0=\tau_{0}<\cdots<\tau_{\ell+\ell^{\prime}}=1\) and labels \(i_{1}=r_{0},\ldots,r_{\ell+\ell^{\prime}-1}=i_{3}\in\mathbb{Z}\). In particular, \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{3}}(q_{{}_{H}}(\boldsymbol{g}_{3}))\). Thus, sprawl-equivalence in the base sense is an equivalence relation, and the proof for the bundle version essentially follows from this.
For each \(i\in\mathbb{Z}\) and \(\boldsymbol{g}\in q_{{}_{H}}^{-1}(U)\), \(\alpha^{i}(\boldsymbol{g})=\alpha^{i}(\boldsymbol{g})\) and \(\tilde{\alpha}^{i}(q_{{}_{H}}(\boldsymbol{g}))\sim\tilde{\alpha}^{i}(q_{{}_{H}}(\boldsymbol{g}))\) by the above, so \(\tilde{\alpha}^{i}(\boldsymbol{g})\sim\tilde{\alpha}^{i}(\boldsymbol{g})\). Likewise, \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\) if and only if \(\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})\) and \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\), so \(\alpha^{i_{2}}(\boldsymbol{g}_{2})=\alpha^{i_{1}}(\boldsymbol{g}_{1})\) and, because sprawl-equivalence in the base sense is symmetric, \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\sim\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\), so \(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\sim\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\). Finally, if we have \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\) and \(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\sim\tilde{\alpha}^{i_{3}}(\boldsymbol{g}_{3})\), then \[\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})=\alpha^{i_{3}}(\boldsymbol{g}_{3})\] and \[\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\sim\tilde{\alpha}^{i_{3}}(q_{{}_{H}}(\boldsymbol{g}_{3})),\] so by transitivity of the base version, we get \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{3}}(\boldsymbol{g}_{3})\). Sprawl-equivalence is precisely the corrected minimal gluing that we mentioned at the start of the section; as we shall see shortly, it allows us to glue the copies \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) together into a new principal \(H\)-bundle \(\mathscr{F}\) such that \(\tilde{\alpha}(\boldsymbol{e})\) coincides with \(\alpha(\boldsymbol{e})\). **Proposition 4.7**.: _The quotient space_ \[\mathscr{F}=\mathscr{F}(q_{{}_{H}}^{-1}(U),\alpha,\boldsymbol{e}):=\left(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\right)\Big{/}\sim\] _admits the structure of a (smooth) principal \(H\)-bundle over the quotient space_ \[q_{{}_{H}}(\mathscr{F})=q_{{}_{H}}(\mathscr{F})(U,\alpha,\boldsymbol{e}):=\left(\bigsqcup_{i\in\mathbb{Z}}\tilde{\alpha}^{i}(U)\right)\Big{/}\sim\,,\] _which is a smooth manifold._ Proof.: For each \(i\in\mathbb{Z}\), \(\tilde{\alpha}^{i}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i}(\boldsymbol{g}_{2})\) if and only if \(\boldsymbol{g}_{1}=\boldsymbol{g}_{2}\). In particular, each \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) is embedded in \(\mathscr{F}\) under the quotient by \(\sim\), so it makes sense to identify each \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) with its image in \(\mathscr{F}\). Similarly, each \(\tilde{\alpha}^{i}(U)\) naturally embeds into \(q_{{}_{H}}(\mathscr{F})\), so we can identify each \(\tilde{\alpha}^{i}(U)\) with its image in the quotient space \(q_{{}_{H}}(\mathscr{F})\).
For every element \(h\in H\), we have \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}( \boldsymbol{g}_{2})\) if and only if \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})h\sim\tilde{\alpha}^{i_{2}}( \boldsymbol{g}_{2})h\), and \(\tilde{\alpha}^{i}(\boldsymbol{g})\sim\tilde{\alpha}^{i}(\boldsymbol{g})h\) if and only if \(h\) is the identity element because otherwise \(\alpha^{i}(\boldsymbol{g})\neq\alpha^{i}(\boldsymbol{g})h\). Because of this, \(\mathscr{F}\) inherits a free right \(H\)-action that coincides with the smooth free right action of \(H\) on each \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\). Since \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}( \boldsymbol{g}_{2})\) implies \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\sim\tilde{\alpha}^{i_{ 2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\), we get a natural map \(q_{{}_{H}}:\mathscr{F}\to q_{{}_{H}}(\mathscr{F})\) given by \(q_{{}_{H}}(\tilde{\alpha}^{i}(\boldsymbol{g})):=\tilde{\alpha}^{i}(q_{{}_{H}} (\boldsymbol{g}))\). By definition, this coincides with the bundle map \(q_{{}_{H}}:\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\to\tilde{\alpha}^{i}(U)\) for each \(i\), so \(\mathscr{F}\) is a principal \(H\)-bundle over \(q_{{}_{H}}(\mathscr{F})\). It remains to show that \(q_{{}_{H}}(\mathscr{F})\) is a smooth manifold. Note that \(U\) naturally inherits a smooth structure from \(M\), and \(q_{{}_{H}}(\mathscr{F})\) is a union of embedded copies of \(U\) by definition. Moreover, \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}( \boldsymbol{g}_{2})\) implies \(\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})\), hence \(\boldsymbol{g}_{2}=\alpha^{i_{1}-i_{2}}(\boldsymbol{g}_{1})\), so the embedded copies of \(U\) are glued together in \(q_{{}_{H}}(\mathscr{F})\) along open sets by iterates of the diffeomorphism \(\alpha\). In particular, we just need to show that \(q_{{}_{H}}(\mathscr{F})\) is Hausdorff to verify that it admits the structure of a smooth manifold. To this end, suppose that \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) are distinct points of the quotient space \(q_{{}_{H}}(\mathscr{F})\). There are two possible cases: either \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\neq\alpha^{i_{2}}(q_{{}_{H}} (\boldsymbol{g}_{2}))\), or \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=\alpha^{i_{2}}(q_{{}_{H}}( \boldsymbol{g}_{2}))\) but no corresponding thinly null-homotopic loop incremented from \(i_{1}\) to \(i_{2}\) exists. In the first case, there exist disjoint open neighborhoods \(V_{1}\subseteq\alpha^{i_{1}}(U)\) of \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(V_{2}\subseteq\alpha^{i_{2}}(U)\) of \(\alpha^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) because \(M\) is Hausdorff, hence \(\tilde{\alpha}^{i_{1}}(\alpha^{-i_{1}}(V_{1}))\) and \(\tilde{\alpha}^{i_{2}}(\alpha^{-i_{2}}(V_{2}))\) are disjoint open neighborhoods of \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\), respectively. 
In the second case, let \(V\) be the path component of the point \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=\alpha^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) in the intersection \(\alpha^{i_{1}}(U)\cap\alpha^{i_{2}}(U)\), so that \(\tilde{\alpha}^{i_{1}}(\alpha^{-i_{1}}(V))\) is an open neighborhood of \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(\tilde{\alpha}^{i_{2}}(\alpha^{-i_{2}}(V))\) is an open neighborhood of \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\). These two neighborhoods must be disjoint: if there were a point \(q_{{}_{H}}(\boldsymbol{f})\) in their intersection, then there would be a path \[\zeta:[0,1]\to V\subseteq\alpha^{i_{1}}(U)\cap\alpha^{i_{2}}(U)\] from \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=(\alpha^{i_{1}}\circ(\tilde{\alpha}^{i_{1}})^{-1})(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1})))\) to \((\alpha^{i_{1}}\circ(\tilde{\alpha}^{i_{1}})^{-1})(q_{{}_{H}}(\boldsymbol{f}))\), so if \(\gamma\) were the thinly null-homotopic loop based at \[(\alpha^{i_{1}}\circ(\tilde{\alpha}^{i_{1}})^{-1})(q_{{}_{H}}(\boldsymbol{f}))=(\alpha^{i_{2}}\circ(\tilde{\alpha}^{i_{2}})^{-1})(q_{{}_{H}}(\boldsymbol{f}))\] incremented from \(i_{1}\) to \(i_{2}\) that must exist for the point \(q_{{}_{H}}(\boldsymbol{f})\) to lie in the intersection of \(\tilde{\alpha}^{i_{1}}(U)\) and \(\tilde{\alpha}^{i_{2}}(U)\), then the concatenation given by \(\zeta\star\gamma\star\bar{\zeta}\) would be a thinly null-homotopic loop based at the point \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=\alpha^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) and incremented from \(i_{1}\) to \(i_{2}\). This would be a contradiction, since \(\tilde{\alpha}^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))\) and \(\tilde{\alpha}^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) are distinct by assumption, so \(\tilde{\alpha}^{i_{1}}(\alpha^{-i_{1}}(V))\) and \(\tilde{\alpha}^{i_{2}}(\alpha^{-i_{2}}(V))\) must be disjoint. Thus, \(q_{{}_{H}}(\mathscr{F})\) is Hausdorff. To imbue this new principal \(H\)-bundle with the structure of a Cartan geometry, we will use a natural map from \(\mathscr{F}\) to \(\mathscr{G}\) in order to pull the Cartan connection on \(\mathscr{G}\) back to \(\mathscr{F}\). This map, called the _sprawl map_, is precisely the one obtained by identifying each \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) embedded in \(\mathscr{F}\) with the corresponding \(\alpha^{i}(q_{{}_{H}}^{-1}(U))\) in \(\mathscr{G}\). **Definition 4.8**.: The map \(\sigma:\mathscr{F}\to\mathscr{G}\) given by \(\tilde{\alpha}^{i}(\boldsymbol{g})\mapsto\alpha^{i}(\boldsymbol{g})\) is called the _sprawl map_ for \((\mathscr{G},\omega)\). Before moving on to defining the sprawl, let us make two observations about the sprawl map. First, \(\sigma\) is well-defined: \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\) only if \(\sigma(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1}))=\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})=\sigma(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2}))\), so sprawl-equivalent elements have the same image under \(\sigma\). Second, \(\sigma\) is an \(H\)-equivariant local diffeomorphism, since it coincides with the natural \(H\)-equivariant diffeomorphism between \(\tilde{\alpha}^{i}(q_{{}_{H}}^{-1}(U))\) and \(\alpha^{i}(q_{{}_{H}}^{-1}(U))\) for each \(i\in\mathbb{Z}\). With that, we can finally define the sprawl.
**Definition 4.9**.: The _sprawl of \((q_{{}_{H}}^{-1}(U),\omega)\) generated by \(\alpha\) from \(\boldsymbol{e}\)_ is the Cartan geometry \((\mathscr{F},\sigma^{*}\omega)\) of type \((G,H)\) over \(q_{{}_{H}}(\mathscr{F})\), where \(\sigma\) is the sprawl map. Crucially, note that we have constructed \((\mathscr{F},\sigma^{*}\omega)\) in such a way as to make the map \[\tilde{\alpha}:\mathscr{F}\to\mathscr{F},\,\tilde{\alpha}^{i}(\boldsymbol{g})\mapsto\tilde{\alpha}^{i+1}(\boldsymbol{g})\] into an automorphism. Indeed, \(\sigma\) naturally satisfies \(\sigma\circ\tilde{\alpha}=\alpha\circ\sigma\), so \[\tilde{\alpha}^{*}(\sigma^{*}\omega)=(\sigma\circ\tilde{\alpha})^{*}\omega=(\alpha\circ\sigma)^{*}\omega=\sigma^{*}(\alpha^{*}\omega)=\sigma^{*}\omega.\] Moreover, \(\tilde{\alpha}\) and \(\alpha\) must coincide on the distinguished element \(\boldsymbol{e}\) under the identification between \(\tilde{\alpha}^{0}(q_{{}_{H}}^{-1}(U))\) and \(q_{{}_{H}}^{-1}(U)\), so \(\tilde{\alpha}\) has the same local behavior as \(\alpha\) on \(q_{{}_{H}}^{-1}(U)\) near \(\boldsymbol{e}\). ### The universal property of sprawls We would like to think of the automorphism \(\tilde{\alpha}\) on the sprawl \((\mathscr{F},\sigma^{*}\omega)\) as a kind of universal example of an automorphism with the same behavior as \(\alpha\) near \(\boldsymbol{e}\). Theorem 4.12 will make precise what we mean by "universal example", but first, we will need two lemmas. First, we need to show that lifts of incremented paths to \(\mathscr{G}\) further lift to paths on \(\mathscr{F}\) via the sprawl map, and that the choice of lift only depends on the initial label of the underlying incrementation. **Lemma 4.10**.: _If \(\gamma:[0,1]\to\mathscr{G}\) is a path such that its image \(q_{{}_{H}}(\gamma)\) in \(M\) has an incrementation, then there exists a lift \(\tilde{\gamma}:[0,1]\to\mathscr{F}\) of \(\gamma\), so that \(\sigma\circ\tilde{\gamma}=\gamma\). Moreover, this choice of lift only depends on the initial label of the incrementation of \(q_{{}_{H}}(\gamma)\)._ Proof.: Suppose that the incrementation of \(q_{{}_{H}}(\gamma)\) is the one given by the partition \(0=t_{0}<\cdots<t_{\ell}=1\) and labels \(k_{0},\ldots,k_{\ell-1}\in\mathbb{Z}\). We can construct a path \(\tilde{\gamma}\) in \(\mathscr{F}\) as follows. First, let us direct our attention to \(\alpha^{k_{0}}(q_{{}_{H}}^{-1}(U))\), where the path \(\gamma\) starts. When restricted to \(\tilde{\alpha}^{k_{0}}(q_{{}_{H}}^{-1}(U))\), \(\sigma\) coincides with the identification between \(\tilde{\alpha}^{k_{0}}(q_{{}_{H}}^{-1}(U))\) and \(\alpha^{k_{0}}(q_{{}_{H}}^{-1}(U))\), so we can simply define \[\tilde{\gamma}|_{[0,t_{1}]}:=(\sigma|_{\tilde{\alpha}^{k_{0}}(q_{{}_{H}}^{-1}(U))})^{-1}\circ\gamma|_{[0,t_{1}]}.\] Next, the incrementation tells us that \(q_{{}_{H}}(\gamma(t_{1}))\) is in the connected component of \(\alpha^{\max(k_{0},k_{1})}(q_{{}_{H}}(\boldsymbol{e}))\) in \(\alpha^{k_{0}}(U)\cap\alpha^{k_{1}}(U)\), so that the constant path at \(q_{{}_{H}}(\gamma(t_{1}))\) is a thinly null-homotopic loop incremented from \(k_{0}\) to \(k_{1}\).
In particular, this tells us that \(\tilde{\gamma}(t_{1})\in\tilde{\alpha}^{k_{0}}(q_{{}_{H}}^{-1}(U))\cap\tilde{ \alpha}^{k_{1}}(q_{{}_{H}}^{-1}(U))\), so that we can extend the path \(\tilde{\gamma}\) by again restricting to where \(\sigma\) is a diffeomorphism: \[\tilde{\gamma}|_{[t_{1},t_{2}]}:=(\sigma|_{\tilde{\alpha}^{k_{1}}(q_{{}_{H}}^{ -1}(U))})^{-1}\circ\gamma|_{[t_{1},t_{2}]}.\] By iterating this procedure, defining \[\tilde{\gamma}|_{[t_{j},t_{j+1}]}:=(\sigma|_{\tilde{\alpha}^{k_{j}}(q_{{}_{H}} ^{-1}(U))})^{-1}\circ\gamma|_{[t_{j},t_{j+1}]}\] for each \(j\), we get a well-defined lift \(\tilde{\gamma}\) of \(\gamma\) to \(\mathscr{F}\), with \(\sigma\circ\tilde{\gamma}=\gamma\). Now, suppose that \(\zeta:[0,1]\to\mathscr{F}\) is another lift of \(\gamma\) to \(\mathscr{F}\), constructed in the same way from a possibly different incrementation of \(q_{{}_{H}}(\gamma)\). Then, by definition, we would again have \(\sigma\circ\zeta=\gamma\), and since \(\sigma\) is a geometric map, this means that \(\zeta\), \(\tilde{\gamma}\), and \(\gamma\) would all have the same development: \(\zeta_{G}=\tilde{\gamma}_{G}=\gamma_{G}\). In particular, since the starting points of \(\zeta\) and \(\tilde{\gamma}\) are uniquely determined by the initial label for \(q_{{}_{H}}(\gamma)\), we must have \(\zeta=\tilde{\gamma}\) if their corresponding incrementations have the same initial label, since then they have the same starting point and the same developments. Our second lemma shows us that development completely determines whether or not a path is a thinly null-homotopic loop. **Lemma 4.11**.: _A path \(\gamma:[0,1]\to\mathscr{G}\) is a thinly null-homotopic loop if and only if its development \(\gamma_{G}:[0,1]\to G\) is._ Proof.: Suppose \(\gamma\) is a thinly null-homotopic loop in \(\mathscr{G}\). Up to smooth reparametrization, we may assume that both \(\gamma\) and the thin null-homotopy \(c:[0,1]^{2}\to\mathscr{G}\) are smooth. Because \(c\) is thin, the image of \(c\) is at most one-dimensional, so \(c^{*}\omega\) satisfies \(\mathrm{d}(c^{*}\omega)+\frac{1}{2}[c^{*}\omega,c^{*}\omega]=0\). By the fundamental theorem of nonabelian calculus (Theorem 7.14 in Chapter 3 of [13]), it follows that there is a unique smooth map \(c_{G}:[0,1]^{2}\to G\) such that both \((c_{G})_{0}(0)=e\) and \(c_{G}^{*}\omega_{{}_{G}}=c^{*}\omega\); because \(\gamma=c_{0}\) and \(\gamma_{G}(0)=e=(c_{G})_{0}(0)\), this must also satisfy \((c_{G})_{0}=\gamma_{G}\). Since \(c_{G}([0,1]^{2})=(c_{G})_{0}([0,1])\) and \(c_{G}\) is constant along \([0,1]\times\{0\}\), \(\{1\}\times[0,1]\), and \([0,1]\times\{1\}\), we see that \(c_{G}\) is a thin null-homotopy from \(\gamma_{G}\) to the constant path at \(e\). Conversely, suppose \(\gamma_{G}\) is a thinly null-homotopic loop. Again, up to smooth reparametrization, we may assume that both \(\gamma\) and the thin null-homotopy \(c_{G}:[0,1]^{2}\to G\) with \((c_{G})_{0}=\gamma_{G}\) are smooth. Our strategy is essentially to just modify the local version of the fundamental theorem of nonabelian calculus to show that a map \(c:[0,1]^{2}\to\mathscr{G}\) with \(c^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\) exists locally, then build the map from these local pieces starting at \(c_{0}(0)=\gamma(0)\). 
Since such a map \(c\) must be constant along \([0,1]\times\{0\}\), \(\{1\}\times[0,1]\), and \([0,1]\times\{1\}\), and \((c_{0})_{G}=(c_{G})_{0}=\gamma_{G}\), it will be a null-homotopy from \(\gamma\) to \(\gamma(0)\) if it exists, and the image of \(c\) cannot leave the image of \(c_{0}=\gamma\) because the image of \(c_{G}\) is contained in the image of \(\gamma_{G}\), so \(c\) is necessarily a thin null-homotopy. Emulating the proof of Theorem 6.1 in Chapter 3 of [13], we consider the projections \(\pi_{\mathscr{G}}:[0,1]^{2}\times\mathscr{G}\to\mathscr{G}\) and \(\pi_{[0,1]^{2}}:[0,1]^{2}\times\mathscr{G}\to[0,1]^{2}\). Setting \(\zeta:=(c_{G}\circ\pi_{[0,1]^{2}})^{*}\omega_{{}_{G}}-\pi_{\mathscr{G}}^{*}\omega\), we see that \(\pi_{[0,1]^{2}\,*}\) gives a linear isomorphism from \(\ker(\zeta)\) to the tangent spaces of \([0,1]^{2}\), so that \(\ker(\zeta)\) is a two-dimensional distribution. Moreover, for \(\Omega:=\mathrm{d}\omega+\frac{1}{2}[\omega,\omega]\), \[\mathrm{d}\zeta =(c_{G}\circ\pi_{[0,1]^{2}})^{*}\mathrm{d}\omega_{{}_{G}}-\pi_{ \mathscr{G}}^{*}\mathrm{d}\omega\] \[=-\frac{1}{2}(c_{G}\circ\pi_{[0,1]^{2}})^{*}[\omega_{{}_{G}}, \omega_{{}_{G}}]+\frac{1}{2}\pi_{\mathscr{G}}^{*}[\omega,\omega]-\pi_{\mathscr{ G}}^{*}\Omega\] \[=-\frac{1}{2}[\zeta+\pi_{\mathscr{G}}^{*}\omega,\zeta+\pi_{ \mathscr{G}}^{*}\omega]+\frac{1}{2}\pi_{\mathscr{G}}^{*}[\omega,\omega]-\pi_{ \mathscr{G}}^{*}\Omega\] \[=-\frac{1}{2}([\zeta,\zeta]+[\pi_{\mathscr{G}}^{*}\omega,\zeta] +[\zeta,\pi_{\mathscr{G}}^{*}\omega])-\pi_{\mathscr{G}}^{*}\Omega.\] Since \(c_{G}\) has rank at most one, \((\pi_{\mathscr{G}})_{*}\ker(\zeta)\) is at most one-dimensional, so \(\pi_{\mathscr{G}}^{*}\Omega\) must vanish on \(\ker(\zeta)\). The rest of the expression for \(\mathrm{d}\zeta\) above is a sum of terms formed by bracketing with \(\zeta\), so it must vanish on \(\ker(\zeta)\) as well. Thus, \(\ker(\zeta)\) is integrable. If \(N\) is a leaf of \(\ker(\zeta)\) through \(((s,t),\mathscr{G})\in[0,1]^{2}\times\mathscr{G}\), then \(\pi_{[0,1]^{2}\,*}\) gives a linear isomorphism from the tangent space of \(N\) at \(((s,t),\mathscr{G})\) to the tangent space of \([0,1]^{2}\) at \((s,t)\), so there is a neighborhood \(V\) of \((s,t)\) on which we get a smooth inverse \(f:V\to N\) to \(\pi_{[0,1]^{2}}\) such that \(f(s,t)=((s,t),\mathscr{G})\). Thus, \[0=f^{*}\zeta =f^{*}((c_{G}\circ\pi_{[0,1]^{2}})^{*}\omega_{{}_{G}}-\pi_{ \mathscr{G}}^{*}\omega)\] \[=f^{*}\pi_{[0,1]^{2}}^{*}(c_{G}^{*}\omega_{{}_{G}})-f^{*}\pi_{ \mathscr{G}}^{*}\omega\] \[=(\pi_{[0,1]^{2}}\circ f)^{*}(c_{G}^{*}\omega_{{}_{G}})-(\pi_{ \mathscr{G}}\circ f)^{*}\omega\] \[=c_{G}^{*}\omega_{{}_{G}}-(\pi_{\mathscr{G}}\circ f)^{*}\omega,\] so \(c|_{V}:=\pi_{\mathscr{G}}\circ f:V\subseteq[0,1]^{2}\to\mathscr{G}\) satisfies \((c|_{V})^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\). For each \(s\in[0,1]\), let \(c_{s}:[0,1]\to\mathscr{G}\) be the unique path with \(c_{s}(0)=\gamma(0)\) and \((c_{s})_{G}=(c_{G})_{s}\); since \(c_{0}=\gamma\) is well-defined and each \(c_{s}\) must stay within the image of \(c_{0}\), these paths are well-defined as well. If there is a map \(c:[0,1]^{2}\to\mathscr{G}\) with \(c^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\) and \(c_{0}(0)=\gamma(0)\), then it must satisfy \(c|_{\{s\}\times[0,1]}=c_{s}\) for each \(s\), so we just need to verify that \(c:(s,t)\mapsto c_{s}(t)\) works as our map. 
To do this, choose an open neighborhood \(V_{(s,t)}\) for each \((s,t)\in[0,1]^{2}\) such that we get a map \(c|_{V_{(s,t)}}\) as above with \((c|_{V_{(s,t)}})(s,t):=c_{s}(t)\) and \((c|_{V_{(s,t)}})^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\). This lets us cover each \(\{s\}\times[0,1]\) with open sets on which a map satisfying the desired conditions exists, and these maps \(c|_{V_{(s,t)}}\) would necessarily agree on overlaps along \(\{s\}\times[0,1]\) because, by definition, \(c_{s}\) is the unique path with \(c_{s}(0)=\gamma(0)\) and \((c_{s})_{G}=(c_{G})_{s}\). Thus, for each \(s\in[0,1]\), setting \(V_{s}:=\bigcup_{t\in[0,1]}V_{(s,t)}\), we get a map \(c|_{V_{s}}\) on an open neighborhood of \(\{s\}\times[0,1]\) such that \((c|_{V_{s}})^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\) and \((c|_{V_{s}})(s,0)=\gamma(0)\). From here, we can glue the maps \(c|_{V_{s}}\) together along their overlaps to get \(c\), since the \(c|_{V_{s}}\) must necessarily coincide on \([0,1]\times\{0\}\) because they are constant along this interval. Thus, we get a map \(c:[0,1]^{2}\to\mathscr{G}\) satisfying \(c^{*}\omega=c_{G}^{*}\omega_{{}_{G}}\) and \(c_{0}(0)=\gamma(0)\), which must be a thin null-homotopy by the argument above. With these lemmas in hand, let us finally explain what the following theorem is meant to tell us. Recall that, in Definition 4.9, we refer to the Cartan geometry \((\mathscr{F},\sigma^{*}\omega)\) as "the sprawl of \((q_{{}_{H}}^{-1}(U),\omega)\) generated by \(\alpha\) from \(\boldsymbol{e}\)". Ostensibly, however, \((q_{{}_{H}}^{-1}(U),\omega)\), \(\alpha\), and \(\boldsymbol{e}\) are not enough to determine the geometric structure of the sprawl: the Cartan connection is given explicitly in terms of the sprawl map \(\sigma\) for \((\mathscr{G},\omega)\), and the topology of \(\mathscr{F}\) is determined by particular null-homotopies in \(M\). We would like to show that, in truth, the sprawl really is uniquely determined by \((q_{{}_{H}}^{-1}(U),\omega)\), the distinguished element \(\boldsymbol{e}\in q_{{}_{H}}^{-1}(U)\), and the behavior of \(\alpha\) on them. To do this, suppose \((\mathscr{Q},\upsilon)\) is another Cartan geometry of type \((G,H)\) that happens to have an open set geometrically identical to \((q_{{}_{H}}^{-1}(U),\omega)\), meaning that there is a geometric embedding \(\psi:(q_{{}_{H}}^{-1}(U),\omega)\hookrightarrow(\mathscr{Q},\upsilon)\). Furthermore, suppose it has an automorphism \(\varphi\in\operatorname{Aut}(\mathscr{Q},\upsilon)\) that behaves exactly as \(\alpha\) does on the distinguished element \(\boldsymbol{e}\) under the identification given by the geometric embedding \(\psi\); in other words, \(\varphi(\psi(\boldsymbol{e}))=\psi(\alpha(\boldsymbol{e}))\). If the sprawl truly is uniquely determined by \((q_{{}_{H}}^{-1}(U),\omega)\), \(\alpha\), and \(\boldsymbol{e}\), then the sprawl of \((\psi(q_{{}_{H}}^{-1}(U)),\upsilon)\) generated by \(\varphi\) from \(\psi(\boldsymbol{e})\) should be geometrically isomorphic to \((\mathscr{F},\sigma^{*}\omega)\) in some natural way. The following theorem shows exactly this; indeed, it shows that the embedding \(\psi\) uniquely extends to the new sprawl map for \((\mathscr{Q},\upsilon)\) from \((\mathscr{F},\sigma^{*}\omega)\). **Theorem 4.12**.: _Let \((\mathscr{Q},\upsilon)\) be another Cartan geometry of type \((G,H)\), with an automorphism \(\varphi\in\operatorname{Aut}(\mathscr{Q},\upsilon)\)._
_If_ \[\psi|_{q_{{}_{H}}^{-1}(U)}:(\tilde{\alpha}^{0}(q_{{}_{H}}^{-1}(U)),\sigma^{*}\omega)\cong(q_{{}_{H}}^{-1}(U),\omega)\hookrightarrow(\mathscr{Q},\upsilon)\] _is a geometric embedding such that \(\varphi((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{e}))=(\psi|_{q_{{}_{H}}^{-1}(U)})(\alpha(\boldsymbol{e}))\), then the embedding \(\psi|_{q_{{}_{H}}^{-1}(U)}\) has a unique extension to a geometric map \(\psi:(\mathscr{F},\sigma^{*}\omega)\to(\mathscr{Q},\upsilon)\) from the sprawl of \((q_{{}_{H}}^{-1}(U),\omega)\) generated by \(\alpha\) from \(\boldsymbol{e}\) into \((\mathscr{Q},\upsilon)\) such that \(\psi\circ\tilde{\alpha}=\varphi\circ\psi\)._ Proof.: If the desired extension to \(\mathscr{F}\) exists, then it must be of the form \(\psi:\tilde{\alpha}^{i}(\boldsymbol{g})\mapsto\varphi^{i}((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{g}))\), so uniqueness is immediate and \[(\psi^{*}\upsilon)_{\tilde{\alpha}^{i}(\boldsymbol{g})}=\psi^{*}(\upsilon_{\varphi^{i}(\psi(\boldsymbol{g}))})=\psi^{*}(\varphi^{-i})^{*}(\upsilon_{\psi(\boldsymbol{g})})=(\varphi^{-i}\circ\psi)^{*}(\upsilon_{\psi(\boldsymbol{g})})\] \[=(\psi\circ\tilde{\alpha}^{-i})^{*}(\upsilon_{\psi(\boldsymbol{g})})=(\tilde{\alpha}^{-i})^{*}\psi^{*}(\upsilon_{\psi(\boldsymbol{g})})=(\tilde{\alpha}^{-i})^{*}(\sigma^{*}\omega_{\boldsymbol{g}})\] \[=(\sigma^{*}\omega)_{\tilde{\alpha}^{i}(\boldsymbol{g})},\] hence \(\psi\) must be a geometric map as well. Thus, we just need to show that an extension of this form is well-defined. To this end, suppose \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\sim\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})\), so that \(\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})\) and there exists a thinly null-homotopic loop \(q_{{}_{H}}(\gamma):[0,1]\to M\) based at the point \(\alpha^{i_{1}}(q_{{}_{H}}(\boldsymbol{g}_{1}))=\alpha^{i_{2}}(q_{{}_{H}}(\boldsymbol{g}_{2}))\) incremented from \(i_{1}\) to \(i_{2}\). Since the image of a null-homotopy is contractible, we can lift \(q_{{}_{H}}(\gamma)\) to a thinly null-homotopic loop \(\gamma:[0,1]\to\mathscr{G}\) based at \(\alpha^{i_{1}}(\boldsymbol{g}_{1})=\alpha^{i_{2}}(\boldsymbol{g}_{2})\), and by Lemma 4.10, we can further lift to a path \(\tilde{\gamma}:[0,1]\to\mathscr{F}\) starting at \(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1})\). Since \(\gamma_{G}=\tilde{\gamma}_{G}\), \(\tilde{\gamma}\) is again a thinly null-homotopic loop by Lemma 4.11. Our strategy to show that \[\psi(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1}))=\psi(\tilde{\gamma}(0))=\psi(\tilde{\gamma}(1))=\psi(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2}))\] is to construct a well-defined path \(\beta:[0,1]\to\mathscr{Q}\) that agrees with what the composite \(\psi\circ\tilde{\gamma}\) must be if \(\psi\) is well-defined; because we will have \(\beta_{G}=\tilde{\gamma}_{G}\), \(\beta\) will be a thinly null-homotopic loop by Lemma 4.11, hence \[\psi(\tilde{\gamma}(0))=\beta(0)=\beta(1)=\psi(\tilde{\gamma}(1)).\] We construct the path \(\beta:[0,1]\to\mathscr{Q}\) along the lines of the proof of Lemma 4.10. Let the incrementation of \(q_{{}_{H}}(\gamma)\) be given by the partition \(0=t_{0}<\cdots<t_{\ell}=1\) and labels \(i_{1}=k_{0},\ldots,k_{\ell-1}=i_{2}\in\mathbb{Z}\). To start, this means that \(\tilde{\gamma}([0,t_{1}])\subseteq\tilde{\alpha}^{i_{1}}(q_{{}_{H}}^{-1}(U))\), since \(\sigma(\tilde{\gamma})=\gamma\) by definition.
Whenever we restrict \(\psi\) to a given \(\tilde{\alpha}^{k}(q_{{}_{H}}^{-1}(U))\), we get a well-defined geometric embedding \(\psi|_{\tilde{\alpha}^{k}(q_{{}_{H}}^{-1}(U))}\), which by definition is given by \[\psi|_{\tilde{\alpha}^{k}(q_{{}_{H}}^{-1}(U))}:=\varphi^{k}\circ(\psi|_{q_{{}_{H}}^{-1}(U)})\circ\tilde{\alpha}^{-k}|_{\tilde{\alpha}^{k}(q_{{}_{H}}^{-1}(U))}.\] Therefore, it is valid to define \(\beta|_{[0,t_{1}]}:=\psi|_{\tilde{\alpha}^{i_{1}}(q_{{}_{H}}^{-1}(U))}\circ\tilde{\gamma}|_{[0,t_{1}]}\). At this point, we make a key observation: because the elements \(\tilde{\alpha}(\boldsymbol{e})\) and \(\alpha(\boldsymbol{e})\) are identified in \((\tilde{\alpha}^{0}(q_{{}_{H}}^{-1}(U)),\sigma^{*}\omega)\cong(q_{{}_{H}}^{-1}(U),\omega)\) and \((\psi|_{q_{{}_{H}}^{-1}(U)})(\tilde{\alpha}(\boldsymbol{e}))=\varphi((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{e}))\), the geometric embeddings \(\psi|_{q_{{}_{H}}^{-1}(U)}\) and \[\psi|_{\tilde{\alpha}(q_{{}_{H}}^{-1}(U))}=\varphi\circ(\psi|_{q_{{}_{H}}^{-1}(U)})\circ\tilde{\alpha}^{-1}|_{\tilde{\alpha}(q_{{}_{H}}^{-1}(U))}\] must coincide over the connected component of \(q_{{}_{H}}(\tilde{\alpha}(\boldsymbol{e}))=q_{{}_{H}}(\alpha(\boldsymbol{e}))\) in the intersection \(U\cap\tilde{\alpha}(U)\) in \(q_{{}_{H}}(\mathscr{F})\), since \[(\psi|_{\tilde{\alpha}(q_{{}_{H}}^{-1}(U))})(\tilde{\alpha}(\boldsymbol{e}))=\varphi((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{e}))=(\psi|_{q_{{}_{H}}^{-1}(U)})(\tilde{\alpha}(\boldsymbol{e})).\] Using iterates of \(\tilde{\alpha}\) and \(\varphi\) to move to the other copies of \(q_{{}_{H}}^{-1}(U)\), we see that, for each \(k\), \(\psi|_{\tilde{\alpha}^{k}(q_{{}_{H}}^{-1}(U))}\) and \(\psi|_{\tilde{\alpha}^{k+1}(q_{{}_{H}}^{-1}(U))}\) must coincide over the connected component of \(q_{{}_{H}}(\tilde{\alpha}^{k+1}(\boldsymbol{e}))\) in \(\tilde{\alpha}^{k}(U)\cap\tilde{\alpha}^{k+1}(U)\). By definition, the incrementation of \(q_{{}_{H}}(\gamma)\) tells us that \(q_{{}_{H}}(\gamma)(t_{1})\) lies in the connected component of \(\alpha^{\max(k_{0},k_{1})}(q_{{}_{H}}(\boldsymbol{e}))\) in \(\alpha^{k_{0}}(U)\cap\alpha^{k_{1}}(U)\), so \(\tilde{\gamma}(t_{1})\) must lie over the connected component of \(\tilde{\alpha}^{\max(k_{0},k_{1})}(q_{{}_{H}}(\boldsymbol{e}))\) in \(\tilde{\alpha}^{k_{0}}(U)\cap\tilde{\alpha}^{k_{1}}(U)\). In particular, \(\psi|_{\tilde{\alpha}^{k_{0}}(q_{{}_{H}}^{-1}(U))}\) and \(\psi|_{\tilde{\alpha}^{k_{1}}(q_{{}_{H}}^{-1}(U))}\) must coincide on \(\tilde{\gamma}(t_{1})\) because \(|k_{0}-k_{1}|=1\), so we can extend \(\beta\) to \([0,t_{2}]\) by defining \(\beta|_{[t_{1},t_{2}]}:=\psi|_{\tilde{\alpha}^{k_{1}}(q_{{}_{H}}^{-1}(U))}\circ\tilde{\gamma}|_{[t_{1},t_{2}]}\). By iterating this procedure, defining \[\beta|_{[t_{j},t_{j+1}]}:=\psi|_{\tilde{\alpha}^{k_{j}}(q_{{}_{H}}^{-1}(U))}\circ\tilde{\gamma}|_{[t_{j},t_{j+1}]}\] for each \(j\), we get a well-defined path \(\beta\) that must be of the form \(\psi\circ\tilde{\gamma}\) if the extension \(\psi\) is well-defined.
In particular, \(\beta\) is a path from \(\beta(0)=\varphi^{i_{1}}((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{g}_{1}))\) to \(\beta(1)=\varphi^{i_{2}}((\psi|_{q_{{}_{H}}^{-1}(U)})(\boldsymbol{g}_{2}))\) with \(\beta_{G}=\tilde{\gamma}_{G}\), so it must be a thinly null-homotopic loop based at \[\psi(\tilde{\alpha}^{i_{1}}(\boldsymbol{g}_{1}))=\beta(0)=\beta(1)=\psi(\tilde{\alpha}^{i_{2}}(\boldsymbol{g}_{2})).\qed\] This theorem is, as it turns out, remarkably well-suited to placing strong restrictions on which Cartan geometries can admit certain types of automorphisms. As an example, if \((M,\nabla)\) is an affine structure on a connected smooth manifold \(M\) and \(\alpha\) is an affine transformation of \((M,\nabla)\) that fixes a point \(x\) and whose derivative at \(x\) just rescales the tangent space at \(x\) by some \(\lambda\neq\pm 1\), then \((M,\nabla)\) must be isomorphic to the affine structure on affine space. **Corollary 4.13**.: _Suppose that \((\mathscr{G},\omega)\) is a Cartan geometry of type \((\operatorname{Aff}(m),\operatorname{GL}_{m}\mathbb{R})\) over a connected smooth manifold \(M\) with an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) such that, for some \(\boldsymbol{e}\in\mathscr{G}\) and \(\lambda\neq\pm 1\), \(\alpha(\boldsymbol{e})=\boldsymbol{e}(\lambda\mathds{1})\). Then, \((\mathscr{G},\omega)\) is isomorphic to the Klein geometry \((\operatorname{Aff}(m),\omega_{{}_{\operatorname{Aff}(m)}})\) over \(\mathbb{R}^{m}\)._ Proof.: By considering the inverse of \(\alpha\) or squaring if necessary, we may assume that \(0<\lambda<1\). As we saw in Section 3, because \(\alpha(\boldsymbol{e})=\boldsymbol{e}(\lambda\mathds{1})\), \((\mathscr{G},\omega)\) is flat in a neighborhood of \(\boldsymbol{e}\). Therefore, for some connected open neighborhood \(U\) of \(0\) in \(\mathbb{R}^{m}\), we have a geometric embedding \[\psi:(q_{\operatorname{GL}_{m}\mathbb{R}}^{-1}(U),\omega_{{}_{\operatorname{Aff}(m)}})\hookrightarrow(\mathscr{G},\omega)\] such that \(\psi(0,\mathds{1})=\boldsymbol{e}\), and since \[\alpha(\psi(0,\mathds{1}))=\psi(0,\mathds{1})(\lambda\mathds{1})=\psi(\lambda\mathds{1}\cdot(0,\mathds{1})),\] Theorem 4.12 tells us that \(\psi\) extends to a geometric map from the sprawl of \((q_{\operatorname{GL}_{m}\mathbb{R}}^{-1}(U),\omega_{{}_{\operatorname{Aff}(m)}})\) generated by \(\lambda\mathds{1}\) from \((0,\mathds{1})\). This sprawl happens to be just the Klein geometry \((\operatorname{Aff}(m),\omega_{{}_{\operatorname{Aff}(m)}})\). To see this, note that \(\lambda\mathds{1}\) fixes \(0\in\mathbb{R}^{m}\), so the iterates \(\widetilde{\lambda\mathds{1}}^{k}(U)\) all contain \(\widetilde{\lambda\mathds{1}}^{0}(0)\cong 0\). Moreover, for all sufficiently large positive \(k\), the iterate \((\lambda\mathds{1})^{-k}(U)=\lambda^{-k}\mathds{1}(U)\) will properly contain \(U\), so if \(\gamma\) is a path in \(\widetilde{\lambda\mathds{1}}^{0}(q_{\operatorname{GL}_{m}\mathbb{R}}^{-1}(U))\cong q_{\operatorname{GL}_{m}\mathbb{R}}^{-1}(U)\) ending at \((0,\mathds{1})\), then \(\widetilde{\lambda\mathds{1}}^{-k}(q_{\operatorname{GL}_{m}\mathbb{R}}^{-1}(U))\) must contain a path \(\gamma^{\prime}\) with the same development also ending at \((0,\mathds{1})\), so \(\gamma\star\overline{\gamma^{\prime}}\) is a thinly null-homotopic loop incremented from \(0\) to \(-k\) by Lemma 4.11.
In other words, \(\widetilde{\lambda\mathds{1}}^{0}(q_{{}_{\operatorname{GL}_{m}\mathbb{R}}}^{-1}( U))\subseteq\widetilde{\lambda\mathds{1}}^{-k}(q_{{}_{\operatorname{GL}_{m} \mathbb{R}}}^{-1}(U))\) for all sufficiently large \(k\), hence \(\widetilde{\lambda\mathds{1}}^{i}(q_{{}_{\operatorname{GL}_{m}\mathbb{R}}}^{-1} (U))\subseteq\widetilde{\lambda\mathds{1}}^{i-k}(q_{{}_{\operatorname{GL}_{m} \mathbb{R}}}^{-1}(U))\) for all sufficiently large \(k\). Since the union of all the \((\lambda\mathds{1})^{-k}(U)\) is \(\mathbb{R}^{m}\), this means that \(\widetilde{\lambda\mathds{1}}^{i_{1}}(v_{1},A_{1})\) and \(\widetilde{\lambda\mathds{1}}^{i_{2}}(v_{2},A_{2})\) are sprawl-equivalent if and only if \((\lambda\mathds{1})^{i_{1}}(v_{1},A_{1})=(\lambda\mathds{1})^{i_{2}}(v_{2},A _{2})\), so the sprawl is just \((\operatorname{Aff}(m),\omega_{{}_{\operatorname{Aff}(m)}})\). Since \((\operatorname{Aff}(m),\omega_{{}_{\operatorname{Aff}(m)}})\) is complete, the geometric map \(\psi\) must be a covering map. However, if \(\Gamma\) is a discrete subgroup of \(\operatorname{Aff}(m)\) that is normalized by \(\lambda\mathds{1}\), then \(\Gamma<\operatorname{GL}_{m}\mathbb{R}\), so there is no nontrivial covering map to a Cartan geometry with an automorphism having isotropy \(\lambda\mathds{1}\). In other words, \(\psi\) must be a geometric isomorphism. ## 5. Applications Here, we will prove Theorems A, B, and C from the introduction. To do this, we will prove a more general result, and then we will show that the hypotheses of this result are satisfied in each of the desired cases. Throughout this section, we will need a notion of codimension; it will be convenient to use the following terminology. **Definition 5.1**.: For a smooth manifold \(M\) and a closed subset \(N\subseteq M\), we say that \(v\in TM\) is _not tangent to \(N\)_ if and only if for every path \(\gamma:[0,1]\to M\) with \(\dot{\gamma}(0)=v\), there exists an \(\varepsilon>0\) such that \(\gamma((0,\varepsilon))\not\subseteq N\). We will say that \(N\) has _codimension at least one_ if and only if, for each \(p\in M\), there is a \(v\in T_{p}M\) that is not tangent to \(N\). **Definition 5.2**.: For a smooth manifold \(M\), we will say that a closed subset \(N\subseteq M\) has _codimension at least two_ if and only if, for each path \(\gamma:[0,1]\to M\), we can choose arbitrarily small intervals around each \(t_{0}\) for which \(\gamma(t_{0})\in N\) and get a homotopy \(c:[0,1]^{2}\to M\) with \(c_{0}=\gamma\) such that \(c((0,1]\times[0,1])\not\subseteq N\) and, for every \(s\in[0,1]\), \(c_{s}(t)=\gamma(t)\) for all \(t\) outside of those small intervals. Closed submanifolds and subvarieties \(N\subseteq M\) will, of course, have codimension at least one if \(\dim(M)-\dim(N)\geq 1\) and have codimension at least two if \(\dim(M)-\dim(N)\geq 2\). To consolidate assumptions, we also make the following definitions characterizing the isotropies \(a\) for the automorphisms in which we are currently interested. 
**Definition 5.3**.: A _flamboyance_ for \(a\in H\) in the model \((G,H)\) is a set \(\mathcal{L}\) of \(a\)-invariant simply connected compact subspaces of \(G/H\) for which the following three conditions are satisfied:

* For each \(q_{{}_{H}}(g)\not\in\operatorname{Fix}_{G/H}(a)\), there is an \(\ell\in\mathcal{L}\) containing \(q_{{}_{H}}(g)\);
* For every \(\ell\in\mathcal{L}\), \(\ell\cap\operatorname{Fix}_{G/H}(a)=\{q_{{}_{H}}(e)\}\);
* For every \(\ell,\ell^{\prime}\in\mathcal{L}\), the intersection \(\ell\cap\ell^{\prime}\) is path-connected.

**Definition 5.4**.: We say that \(a\in H\) is _flamboyant_ in the model \((G,H)\) if and only if there exists a flamboyance for \(a\), \(a^{k}(q_{{}_{H}}(g))\to q_{{}_{H}}(e)\) as \(k\to+\infty\) whenever \(q_{{}_{H}}(g)\not\in\operatorname{Fix}_{G/H}(a)\), and the set \(\operatorname{Fix}_{G/H}(a)\) of fixed points for \(a\) in \(G/H\) has codimension at least two. The idea of a flamboyance will allow us to place certain convenient restrictions on the sprawls generated by an isotropy \(a\in H\), as we will see in the next subsection. ### General result In this subsection, we will prove the following result, from which Theorems A, B, and C will follow. **Theorem 5.5**.: _Suppose \((\mathscr{G},\omega)\) is a Cartan geometry of type \((G,H)\) over a connected smooth manifold \(M\), together with an automorphism \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) and a distinguished element \(\boldsymbol{e}\in\mathscr{G}\) such that \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\) for some flamboyant \(a\in H\). If \((\mathscr{G},\omega)\) is flat in a neighborhood of \(\boldsymbol{e}\), then it geometrically embeds onto a dense open subset of the Klein geometry \((G,\omega_{{}_{G}})\) of type \((G,H)\)._ To start the proof, notice that we can get a geometric embedding \(\psi:(q_{{}_{H}}^{-1}(U),\omega_{{}_{G}})\hookrightarrow(\mathscr{G},\omega)\) with \(\psi(e)=\boldsymbol{e}\) for all sufficiently small neighborhoods \(U\) of \(q_{{}_{H}}(e)\) in \(G/H\) because the geometry is flat in a neighborhood of \(\boldsymbol{e}\). Since \(\alpha(\boldsymbol{e})=\boldsymbol{e}a\) and \(\psi\) is \(H\)-equivariant, \[\psi(ae)=\psi(ea)=\psi(e)a=\alpha(\psi(e)),\] so by Theorem 4.12, \(\psi\) extends to a geometric map from the sprawl \((\mathscr{F},\sigma^{*}\omega_{{}_{G}})\) of \((q_{{}_{H}}^{-1}(U),\omega_{{}_{G}})\) generated by \(a\) from \(e\) into \((\mathscr{G},\omega)\). Next, we want to prove that the sprawl map \(\sigma:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\to(G,\omega_{{}_{G}})\) for the Klein geometry is a geometric embedding. To do this, we will use the following two lemmas. **Lemma 5.6**.: _Suppose \(\varphi:(\mathscr{F},\omega)\to(\mathscr{Q},\upsilon)\) is a geometric map and \(V\subseteq\mathscr{Q}\) is a dense open subset such that the complement \(\mathscr{Q}\setminus V\) has codimension at least one. If \(\varphi|_{\varphi^{-1}(V)}\) is injective, then \(\varphi\) is injective._ Proof.: Suppose \(\varphi(\boldsymbol{g}_{1})=\varphi(\boldsymbol{g}_{2})\) for \(\boldsymbol{g}_{1},\boldsymbol{g}_{2}\in\mathscr{F}\).
Choosing \(X\in\mathfrak{g}\) such that \(\upsilon_{\varphi(\boldsymbol{g}_{1})}^{-1}(X)=\upsilon_{\varphi(\boldsymbol{g}_{2})}^{-1}(X)\) is not tangent to \(\mathscr{Q}\setminus V\), we have \[\varphi(\exp(t\omega^{-1}(X))\boldsymbol{g}_{1})=\exp(t\upsilon^{-1}(X))\varphi(\boldsymbol{g}_{1})=\exp(t\upsilon^{-1}(X))\varphi(\boldsymbol{g}_{2})=\varphi(\exp(t\omega^{-1}(X))\boldsymbol{g}_{2})\] for all \(t\in\mathbb{R}\) such that both \(\exp(t\omega^{-1}(X))\boldsymbol{g}_{1}\) and \(\exp(t\omega^{-1}(X))\boldsymbol{g}_{2}\) are well-defined, and for all sufficiently small \(t\neq 0\), this common image lies in \(V\). Thus, \[\exp(t\omega^{-1}(X))\boldsymbol{g}_{1}=\exp(t\omega^{-1}(X))\boldsymbol{g}_{2}\] by injectivity of \(\varphi|_{\varphi^{-1}(V)}\), hence \[\boldsymbol{g}_{1}=\exp(t\omega^{-1}(X))^{-1}\exp(t\omega^{-1}(X))\boldsymbol{g}_{1}=\exp(t\omega^{-1}(X))^{-1}\exp(t\omega^{-1}(X))\boldsymbol{g}_{2}=\boldsymbol{g}_{2}.\qed\] **Lemma 5.7**.: _If \(a\in H\) is flamboyant in \((G,H)\), then the sprawl \((\mathscr{F},\sigma^{*}\omega_{{}_{G}})\) of \((q_{{}_{H}}^{-1}(U),\omega_{{}_{G}})\) generated by \(a\) from \(e\) has injective sprawl map \(\sigma:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\to(G,\omega_{{}_{G}})\)._ Proof.: Suppose \(\ell\) is an element of a flamboyance \(\mathcal{L}\) for \(a\), so that \(\ell\) is an \(a\)-invariant simply connected compact subspace of \(G/H\) whose only fixed point is \(q_{{}_{H}}(e)\). Let \((\ell\cap U)_{0}\) be the path component of \(\ell\cap U\) containing \(q_{{}_{H}}(e)\); by invariance of \(\ell\) and \(q_{{}_{H}}(e)\), \(a^{k}(\ell\cap U)_{0}=(\ell\cap a^{k}(U))_{0}\). Since \(U\) is open, \(\ell\setminus(\ell\cap U)_{0}\) is closed, hence compact. Therefore, for all sufficiently large \(k\), \[a^{k}(\ell\setminus(\ell\cap U)_{0})=\ell\setminus(\ell\cap a^{k}(U))_{0}\subseteq(\ell\cap U)_{0},\] since \(a^{k}(q_{{}_{H}}(g))\to q_{{}_{H}}(e)\) for each \(q_{{}_{H}}(g)\not\in\operatorname{Fix}_{G/H}(a)\). In other words, \[(\ell\cap U)_{0}\cup(\ell\cap a^{k}(U))_{0}=\ell\] for all sufficiently large \(k\). Because \(\ell\) is simply connected, \(\operatorname{H}_{1}(\ell)=\{0\}\), and by path-connectedness, \[\operatorname{H}_{0}((\ell\cap U)_{0})\approx\operatorname{H}_{0}((\ell\cap a^{k}(U))_{0})\approx\operatorname{H}_{0}(\ell)\approx\mathbb{Z},\] so for all sufficiently large \(k\), we get a short exact sequence \[\{0\}\to\operatorname{H}_{0}((\ell\cap U)_{0}\cap(\ell\cap a^{k}(U))_{0})\hookrightarrow\mathbb{Z}\oplus\mathbb{Z}\twoheadrightarrow\mathbb{Z}\to\{0\}\] by Mayer-Vietoris. It follows that \(\operatorname{H}_{0}((\ell\cap U)_{0}\cap(\ell\cap a^{k}(U))_{0})\approx\mathbb{Z}\) as well, so \((\ell\cap U)_{0}\cap(\ell\cap a^{k}(U))_{0}\) is path-connected. Thus, for every \(g_{1},g_{2}\in q_{{}_{H}}^{-1}((\ell\cap U)_{0})\) such that \(g_{1}=a^{k}(g_{2})\), there exists a path \[\zeta:[0,1]\to(\ell\cap U)_{0}\cap(\ell\cap a^{k}(U))_{0}\subseteq\ell\cap U\cap a^{k}(U)\] with \(\zeta(0)=q_{{}_{H}}(g_{1})\) and \(\zeta(1)=q_{{}_{H}}(e)\) whenever \(k\) is sufficiently large, so the concatenation \(\zeta\star\bar{\zeta}\) is a thinly null-homotopic loop incremented from \(0\) to \(k\) because \(q_{{}_{H}}(e)\) is in the connected component of \(a^{i}(q_{{}_{H}}(e))=q_{{}_{H}}(e)\) in \(a^{i-1}(U)\cap a^{i}(U)\) for every \(i\).
By definition of sprawl-equivalence, this means that \(\tilde{a}^{0}(g_{1})\) is sprawl-equivalent to \(\tilde{a}^{k}(g_{2})\) if and only if \[\sigma(\tilde{a}^{0}(g_{1}))=g_{1}=a^{k}(g_{2})=\sigma(\tilde{a}^{k}(g_{2}))\] when \(g_{1}\in q_{{}_{H}}^{-1}((\ell\cap U)_{0})\) and \(a^{k}(g_{2})\in q_{{}_{H}}^{-1}((\ell\cap a^{k}(U))_{0})\). The sprawl map \(\sigma\) restricts to an embedding on \(\tilde{a}^{0}(q_{{}_{H}}^{-1}(U))\) and \(\tilde{a}^{k}(q_{{}_{H}}^{-1}(U))\), so it is injective on both \[L_{1}:=(\sigma|_{\tilde{a}^{0}(q_{{}_{H}}^{-1}(U))})^{-1}(q_{{}_{H}}^{-1}((\ell\cap U)_{0}))\] and \[L_{2}:=(\sigma|_{\tilde{a}^{k}(q_{{}_{H}}^{-1}(U))})^{-1}(q_{{}_{H}}^{-1}((\ell\cap a^{k}(U))_{0})).\] Since \(\sigma(\tilde{a}^{0}(g_{1}))=\sigma(\tilde{a}^{k}(g_{2}))\) if and only if \(\tilde{a}^{0}(g_{1})\) is sprawl-equivalent to \(\tilde{a}^{k}(g_{2})\), it follows that \(\sigma\) is also injective on the union \(q_{{}_{H}}^{-1}(\tilde{\ell}):=L_{1}\cup L_{2}\), whose image under \(\sigma\) is precisely \[\sigma(q_{{}_{H}}^{-1}(\tilde{\ell}))=q_{{}_{H}}^{-1}((\ell\cap U)_{0})\cup q_{{}_{H}}^{-1}((\ell\cap a^{k}(U))_{0})=q_{{}_{H}}^{-1}((\ell\cap U)_{0}\cup(\ell\cap a^{k}(U))_{0})=q_{{}_{H}}^{-1}(\ell).\] For each \(\ell\in\mathcal{L}\), we therefore get a subset \(q_{{}_{H}}^{-1}(\tilde{\ell})\) of \(\mathscr{F}\) such that the restriction \(\sigma|_{q_{{}_{H}}^{-1}(\tilde{\ell})}\) is injective onto \(q_{{}_{H}}^{-1}(\ell)\). Moreover, for every \(g\in q_{{}_{H}}^{-1}(\ell\cap\ell^{\prime})\) with \(\ell,\ell^{\prime}\in\mathcal{L}\), there is a path \(\gamma:[0,1]\to q_{{}_{H}}^{-1}(\ell\cap\ell^{\prime})\) such that \(\gamma(0)=e\) and \(\gamma(1)\in gH\) because \(\ell\cap\ell^{\prime}\) is path-connected; the paths \((\sigma|_{q_{{}_{H}}^{-1}(\tilde{\ell})})^{-1}(\gamma)\) and \((\sigma|_{q_{{}_{H}}^{-1}(\tilde{\ell}^{\prime})})^{-1}(\gamma)\) have the same development and both start at \(\tilde{a}^{0}(e)\), so they must be the same path, hence \((\sigma|_{q_{{}_{H}}^{-1}(\tilde{\ell})})^{-1}\) and \((\sigma|_{q_{{}_{H}}^{-1}(\tilde{\ell}^{\prime})})^{-1}\) must agree on the intersection \(q_{{}_{H}}^{-1}(\ell\cap\ell^{\prime})\). In particular, the restriction of \(\sigma\) to the union \(W:=\bigcup_{\ell\in\mathcal{L}}q_{{}_{H}}^{-1}(\tilde{\ell})\) is still injective, and the image of \(W\) under \(\sigma\) contains the complement \(G\setminus q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\) because \(\mathcal{L}\) is a flamboyance for \(a\). Now, we want to prove that \(\sigma\) is injective on \(\sigma^{-1}(G\setminus q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a)))\). Suppose \(a^{i}(g)\not\in q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\), and let \(\gamma:[0,1]\to a^{i}(q_{{}_{H}}^{-1}(U))\) be a path such that \(\gamma(0)=e\) and \(\gamma(1)h=a^{i}(g)\) for some \(h\in H\). Because \(\operatorname{Fix}_{G/H}(a)\) has codimension at least two, we may assume, after possibly applying a homotopy, that \(\gamma((0,1])\subseteq G\setminus q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\). Therefore, \(\gamma([0,1])\subseteq a^{i}(q_{{}_{H}}^{-1}(U))\cap\sigma(W)\), so we get paths \((\sigma|_{\tilde{a}^{i}(q_{{}_{H}}^{-1}(U))})^{-1}(\gamma)\) and \((\sigma|_{W})^{-1}(\gamma)\) with the same development and starting point, hence they are equal.
In particular, \[(\sigma|_{W})^{-1}(a^{i}(g))=(\sigma|_{\tilde{a}^{i}(q_{{}_{H}}^{-1}(U))})^{-1}(a^{i}(g))=\tilde{a}^{i}(g),\] so if \(a^{i_{1}}(g_{1})=a^{i_{2}}(g_{2})\not\in q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\), then \[\tilde{a}^{i_{1}}(g_{1})=(\sigma|_{W})^{-1}(a^{i_{1}}(g_{1}))=(\sigma|_{W})^{-1}(a^{i_{2}}(g_{2}))=\tilde{a}^{i_{2}}(g_{2}).\] Thus, \(\sigma\) is injective on \(\sigma^{-1}(G\setminus q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a)))\). By Lemma 5.6, it follows that \(\sigma\) is injective on all of \(\mathscr{F}\). Because the sprawl map \(\sigma:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\to(G,\omega_{{}_{G}})\) for the Klein geometry is a geometric embedding, we can identify \(\mathscr{F}\) with its image \(\sigma(\mathscr{F})=\bigcup_{i\in\mathbb{Z}}a^{i}(q_{{}_{H}}^{-1}(U))\), which will be a dense open subset of \(G\) with complement of codimension at least two. As a corollary, the geometric map \(\psi:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\to(\mathscr{G},\omega)\) must be a geometric embedding as well. **Corollary 5.8**.: _The map \(\psi:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\to(\mathscr{G},\omega)\) is also injective._ Proof.: Suppose \[\psi(\tilde{a}^{i_{1}}(g_{1}))=\alpha^{i_{1}}(\psi(g_{1}))=\alpha^{i_{2}}(\psi(g_{2}))=\psi(\tilde{a}^{i_{2}}(g_{2})).\] For all \(k\), \[\psi(\tilde{a}^{i_{1}+k}(g_{1}))=\alpha^{i_{1}+k}(\psi(g_{1}))=\alpha^{i_{2}+k}(\psi(g_{2}))=\psi(\tilde{a}^{i_{2}+k}(g_{2})),\] and if \(g_{1},g_{2}\not\in q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\), then \(\tilde{a}^{i_{1}+k}(g_{1}),\tilde{a}^{i_{2}+k}(g_{2})\in\tilde{a}^{0}(q_{{}_{H}}^{-1}(U))\) for all sufficiently large \(k\). Since \(\psi|_{\tilde{a}^{0}(q_{{}_{H}}^{-1}(U))}\) is a geometric embedding, \(\tilde{a}^{i_{1}+k}(g_{1})=\tilde{a}^{i_{2}+k}(g_{2})\), hence \(\tilde{a}^{i_{1}}(g_{1})=\tilde{a}^{i_{2}}(g_{2})\) if \(g_{1},g_{2}\not\in q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\). If, on the other hand, \(g_{1},g_{2}\in q_{{}_{H}}^{-1}(\operatorname{Fix}_{G/H}(a))\), then \(\tilde{a}^{i_{1}}(g_{1})\) and \(\tilde{a}^{i_{2}}(g_{2})\) are in \(\tilde{a}^{0}(q_{{}_{H}}^{-1}(U))\) for all \(i_{1}\) and \(i_{2}\), so \(\psi\) is still injective there as well. Now that we know \[\psi:(\mathscr{F},\sigma^{*}\omega_{{}_{G}})\cong(\sigma(\mathscr{F}),\omega_{{}_{G}})\hookrightarrow(\mathscr{G},\omega)\] is a geometric embedding, our goal is to reverse it, in some sense, to get a geometric embedding \(\delta:(\mathscr{G},\omega)\to(G,\omega_{{}_{G}})\) such that \(\delta\circ\psi\) is the identity map on \(\mathscr{F}\subseteq G\). To do this, we will need a bit of help from holonomy, which we will get from the following result. **Proposition 5.9**.: _Suppose \((\mathscr{G},\omega)\) is a Cartan geometry of type \((G,H)\) over a connected smooth manifold \(M\). Let \(\mathscr{F}\subseteq G\) be an \(H\)-invariant dense open subset of \(G\) containing the identity element \(e\in G\), whose complement \(G\setminus\mathscr{F}\) has codimension at least two._
_If there is a geometric embedding \(\psi:(\mathscr{F},\omega_{{}_{G}})\hookrightarrow(\mathscr{G},\omega)\), then \(\operatorname{Hol}_{\psi(e)}(\mathscr{G},\omega)=\{e\}\), and the resulting geometric map \(\delta:(\mathscr{G},\omega)\to(G,\omega_{{}_{G}})\), given by \(\gamma(1)h\mapsto\gamma_{G}(1)h\) for every \(h\in H\) and piecewise smooth path \(\gamma:[0,1]\to\mathscr{G}\) starting at \(\gamma(0)=\psi(e)\), is a geometric embedding such that \(\delta\circ\psi=\operatorname{id}_{\mathscr{F}}\)._ Proof.: Suppose \(\gamma:[0,1]\to\mathscr{G}\) is the lift of a loop \(q_{{}_{H}}(\gamma)\) on \(M\) with \(\gamma(0)=\psi(e)\), and \(h_{\gamma}\in H\) is such that \(\gamma(0)h_{\gamma}=\gamma(1)\). To start, we will prove that \(\gamma_{G}(1)h_{\gamma}^{-1}=e\). Because \(G\setminus\mathscr{F}\) has codimension at least two in \(G\), there exists a homotopy \(c:[0,1]^{2}\to G\) with \(c_{0}=\gamma_{G}\) such that \(c((0,1]\times[0,1])\subseteq\mathscr{F}\) and, for every \(s\in[0,1]\), \(c_{s}(t)=\gamma_{G}(t)\) outside of arbitrarily small intervals around each \(t\) for which \(\gamma_{G}(t)\not\in\mathscr{F}\); see Figures 5 and 6 for illustrations. Since \(\psi\) is a geometric map, \(\psi(c_{s})_{G}=c_{s}\) for all \(s\in(0,1]\); it follows that \(\psi(c_{s}(t))=\gamma(t)\) for all \(t\) outside of those small intervals, and that \(\overline{\psi(\mathscr{F})}=\mathscr{G}\). In particular, \(\psi(c_{s}(1))=\gamma(1)\) for all \(s\in[0,1]\), so \[\psi(e)=\gamma(0)=\gamma(1)h_{\gamma}^{-1}=\psi(c_{s}(1))h_{\gamma}^{-1}=\psi(c_{s}(1)h_{\gamma}^{-1}),\] hence \[\gamma_{G}(1)h_{\gamma}^{-1}=c_{0}(1)h_{\gamma}^{-1}=c_{s}(1)h_{\gamma}^{-1}=e\] because \(\psi\) is an embedding. Thus, \(\operatorname{Hol}_{\psi(e)}(\mathscr{G},\omega)=\{e\}\). Recall that, because \(\operatorname{Hol}_{\psi(e)}(\mathscr{G},\omega)=\{e\}\), we obtain a well-defined geometric map \(\delta:(\mathscr{G},\omega)\to(G,\omega_{{}_{G}})\) given by \(\gamma(1)h\mapsto\gamma_{G}(1)h\) for every \(h\in H\) and piecewise smooth path \(\gamma:[0,1]\to\mathscr{G}\) starting at \(\gamma(0)=\psi(e)\). By definition, then, we have that \(\delta(\gamma)=\gamma_{G}\) whenever \(\gamma(0)=\psi(e)\), so \((\delta\circ\psi)(\gamma_{G})=\gamma_{G}\) whenever \(\gamma_{G}\) is contained in \(\mathscr{F}\), hence \(\delta\circ\psi\) is the identity map on \(\mathscr{F}\). It just remains to show that \(\delta\) is a geometric embedding. To this end, suppose \(\delta(\boldsymbol{q})\in\mathscr{F}\), and let \(\gamma:[0,1]\to\mathscr{G}\) be a path with \(\gamma(0)=\psi(e)\) and \(\gamma(1)\in\boldsymbol{q}H\). Then, \(\delta(\gamma)\) is a path in \(G\) with \(\delta(\gamma(0))=e\) and \(\delta(\gamma(1))\in\delta(\boldsymbol{q})H\). Since \(G\setminus\mathscr{F}\) has codimension at least two, we get another homotopy \(c:[0,1]^{2}\to G\) such that \(c((0,1]\times[0,1])\subseteq\mathscr{F}\) and \(c_{s}(t)=c_{0}(t)\) for all \(s\in[0,1]\) and \(t\) outside small intervals on which \(c_{0}=\delta(\gamma)\) intersects \(G\setminus\mathscr{F}\). For each \(s\in(0,1]\), \(\psi(c_{s})\) is well-defined with \(\psi(c_{s})_{G}=c_{s}\), so since \(\psi(c_{s}(1))\) is constant in \(s\), it must be equal to \(\gamma(1)\). In particular, \(\gamma(1)\in\psi(\mathscr{F})\), so \(\boldsymbol{q}\in\psi(\mathscr{F})\), hence \(\delta^{-1}(\mathscr{F})=\psi(\mathscr{F})\). Finally, because \(\delta\circ\psi=\operatorname{id}_{\mathscr{F}}\), the restriction \(\delta|_{\psi(\mathscr{F})}\) is injective, so \(\delta\) is injective by Lemma 5.6.
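Before summarizing, it may help to record schematically how the maps constructed in this subsection fit together; the following display is merely a recapitulation of Corollary 5.8 and Proposition 5.9, with \(\mathscr{F}\) identified with \(\sigma(\mathscr{F})\subseteq G\): \[(\mathscr{F},\omega_{{}_{G}})\xrightarrow{\ \psi\ }(\mathscr{G},\omega)\xrightarrow{\ \delta\ }(G,\omega_{{}_{G}}),\qquad\delta\circ\psi=\operatorname{id}_{\mathscr{F}},\] so that \(\delta\) is a geometric embedding whose image contains the dense open subset \(\mathscr{F}\).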
To summarize, we have shown that, if \(a\in H\) is flamboyant for \((G,H)\) and \(\alpha\in\operatorname{Aut}(\mathscr{G},\omega)\) has isotropy \(a\) at some \(\boldsymbol{e}\in\mathscr{G}\) such that the curvature of \((\mathscr{G},\omega)\) vanishes in some neighborhood of \(\boldsymbol{e}\), then there is a geometric embedding \(\delta:(\mathscr{G},\omega)\hookrightarrow(G,\omega_{{}_{G}})\) into the Klein geometry of type \((G,H)\) whose image is the dense open subset given by the image of the sprawl generated by \(a\). Thus, we have proven Theorem 5.5.

Figure 5. The path \(\gamma_{G}\) in \(G\), with neighborhoods of its intersection with \(G\setminus\mathscr{F}\) highlighted in gray

Now, to prove Theorems A, B, and C, we just need to prove that the relevant isotropies \(a\in P_{+}\) are flamboyant, since the results from Section 3 guarantee that the curvature vanishes in a neighborhood of a higher-order fixed point with isotropy \(a\). ### Proof of Theorem A Recall that, in this case, our model is \((\operatorname{PGL}_{m+1}\mathbb{C},P)\), which corresponds to complex projective geometry over \(\mathbb{CP}^{m}\). Proposition 3.2 tells us that the curvature vanishes in a neighborhood of \(\boldsymbol{e}\in\mathscr{G}\), and using the same block sizes as in the proof of that proposition, we may assume that \[a=\begin{pmatrix}1&1&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix}\] after conjugating by an element of \(P\). Using Theorem 5.5, it just remains to show that \(a\) is flamboyant for \((\operatorname{PGL}_{m+1}\mathbb{C},P)\). For arbitrary \[q_{{}_{P}}(g):=\begin{pmatrix}r\\ x\\ y\end{pmatrix}\in\mathbb{CP}^{m},\] with \(r,x\in\mathbb{C}\) and \(y\in\mathbb{C}^{m-1}\), we have \[a^{k}(q_{{}_{P}}(g))=a^{k}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=\begin{pmatrix}1&k&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=\begin{pmatrix}r+kx\\ x\\ y\end{pmatrix},\] which is equal to \[\begin{pmatrix}1\\ \frac{x}{r+kx}\\ \frac{1}{r+kx}y\end{pmatrix}\] whenever \(r+kx\neq 0\). In particular, fixed points are precisely those points for which \(x=0\), and if \(x\neq 0\), then \(a^{k}(q_{{}_{P}}(g))\) must go to \(q_{{}_{P}}(e)\) as \(k\to+\infty\). Since \(x=0\) implies \(\operatorname{Re}(x)=0\) and \(\operatorname{Im}(x)=0\), the set \(\operatorname{Fix}_{\mathbb{CP}^{m}}(a)\) of all points for which \(x=0\) is of codimension \(2\). Now, all that is left to do is construct a flamboyance for \(a\). Because \(a\) is a complex projective automorphism, it always sends complex lines to complex lines, and since \(a\) has a higher-order fixed point at \(q_{{}_{P}}(e)\), it specifically preserves each complex line through \(q_{{}_{P}}(e)\). Let \(\mathcal{L}\) be the set of all complex lines \(\ell\) such that \(\ell\cap\operatorname{Fix}_{\mathbb{CP}^{m}}(a)=\{q_{{}_{P}}(e)\}\). Each of these complex lines \(\ell\) is \(a\)-invariant, and since \(\ell\cong\mathbb{CP}^{1}\), they are all compact and simply connected. Moreover, whenever \(q_{{}_{P}}(g)\neq q_{{}_{P}}(e)\), there is a unique complex line containing both \(q_{{}_{P}}(g)\) and \(q_{{}_{P}}(e)\), so the intersection of different elements of \(\mathcal{L}\) is always the path-connected subset \(\{q_{{}_{P}}(e)\}\) and each non-fixed point of \(\mathbb{CP}^{m}\) is contained in some element of \(\mathcal{L}\). Thus, \(\mathcal{L}\) is a flamboyance for \(a\), which completes the proof of Theorem A. ### Proof of Theorem B This proof is largely the same as the one for Theorem A.
In this case, the model becomes \((\operatorname{PGL}_{m+1}\mathbb{H},P)\), with \(\operatorname{PGL}_{m+1}\mathbb{H}/P\cong\mathbb{H}\mathbb{P}^{m}\). After conjugating by an element of \(P\), we may assume that \[a=\begin{pmatrix}1&1&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix},\] and by Theorem 5.4 of [11], the curvature vanishes in a neighborhood of \(e\). Now, we just need to show \(a\) is flamboyant. For arbitrary \[q_{{}_{P}}(g):=\begin{pmatrix}r\\ x\\ y\end{pmatrix}\in\mathbb{H}\mathbb{P}^{m},\] with \(r,x\in\mathbb{H}\) and \(y\in\mathbb{H}^{m-1}\), we have \[a^{k}(q_{{}_{P}}(g))=a^{k}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=\begin{pmatrix}1&k&0\\ 0&1&0\\ 0&0&\mathds{1}\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=\begin{pmatrix}r+kx\\ x\\ y\end{pmatrix},\] which is equal to \[\begin{pmatrix}1\\ x(r+kx)^{-1}\\ y(r+kx)^{-1}\end{pmatrix}\] whenever \(r+kx\neq 0\). In particular, fixed points are precisely those points for which \(x=0\), and if \(x\neq 0\), then \(a^{k}(q_{{}_{P}}(g))\) must go to \(q_{{}_{P}}(e)\) as \(k\to+\infty\). This time, the set \(\operatorname{Fix}_{\mathbb{H}\mathbb{P}^{m}}(a)\) of all points for which \(x=0\) is of codimension \(\dim(\mathbb{H})=4\). We again end the proof by finding a flamboyance for \(a\). Because \(a\) is a quaternionic projective automorphism, it sends quaternionic lines to quaternionic lines, and since \(a\) has a higher-order fixed point at \(q_{{}_{P}}(e)\), it specifically preserves each quaternionic line through \(q_{{}_{P}}(e)\). Therefore, we can let \(\mathcal{L}\) be the set of all quaternionic lines \(\ell\) such that \(\ell\cap\operatorname{Fix}_{\mathbb{H}\mathbb{P}^{m}}(a)=\{q_{{}_{P}}(e)\}\). Each of these quaternionic lines \(\ell\cong\mathbb{H}\mathbb{P}^{1}\) is \(a\)-invariant, simply connected, and compact. Moreover, whenever \(q_{{}_{P}}(g)\neq q_{{}_{P}}(e)\), there is a unique quaternionic line containing both \(q_{{}_{P}}(g)\) and \(q_{{}_{P}}(e)\), so each non-fixed point of \(\mathbb{H}\mathbb{P}^{m}\) is contained in some element of \(\mathcal{L}\) and the intersection of different elements of \(\mathcal{L}\) is always the path-connected subset \(\{q_{{}_{P}}(e)\}\). Thus, \(\mathcal{L}\) is a flamboyance for \(a\), which completes the proof of Theorem B. ### Proof of Theorem C For this theorem, our model is given by \((\operatorname{PU}(\operatorname{h}_{p,q}),P)\), corresponding to the natural partially integrable almost CR-structure on the null-cone for \(\operatorname{h}_{p,q}\) in \(\mathbb{C}\mathbb{P}^{p+q+1}\). This time, there are essentially three different possibilities for \(a\) up to conjugation by an element of \(P\): \(a\in\exp(\mathfrak{g}_{1})\) "timelike", \(a\in\exp(\mathfrak{g}_{1})\) "spacelike", or \(a\in\exp(\mathfrak{g}_{2})=\operatorname{Z}(P_{+})\). Since the "timelike" and "spacelike" cases are mostly the same, we will just treat the "timelike" case. Therefore, after conjugation by an element of \(P\) and possibly considering \(\alpha^{-1}\) instead of \(\alpha\), we can assume that either \[a=\begin{pmatrix}1&0&0&\mathrm{i}\\ 0&1&0&0\\ 0&0&\mathds{1}&0\\ 0&0&0&1\end{pmatrix}\in\mathrm{Z}(P_{+})\] or \[a=\begin{pmatrix}1&1&0&-1/2\\ 0&1&0&-1\\ 0&0&\mathds{1}&0\\ 0&0&0&1\end{pmatrix}\in\exp(\mathfrak{g}_{1}),\] where for convenience we use the same block sizes as in the proof of Proposition 3.7 throughout. 
In either case, we get a flat neighborhood of \(\boldsymbol{e}\), by Theorem 3.9 of [3] for \(a\in\mathrm{Z}(P_{+})\) and by Proposition 3.7 of this paper for non-null \(a\in\exp(\mathfrak{g}_{1})\), so we just need to show that \(a\) is flamboyant. Let us do the \(a\in\mathrm{Z}(P_{+})\) case first. For an arbitrary element \[q_{{}_{P}}(g):=\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}\in\mathrm{Null}(\mathrm{h}_{p,q})\subseteq\mathbb{CP}^{p+q+1}\] of the null-cone for \(\mathrm{h}_{p,q}\), so that \(r,x,c\in\mathbb{C}\) and \(y\in\mathbb{C}^{p+q-1}\) satisfy \(2\mathrm{Re}(\bar{r}c)+|x|^{2}+\bar{y}^{\top}I_{p-1,q}y=0\), we can use our chosen representative for \(a\in\mathrm{Z}(P_{+})\) to get \[a^{k}(q_{{}_{P}}(g))=a^{k}\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}=\begin{pmatrix}1&0&0&k\mathrm{i}\\ 0&1&0&0\\ 0&0&\mathds{1}&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}=\begin{pmatrix}r+kci\\ x\\ y\\ c\end{pmatrix},\] which is equal to \[\begin{pmatrix}1\\ \frac{x}{r+kci}\\ \frac{y}{r+kci}\\ \frac{c}{r+kci}\end{pmatrix}\] whenever \(r+kci\neq 0\). In particular, the set of fixed points for \(a\) is precisely the set of points for which \(c=0\), and whenever \(c\neq 0\), \(a^{k}(q_{{}_{P}}(g))\) must go to \(q_{{}_{P}}(e)\) as \(k\to+\infty\). The set \(\mathrm{Fix}_{G/P}(a)\) of all points in \(\mathrm{Null}(\mathrm{h}_{p,q})\) with \(c=0\) is of codimension \(2\); note that, if \(c=0\), then in order to stay in the null-cone, we must also have \(|x|^{2}+\bar{y}^{\top}I_{p-1,q}y=0\). Now, we construct a flamboyance for the isotropy \(a\in\mathrm{Z}(P_{+})\). For each \((\begin{smallmatrix}x\\ y\end{smallmatrix})\in\mathbb{CP}^{p+q-1}\), with \(x\in\mathbb{C}\) and \(y\in\mathbb{C}^{p+q-1}\), consider the subspace \[\ell_{x,y}:=\left\{\begin{pmatrix}r\\ zx\\ zy\\ c\end{pmatrix}\in\mathrm{Null}(\mathrm{h}_{p,q}):r,c,z\in\mathbb{C}\right\}.\] Each of these subspaces \(\ell_{x,y}\) is preserved by \(a\), and if \(|x|^{2}+\bar{y}^{\top}I_{p-1,q}y\neq 0\), then the only fixed point of \(a\) in \(\ell_{x,y}\) is \(q_{{}_{P}}(e)\). Moreover, \(\ell_{x,y}\) is a copy of \(\mathrm{Null}(\mathrm{h}_{1,0})\) if \(|x|^{2}+\bar{y}^{\top}I_{p-1,q}y>0\) and a copy of \(\mathrm{Null}(\mathrm{h}_{0,1})\) if \(|x|^{2}+\bar{y}^{\top}I_{p-1,q}y<0\); either way, \(\ell_{x,y}\) is diffeomorphic to \(S^{3}\), hence compact and simply connected. Thus, we let \[\mathcal{L}:=\{\ell_{x,y}:|x|^{2}+\bar{y}^{\top}I_{p-1,q}y\neq 0\}.\] For \((\begin{smallmatrix}x\\ y\end{smallmatrix})\neq(\begin{smallmatrix}x^{\prime}\\ y^{\prime}\end{smallmatrix})\in\mathbb{CP}^{p+q-1}\), \[\ell_{x,y}\cap\ell_{x^{\prime},y^{\prime}}=\left\{\begin{pmatrix}r\\ 0\\ 0\\ c\end{pmatrix}\in\mathrm{Null}(\mathrm{h}_{p,q}):r,c\in\mathbb{C}\right\} \cong\mathbb{RP}^{1},\] which is path-connected, and \[\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}\in\ell_{x,y},\] so every non-fixed point is contained in some \(\ell_{x,y}\in\mathcal{L}\). In other words, \(\mathcal{L}\) is a flamboyance for \(a\in\mathrm{Z}(P_{+})\), hence \(a\) is flamboyant in that case. It remains to prove that non-null \(a\in\exp(\mathfrak{g}_{1})\) are flamboyant.
This time, using our chosen representative for \(a\in\exp(\mathfrak{g}_{1})\), we get \[a^{k}(q_{{}_{P}}(g))=a^{k}\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}=\begin{pmatrix}1&k&0&-k^{2}/2\\ 0&1&0&-k\\ 0&0&\mathds{1}&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}=\begin{pmatrix}r+kx-\frac{k^{2}c}{2}\\ x-kc\\ y\\ c\end{pmatrix}.\] The expression \(r+kx-k^{2}\frac{c}{2}\) grows quadratically in \(k\) if \(c\neq 0\), linearly in \(k\) if \(c=0\) but \(x\neq 0\), and is constant if \(x=c=0\), while \(x-kc\) grows linearly in \(k\) if \(c\neq 0\) and is constant if \(c=0\). In particular, \(a^{k}(q_{{}_{P}}(g))\) must converge to \(q_{{}_{P}}(e)\) as \(k\to+\infty\) unless \(x=c=0\). The points with \(x=c=0\) are the fixed points of \(a\), so this time \(\mathrm{Fix}_{G/P}(a)\) has codimension \(2+2=4\). Note that, in order to stay inside of the null-cone, if \(x=c=0\), then we must also have \(\bar{y}^{\top}I_{p-1,q}y=0\). To construct a flamboyance for \(a\in\exp(\mathfrak{g}_{1})\), we consider subspaces of the form \[\ell_{y}:=\left\{\left(\begin{array}{c}r\\ x\\ zy\\ c\end{array}\right)\in\operatorname{Null}(\mathrm{h}_{p,q}):r,x,c,z\in\mathbb{C}\right\}\] for each \(y\in\mathbb{C}^{p+q-1}\). These are each preserved by \(a\), and whenever \(\bar{y}^{\top}I_{p-1,q}y\neq 0\), the only fixed point contained in \(\ell_{y}\) is \(q_{{}_{P}}(e)\). Moreover, \(\ell_{y}\) is a copy of \(\operatorname{Null}(\mathrm{h}_{2,0})\) if \(\bar{y}^{\top}I_{p-1,q}y>0\) and a copy of \(\operatorname{Null}(\mathrm{h}_{1,1})\) if \(\bar{y}^{\top}I_{p-1,q}y<0\); either way, \(\ell_{y}\) is simply connected and compact. Thus, we let \[\mathcal{L}:=\{\ell_{y}:\bar{y}^{\top}I_{p-1,q}y\neq 0\}.\] For linearly independent \(y\) and \(y^{\prime}\) in \(\mathbb{C}^{p+q-1}\), \[\ell_{y}\cap\ell_{y^{\prime}}=\left\{\left(\begin{array}{c}r\\ x\\ 0\\ c\end{array}\right)\in\operatorname{Null}(\mathrm{h}_{p,q}):r,x,c\in\mathbb{C }\right\}\cong\operatorname{Null}(\mathrm{h}_{1,0}),\] which is path-connected, and \[\begin{pmatrix}r\\ x\\ y\\ c\end{pmatrix}\in\ell_{y},\] so every non-fixed point is contained in some \(\ell_{y}\in\mathcal{L}\). By definition, this means \(\mathcal{L}\) is a flamboyance for \(a\in\exp(\mathfrak{g}_{1})\), hence \(a\) is flamboyant in this case as well. This completes the proof of Theorem C. ## Appendix: the holonomy group of the sprawl In anticipation of a growing interest in techniques revolving around the holonomy group of a Cartan geometry, we have decided to include the following supplementary result. Throughout the proof, we will unabashedly use the ideas from [5] on developments of points and their relations to automorphisms. **Proposition 5.10**.: _For \((\mathscr{F},\sigma^{*}\omega)\) the sprawl of \((q_{{}_{H}}^{-1}(U),\omega)\) generated by \(\alpha\in\operatorname{Aut}(\mathscr{F},\omega)\) from \(e\), let \(a\in G\) be a development of \(\alpha(e)\) from \(e\) as elements of \((q_{{}_{H}}^{-1}(U),\omega)\). The holonomy group \(\operatorname{Hol}_{\varepsilon}(\mathscr{F},\sigma^{*}\omega)\) of the sprawl is the smallest subgroup of \(G\) containing \(\operatorname{Hol}_{\varepsilon}(q_{{}_{H}}^{-1}(U),\omega)\) that is normalized by \(a\)._ Proof.: Suppose \(\gamma:[0,1]\to\mathscr{F}\) is a path lying over a loop \(q_{{}_{H}}(\gamma)\) in \(q_{{}_{H}}(\mathscr{F})\), with \(\gamma(0)=e\) and \(\gamma(1)=\gamma(0)h_{\gamma}\). We want to compute \(\gamma_{G}(1)h_{\gamma}^{-1}\). 
First, we will show that we can assume \(\gamma\) lies over a loop incremented from \(0\) to \(0\), and then we will compute what \(\gamma_{G}(1)h_{\gamma}^{-1}\) can be. Let us break \(\gamma\) into a concatenation of segments \(\gamma=\gamma_{0}\star\cdots\star\gamma_{\ell-1}\) such that, for each \(0\leq j<\ell\), \(\gamma_{j}([0,1])\subseteq\tilde{\alpha}^{k_{j}}(q_{{}_{H}}^{-1}(U))\) for some \(k_{j}\in\mathbb{Z}\). By definition of sprawl-equivalence, for \[\gamma_{j}(1)=\gamma_{j+1}(0)\in\tilde{\alpha}^{k_{j}}(q_{{}_{H}}^{-1}(U))\cap \tilde{\alpha}^{k_{j+1}}(q_{{}_{H}}^{-1}(U)),\] there must be a thinly null-homotopic loop \(\gamma_{j,j+1}\) based at \(\gamma_{j}(1)\) with \(q_{{}_{H}}(\gamma_{j,j+1})\) incremented from \(k_{j}\) to \(k_{j+1}\). In particular, the modified path \[\gamma_{0}\star\gamma_{0,1}\star\gamma_{1}\star\gamma_{1,2}\star\cdots\star \gamma_{\ell-1}\star\gamma_{\ell-1,\ell}\star\gamma_{\ell}\] in \(\mathscr{F}\) descends to a loop in \(q_{{}_{H}}(\mathscr{F})\) with an incrementation from \(0\) to \(k_{\ell}\), and since the holonomy of a thinly null-homotopic loop is always trivial, \[(\gamma_{0}\star\gamma_{0,1}\star\gamma_{1}\star\gamma_{1,2}\star\cdots\star \gamma_{\ell-1}\star\gamma_{\ell-1,\ell}\star\gamma_{\ell})_{G}(1)=\gamma_{G}( 1).\] Thus, without loss of generality, we may assume that \(\gamma\) lies over a loop \(q_{{}_{H}}(\gamma)\) that is incremented from \(0\) to \(k_{\ell}\) for some \(k_{\ell}\in\mathbb{Z}\). Moreover, since \[q_{{}_{H}}(e)=q_{{}_{H}}(\gamma(0))=q_{{}_{H}}(\gamma(1))\in\tilde{\alpha}^{0} (U)\cap\tilde{\alpha}^{k_{\ell}}(U),\] there must be a thinly null-homotopic loop \(\gamma^{\prime}\) based at \(e\) such that \(q_{{}_{H}}(\gamma^{\prime})\) is incremented from \(0\) to \(k_{\ell}\). Concatenating \(\gamma\) with \(\operatorname{R}_{h_{\gamma}}(\overline{\gamma^{\prime}})\), we again get a path \(\gamma\star\operatorname{R}_{h_{\gamma}}(\overline{\gamma^{\prime}})\) lying over a loop in \(q_{{}_{H}}(\mathscr{F})\), this time with an incrementation from \(0\) to \(0\), such that \((\gamma\star\operatorname{R}_{h_{\gamma}}(\overline{\gamma^{\prime}}))_{G}(1)= \gamma_{G}(1)\). Without loss of generality, we may therefore assume that \(\gamma\) lies over a loop \(q_{{}_{H}}(\gamma)\) with an incrementation from \(0\) back to \(0\). Let this incrementation of the loop \(q_{{}_{H}}(\gamma)\) from \(0\) to \(0\) be given by the partition \(0=t_{0}<\cdots<t_{\ell}=1\) and the finite integer sequence \(k_{0}=0,\ldots,k_{\ell-1}=0\in\mathbb{Z}\). By definition, \(\gamma(t_{j+1})\) lies over the connected component of \(\tilde{\alpha}^{k_{j}}(U)\cap\tilde{\alpha}^{k_{j+1}}(U)\) containing \(q_{{}_{H}}(\tilde{\alpha}^{\max(k_{j},k_{j+1})}(e))\), so there exist paths \[\beta_{j+1}:[0,1]\to\tilde{\alpha}^{k_{j}}(q_{{}_{H}}^{-1}(U))\cap\tilde{ \alpha}^{k_{j+1}}(q_{{}_{H}}^{-1}(U))\] with \(\beta_{j+1}(0)=\gamma(t_{j+1})\) and \(\beta_{j+1}(1)b_{j+1}=\tilde{\alpha}^{\max(k_{j},k_{j+1})}(e)\) for some \(b_{j+1}\in H\).
In particular, \(\beta_{j+1}\star\overline{\beta_{j+1}}\) is a thinly null-homotopic loop in \(\tilde{\alpha}^{k_{j}}(q_{{}_{H}}^{-1}(U))\cap\tilde{\alpha}^{k_{j+1}}(q_{{} _{H}}^{-1}(U))\), so we may again construct a modified path \[\gamma|_{[0,t_{1}]}\star\beta_{1}\star\overline{\beta_{1}}\star\cdots\star \gamma|_{[t_{\ell-2},t_{\ell-1}]}\star\beta_{\ell-1}\star\overline{\beta_{\ell-1 }}\star\gamma|_{[t_{\ell-1},1]}\] with the same total development as \(\gamma\); this tells us that we may further assume, without loss of generality, that \(\gamma(t_{j+1})b_{j+1}=\alpha^{\max(k_{j},k_{j+1})}(e)\) for some \(b_{j+1}\in H\) for each \(0\leq j<\ell-1\). With this, each segment \(\gamma|_{[t_{j},t_{j+1}]}\) with \(0\leq j<\ell-1\) is a path from \(\gamma(t_{j})=\tilde{\alpha}^{\max(k_{j-1},k_{j})}(e)b_{j}^{-1}\) to \(\gamma(t_{j+1})=\tilde{\alpha}^{\max(k_{j},k_{j+1})}(e)b_{j+1}^{-1}\), so since the space of possible developments from \(\tilde{\alpha}^{\max(k_{j-1},k_{j})}(\boldsymbol{e})\) to \(\tilde{\alpha}^{\max(k_{j},k_{j+1})}(\boldsymbol{e})\) is just \[\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)a^{\max(k_{j},k_{j+1})-\max(k_ {j-1},k_{j})}=\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)a^{\frac{1}{2}( k_{j+1}-k_{j-1})},\] we must have \[(\gamma|_{[t_{j},t_{j+1}]})_{G}(t_{j+1})=b_{j}\eta_{j}a^{\frac{1}{2}(k_{j+1}-k_ {j-1})}b_{j+1}^{-1}\] for some \(\eta_{j}\in\operatorname{Hol}_{\boldsymbol{e}}(q_{{}_{H}}^{-1}(U),\omega)\). Crucially, note that for another path \(\zeta:[0,1]\to q_{{}_{H}}^{-1}(U)\) with \(\zeta(0)=\boldsymbol{e}\) and \(\zeta(1)=\zeta(0)h_{\zeta}=\boldsymbol{e}h_{\zeta}\), we can replace \(\gamma|_{[t_{j},t_{j+1}]}\) with \(\tilde{\alpha}^{\max(k_{j-1},k_{j})}(\operatorname{R}_{b_{j}^{-1}}(\zeta)) \star\operatorname{R}_{b_{j}h_{\zeta}b_{j}^{-1}}(\gamma|_{[t_{j},t_{j+1}]})\) to change the total development of \(\gamma|_{[t_{j},t_{j+1}]}\) from \(b_{j}\eta_{j}a^{\frac{1}{2}(k_{j+1}-k_{j-1})}b_{j+1}^{-1}\) to \[(b_{j}\zeta_{G}(1)b_{j}^{-1})(b_{j}h_{\zeta}^{-1}b_{j}^{-1})(b_{j}\eta_{j}a^{ \frac{1}{2}(k_{j+1}-k_{j-1})}b_{j+1}^{-1})(b_{j}h_{\zeta}b_{j}^{-1}),\] which is just \(b_{j}(\zeta_{G}(1)h_{\zeta}^{-1})\eta_{j}a^{\frac{1}{2}(k_{j+1}-k_{j-1})}(b_{j }h_{\zeta}b_{j}^{-1}b_{j+1})^{-1}\), so replacing \(b_{j+1}\) with \(b_{j}h_{\zeta}b_{j}^{-1}b_{j+1}\), every \(\eta_{j}\in\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)\) can be realized in the total development \(b_{j}\eta_{j}a^{\frac{1}{2}(k_{j+1}-k_{j-1})}b_{j+1}^{-1}\) of the segment \(\gamma|_{[t_{j},t_{j+1}]}\) for some \(\gamma\) with the given incrementation from \(0\) to \(0\). Similarly, for the final segment \(\gamma|_{[t_{\ell-1},1]}\) of \(\gamma\), we get a path from \(\gamma(t_{\ell-1})=\tilde{\alpha}^{\max(k_{\ell-2},k_{\ell-1})}(\boldsymbol{e })b_{\ell-1}^{-1}\) to \(\gamma(1)=\gamma(0)h_{\gamma}=\boldsymbol{e}h_{\gamma}\), so \[(\gamma|_{[t_{\ell-1},1]})_{G}(1)=b_{\ell-1}\eta_{\ell-1}a^{-\max(k_{\ell-2},k _{\ell-1})}h_{\gamma}\] for some \(\eta_{\ell-1}\in\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)\). Again, by modifying the segment and \(h_{\gamma}\), we can realize any \(\eta_{\ell-1}\in\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)\) in this total development of the segment. 
Putting all of this together, \[\gamma_{G}(1) =(\gamma|_{[0,t_{1}]})_{G}(t_{1})\cdots(\gamma|_{[t_{\ell-1},1]}) _{G}(1)\] \[=(\eta_{0}a^{k_{1}}b_{1}^{-1})(b_{1}\eta_{1}a^{\frac{1}{2}(k_{2}- k_{0})}b_{2}^{-1})\cdots(b_{\ell-1}\eta_{\ell-1}a^{-\max(k_{\ell-2},k_{\ell-1}) }h_{\gamma})\] \[=\eta_{0}a^{k_{1}}\eta_{1}a^{\frac{1}{2}(k_{2}-k_{0})}\cdots\eta_ {\ell-1}a^{-\max(k_{\ell-2},k_{\ell-1})}h_{\gamma},\] so \[\gamma_{G}(1)h_{\gamma}^{-1}=\eta_{0}a^{k_{1}}\eta_{1}a^{\frac{1}{2}(k_{2}-k_{0 })}\cdots\eta_{\ell-1}a^{-\max(k_{\ell-2},k_{\ell-1})}.\] Note, though, that because the labels \(k_{j}\) come from an incrementation, each of the powers of \(a\) in this expression is either \(a^{-1}\), \(a^{0}=e\), or \(a^{1}=a\), with the sum of the first \(j\) powers of \(a\) precisely equal to \(k_{j}\). Moreover, since the incrementation is from \(0\) to \(0\), the elements \(a\) and \(a^{-1}\) must occur in pairs, so that \(\gamma_{G}(1)h_{\gamma}^{-1}\) is in the smallest subgroup containing \(\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)\) closed under conjugation by powers of \(a\). Thus, \(\gamma_{G}(1)h_{\gamma}^{-1}\) is contained in the desired subgroup. Finally, because every \(\eta_{j}\in\operatorname{Hol}_{e}(q_{{}_{H}}^{-1}(U),\omega)\) can be realized in the above expression for some \(\gamma\) with the given incrementation, we can get every element of the desired subgroup by considering paths \(\gamma\) with different incrementation from \(0\) to \(0\), hence \(\operatorname{Hol}_{e}(\mathscr{F},\sigma^{*}\omega)\) is equal to this subgroup.
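To make the group-theoretic content of Proposition 5.10 concrete, the closure it describes — the smallest subgroup containing a given subgroup that is normalized by \(a\) — can be computed directly in a finite toy example. The following sketch is purely illustrative (a permutation group standing in for the holonomy group; it is not tied to any particular Cartan geometry):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Toy stand-ins: H plays the role of Hol_e(q_H^{-1}(U), omega) and `a` the
# role of the development of alpha(e); we close H under conjugation by a
# and a^{-1}, mirroring how the powers of `a` enter the developments above.
H = PermutationGroup([Permutation(4)(0, 1)])   # H = <(0 1)> inside S_5
a = Permutation(0, 1, 2, 3, 4)                 # a 5-cycle

gens = set(H.generators)
while True:
    conj = {a * g * a**-1 for g in gens} | {a**-1 * g * a for g in gens}
    if conj.issubset(PermutationGroup(list(gens)).elements):
        break
    gens |= conj

K = PermutationGroup(list(gens))
print(K.order())                               # 120: here the closure is all of S_5
assert all(a * g * a**-1 in K for g in K.generators)   # K is normalized by a
```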
2310.18216
Kibble-Zurek dynamics in the anisotropic Ising model of the Si(001) surface
As a simplified description of the non-equilibrium dynamics of buckled dimers on the Si(001) surface, we consider the anisotropic 2D Ising model and study the freezing of spatial correlations during a cooling quench across the critical point. We observe a crossover from 1D to 2D behavior. For rapid cooling, we find effectively 1D behavior in the strongly coupled direction, for which we provide an exact analytic solution of the non-equilibrium dynamics. For slower cooling rates, we start to see 2D behavior where our numerical simulations show an approach to the usual Kibble-Zurek scaling in 2D.
Gernot Schaller, Friedemann Queisser, Seyedeh Parya Katoorani, Christian Brand, Christian Kohlfürst, Mark R. Freeman, Alfred Hucht, Peter Kratzer, Björn Sothmann, Michael Horn-von Hoegen, Ralf Schützhold
2023-10-27T15:44:41Z
http://arxiv.org/abs/2310.18216v2
# Sequential Kibble-Zurek dynamics in the anisotropic Ising model of the Si(001) surface ###### Abstract As a simplified description of the non-equilibrium dynamics of buckled dimers on the Si(001) surface, we consider the anisotropic 2D Ising model and study the freezing of spatial correlations during a cooling quench across the critical point. The dependence of the frozen correlation lengths \(\xi_{\parallel}\) and \(\xi_{\perp}\) on the cooling rate obtained numerically matches the Kibble-Zurek scaling quite well. However, we also find that the ratio \(\xi_{\parallel}/\xi_{\perp}\) of their frozen values deviates significantly from the ratio in equilibrium. Supported by analytical arguments, we explain this difference by the fact that the deviation from equilibrium in the weakly coupled direction occurs earlier than in the strongly coupled direction. _Introduction_ Von Neumann once [1] compared non-equilibrium theory to a theory of non-elephants - indicating the richness and complexity of this field, which we are just beginning to understand. In view of the diverging response time near the critical point, continuous phase transitions are prototypical candidates for observing non-equilibrium behavior [2; 3]. A prominent example is the Kibble mechanism describing the formation of topological defects during symmetry-breaking phase transitions in the early universe [4]. Later Zurek realized that quite analogous effects should also occur in condensed matter such as superfluid helium [5]. The Kibble-Zurek mechanism has been studied in numerous theoretical (e.g. [6; 7; 8; 9; 10; 11; 12; 13; 14; 15]) and experimental investigations (e.g. [16; 17; 18; 19; 20; 21]). An important point is the transition from adiabatic evolution to non-equilibrium behavior (such as freezing) when approaching or traversing the critical point. Apart from the original idea of creating topological defects, the general mechanism can also be applied to the frozen domain structure in symmetry-breaking phase transitions induced by the critical slowing down. In the following, we shall study the anisotropic Ising model in two spatial dimensions [22; 23; 24; 25; 26; 27] with special emphasis on possible differences in the non-equilibrium behavior between the two directions. Apart from advancing our fundamental understanding, these investigations are also motivated by the fact that the buckling dynamics of dimers on the Si(001) surface can be described by the anisotropic 2D Ising model [28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Here, we consider the transition from the \(p(2\times 1)\) to the \(c(4\times 2)\) reconstruction at a critical temperature \(T_{\rm crit}\approx 190\ {\rm K}\). Since the (001) face of single-crystalline silicon is among the most important surfaces in both technology and science, our results will also be relevant in this regard. For example, the dependence of the frozen domain structure on the cooling rate indicates how sufficiently homogeneous Si(001) surfaces should be prepared. _Kibble-Zurek scaling_ Let us briefly recapitulate the main arguments leading to the standard Kibble-Zurek scaling. We consider a symmetry-breaking second-order phase transition at the critical temperature \(T_{\rm crit}\).
Approaching the critical point \(T_{\rm crit}\), the equilibrium correlation length \(\xi^{\rm eq}\) obeys the universal scaling behavior \[\xi^{\rm eq}\sim\left|\frac{T-T_{\rm crit}}{T_{\rm crit}}\right|^{-\nu} \equiv|\tau|^{-\nu} \tag{1}\] with the universal critical exponent \(\nu\) and the dimensionless reduced temperature \(\tau\). Similarly, the response or relaxation time \(t_{\rm relax}^{\rm eq}\) (in equilibrium) scales as \[t_{\rm relax}^{\rm eq}\sim|\tau|^{-z\nu}\sim(\xi^{\rm eq})^{z} \tag{2}\] with the dynamical critical exponent \(z\). The divergence of \(t_{\rm relax}^{\rm eq}\) at the critical point is the hallmark of critical slowing down. Now the idea is to infer non-equilibrium properties from these equilibrium values. Let us assume a linear time-dependence for the cooling protocol \(\tau(t)=-\eta t\) with the cooling rate \(\eta\) such that, starting at the time \(t_{\rm in}<0\), the critical point is reached at \(t=0\). Then we may estimate the freezing time via \(|t_{\rm freeze}|=t_{\rm relax}^{\rm eq}(\tau_{\rm freeze})\) where \(\tau_{\rm freeze}=\tau(t_{\rm freeze})\), after which the system has no time to equilibrate any more. Insertion into Eq. (2) yields \(|t_{\rm freeze}|\sim|\eta t_{\rm freeze}|^{-z\nu}\), i.e., \(|t_{\rm freeze}|\sim\eta^{-z\nu/(z\nu+1)}\), and thus we obtain the frozen correlation length from Eq. (1) \[\xi_{\rm freeze}=\xi^{\rm eq}(\tau_{\rm freeze})\sim\eta^{-\nu/(z\nu+1)}\,, \tag{3}\] which is referred to as Kibble-Zurek scaling [16; 9; 20]. _Anisotropic Ising model_ Now let us apply these ideas to the non-equilibrium dynamics of buckled dimers on the Si(001) surface which form a rectangular lattice. If we describe the tilt of the dimer at lattice site \(i,j\in\mathbb{Z}\) to the left or the right by the pseudo-spin variable \(\sigma_{i,j}=+1\) or \(\sigma_{i,j}=-1\), respectively, the resulting energy landscape corresponds to the anisotropic Ising model [28; 29; 30; 32; 33; 34; 35] \[E_{\mathbf{\sigma}}=-J_{x}\sum_{i,j}\sigma_{i,j}\sigma_{i+1,j}-J_{y}\sum_{i,j}\sigma_{i,j}\sigma_{i,j+1}-J_{\times}\sum_{i,j}\sigma_{i,j}[\sigma_{i+1,j+1}+\sigma_{i+1,j-1}] \tag{4}\] with a strong anti-ferromagnetic coupling \(J_{x}\approx-25\) meV in \(x\)-direction (i.e., along the dimer rows) and weaker couplings \(J_{y}\approx 3.2\) meV in \(y\)-direction (i.e., across the rows) as well as in diagonal direction \(J_{\times}\approx 2.0\) meV [36; 37]. The latter two can be combined into an effective transversal coupling \(J_{\perp}=J_{y}-2J_{\times}\approx-0.8\) meV. As a result, the Ising model (4) favors anti-ferromagnetic order both in \(x\)- and \(y\)-direction. For convenience, we apply a checker-board transformation \(\sigma_{i,j}\to(-1)^{i+j}\sigma_{i,j}\) after which we have ferromagnetic order since \(J_{x}\) and \(J_{y}\) change sign. As explained above, the Kibble-Zurek scaling is derived from the equilibrium properties. For the Ising model (4), they can be obtained by Onsager theory [38]. In terms of the longitudinal \(J_{\parallel}=J_{x}\) and transversal \(J_{\perp}\) couplings, the critical temperature \(T_{\rm crit}=1/(k_{\rm B}\beta_{\rm crit})\) is determined by the relation \(\sinh(2\beta_{\rm crit}J_{\parallel})\sinh(2\beta_{\rm crit}J_{\perp})=1\). Thus, in the limit of strong anisotropy \(J_{\parallel}\gg J_{\perp}\), we obtain the hierarchy of scales \(J_{\parallel}\gg\beta_{\rm crit}^{-1}\gg J_{\perp}\).
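As a quick numerical cross-check (ours, not from the paper), one can solve the Onsager criticality condition for the quoted coupling magnitudes and recover \(T_{\rm crit}\approx 190\) K:

```python
import numpy as np
from scipy.optimize import brentq

# Solve sinh(2*beta*J_par) * sinh(2*beta*J_perp) = 1 for T_crit, using the
# magnitudes |J_par| = 25 meV and |J_perp| = 0.8 meV quoted in the text.
kB = 0.08617  # meV / K
J_par, J_perp = 25.0, 0.8

def crit(T):
    beta = 1.0 / (kB * T)
    return np.sinh(2 * beta * J_par) * np.sinh(2 * beta * J_perp) - 1.0

print(f"T_crit = {brentq(crit, 50.0, 1000.0):.0f} K")  # ~190 K, as quoted for Si(001)
```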
Approaching the critical point from above, the correlation lengths \(\xi_{\parallel}\) and \(\xi_{\perp}\) in \(x\)- and \(y\)-direction (i.e., along the dimer rows and perpendicular to them) both obey the scaling (1) with the critical exponent \(\nu=1\), though with different prefactors [24; 22]. Thus, their ratio stays constant and is given by \(\xi_{\perp}/\xi_{\parallel}=\sinh(2\beta_{\rm crit}J_{\perp})\approx 2\beta_{ \rm crit}J_{\perp}\). _Rate equations_ For the 2D Ising model, the dynamical critical exponent reads \(z=2+\varepsilon\) where \(\varepsilon\) is a small and positive number [39; 40; 41; 42; 43; 44; 45]. Thus, the exponent in the Kibble-Zurek scaling relation (3) is roughly minus one third. To test this relation, we have to study the non-equilibrium dynamics of the Ising model (4). To this end, we employ rate equations for the probabilities \(P_{\mathbf{\sigma}}\) of the configurations \(\mathbf{\sigma}\) in the standard form \[\dot{P}_{\mathbf{\sigma}}=\sum_{\mathbf{\sigma}^{\prime}}[R_{\mathbf{\sigma}^{\prime} \to\mathbf{\sigma}}P_{\mathbf{\sigma}^{\prime}}-R_{\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}} P_{\mathbf{\sigma}}]\,. \tag{5}\] Neglecting correlated flips of two or more pseudo-spins (i.e., dimers), we use single-flip transition rates \[R_{\mathbf{\sigma}^{\prime}\to\mathbf{\sigma}}=\frac{\Gamma\exp\{-\beta E_{\rm B}\}}{ \exp\{\beta(E_{\mathbf{\sigma}}-E_{\mathbf{\sigma}^{\prime}})\}+1}\,. \tag{6}\] The "knocking" frequency \(\Gamma\approx 10^{12}/\)s and Arrhenius barrier height \(E_{\rm B}\approx 100\) meV are obtained from microscopic considerations [46; 37; 47]. The Glauber factor in the denominator can also be motivated by microscopic models, e.g., in the form of a reservoir of two-level systems or via fermionic tunneling. It ensures that the rate is bounded, \(R_{\mathbf{\sigma}^{\prime}\to\mathbf{\sigma}}<\Gamma\exp\{-\beta E_{\rm B}\}\), and satisfies the detailed balance condition \(R_{\mathbf{\sigma}^{\prime}\to\mathbf{\sigma}}/R_{\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}}= \exp\{\beta(E_{\mathbf{\sigma}^{\prime}}-E_{\mathbf{\sigma}})\}\), which enforces convergence to thermal equilibrium for constant parameters \(\Gamma\) and \(\beta\). _Numerical simulations_ Due to the exponential dimensionality of (5) for an \(N_{x}\times N_{y}\) spin lattice, we calculate trajectory solutions. For a given configuration \(\mathbf{\sigma}\), we propagate time by the stochastic waiting time \(\tau_{\mathbf{\sigma}}\) found by numerically solving \(\ln(1-r)=-\sum_{\mathbf{\sigma}^{\prime}}\int_{t}^{t+\tau_{\mathbf{\sigma}}}R_{\mathbf{\sigma} \to\mathbf{\sigma}^{\prime}}(t^{\prime})dt^{\prime}\), with a uniformly distributed random number \(r\in[0,1]\), and perform a jump to a different state with the conditional probability [48; 49] given by \(P_{\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}}=R_{\mathbf{\sigma}\to\mathbf{\sigma}^{\prime}} /[\sum_{\mathbf{\sigma}^{\prime\prime}\neq\mathbf{\sigma}}R_{\mathbf{\sigma}\to\mathbf{\sigma}^{ \prime\prime}}]\). In the selection of jumps, we take advantage [50] of the fact that the \(N_{x}N_{y}\) different single-spin flip processes can be grouped into 45 classes with identical energy differences entering the rates (6).
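To illustrate how the rates (6) enter such trajectory simulations, here is a deliberately simplified sketch of our own: single-flip Glauber dynamics for the Hamiltonian (4) on a small periodic lattice during a linear cooling ramp, with a fixed sweep-based time step instead of the exact stochastic waiting times and the 45-class bookkeeping (lattice size and ramp are illustrative choices, and the pure-Python loop is unoptimized and slow):

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny = 48, 16                        # toy lattice (the paper uses up to 16000 x 2000)
Jx, Jy, Jd = -25.0, 3.2, 2.0           # couplings of Eq. (4) in meV
Gamma, EB = 1e12, 100.0                # attempt frequency (1/s), Arrhenius barrier (meV)
kB = 0.08617                           # meV / K

def field(s, i, j):
    # neighbor field h such that flipping sigma_{ij} costs dE = 2*sigma_{ij}*h, Eq. (4)
    h = Jx * (s[(i + 1) % Nx, j] + s[(i - 1) % Nx, j])
    h += Jy * (s[i, (j + 1) % Ny] + s[i, (j - 1) % Ny])
    h += Jd * (s[(i + 1) % Nx, (j + 1) % Ny] + s[(i + 1) % Nx, (j - 1) % Ny]
               + s[(i - 1) % Nx, (j + 1) % Ny] + s[(i - 1) % Nx, (j - 1) % Ny])
    return h

s = rng.choice([-1, 1], size=(Nx, Ny))
t_phys = 0.0
for T in np.linspace(290.0, 120.0, 1000):      # linear cooling ramp across T_crit ~ 190 K
    beta = 1.0 / (kB * T)
    for _ in range(Nx * Ny):                   # one sweep of random single-flip attempts
        i, j = rng.integers(Nx), rng.integers(Ny)
        dE = 2.0 * s[i, j] * field(s, i, j)
        if rng.random() < 1.0 / (np.exp(beta * dE) + 1.0):   # Glauber factor of Eq. (6)
            s[i, j] *= -1
    t_phys += np.exp(beta * EB) / Gamma        # one attempt per site at rate Gamma*exp(-beta*EB)

stag = np.abs((s * (-1) ** np.arange(Nx)[:, None]).mean(axis=0))
print(f"elapsed physical time ~ {t_phys:.1e} s, "
      f"mean staggered row magnetization = {stag.mean():.2f}")
```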
Finally, denoting the fast-Fourier-transformed spin lattice by \(\tilde{\sigma}_{k_{x}k_{y}}\), the correlation lengths \(\xi_{\parallel}\) and \(\xi_{\perp}\) are then given by the inverse widths of the one-dimensional spectra \(\sum_{k_{y}}|\tilde{\sigma}_{k_{x}k_{y}}|^{2}\) and \(\sum_{k_{x}}|\tilde{\sigma}_{k_{x}k_{y}}|^{2}\), respectively. Averaging over multiple trajectories (and the resulting \(|\tilde{\sigma}_{k_{x}k_{y}}|^{2}\)) can be used to improve the statistics. In Fig. 1, we contrast the time-dependent averaged correlation lengths (solid curves) with equilibrium versions (symbols) for a cooling sweep. Already at temperatures above \(T_{\rm crit}\), the correlation lengths depart from their equilibrium limits, but furthermore we see that this happens earlier for the weakly coupled direction.

Figure 1: Plot of inverse correlation lengths versus temperature (or, equivalently, time) for a cooling sweep for a \(16000\times 2000\) lattice, averaged over 100 trajectories. The correlation length in the weakly-coupled direction (yellow curve) departs earlier than the other (brown curve) from the equilibrium solutions (red squares and black circles). On top, we added snap-shots of the time evolution of an example configuration at the respective temperatures as an illustration.

The final (i.e., frozen) correlation lengths are depicted in Fig. 2 as a function of the cooling rate \(\eta\propto 1/t_{\rm prot}\). For very fast sweeps, the system cannot follow and basically remains at the initial equilibrium values. For intermediate-speed sweeps, we find that both final correlation lengths follow a universal power-law increase, consistent with the Kibble-Zurek exponent \(\nu/(1+z\nu)\approx 1/3\) in Eq. (3), see the fitted regions in Fig. 2. For very slow sweeps, we find that finite-size effects start to play a role. Again, since the frozen correlation lengths \(\xi_{\parallel}^{\rm freeze}\) and \(\xi_{\perp}^{\rm freeze}\) both obey the scaling (3) in this intermediate region, though with different pre-factors, their ratio \(\xi_{\perp}^{\rm freeze}/\xi_{\parallel}^{\rm freeze}\) is roughly constant - similar to the equilibrium case discussed above where \(\xi_{\perp}^{\rm eq}/\xi_{\parallel}^{\rm eq}=\sinh(2\beta_{\rm crit}J_{\perp}) \approx 2\beta_{\rm crit}J_{\perp}\). However, we find that these two ratios are not the same, but differ by roughly a factor of two. _1D Ising model_ In order to understand the difference between the strongly and the weakly coupled directions found above, let us first consider the limiting case \(J_{y}\to 0\) and \(J_{\times}\to 0\) of the 2D Ising model (4) where each row \(j\) separately forms a 1D Ising model with \(J=J_{x}\) \[E_{\mathbf{\sigma}}^{\rm 1D}=-J\sum_{i}\sigma_{i}\sigma_{i+1}\,. \tag{7}\] Assuming translational invariance, we may derive an exact evolution equation for the correlator \(c_{a}=\langle\sigma_{i}\sigma_{i+a}\rangle\) depending on distance \(a\). Furthermore, let us introduce the dimensionless conformal time coordinate \(\mathfrak{T}\) with adapted step size \(d\mathfrak{T}/dt=\Gamma e^{-\beta E_{\rm B}}\) such that \[\partial_{\mathfrak{T}}c_{a}=-2c_{a}+(c_{a+1}+c_{a-1})\tanh(2\beta J)\,, \tag{8}\] with the boundary condition \(c_{a=0}=1\). Setting the left-hand side of Eq. (8) to zero yields the well-known equilibrium solution \(c_{a}=[\tanh(\beta J)]^{a}\).
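Eq. (8) is straightforward to integrate numerically. The following minimal sketch (ours; explicit Euler, truncated at a finite maximal distance, with an illustrative value of \(\beta J\)) confirms relaxation to the stated equilibrium solution:

```python
import numpy as np

beta_J = 1.0                        # illustrative fixed temperature
a_max, dT, steps = 200, 5e-3, 40000

c = np.zeros(a_max + 1)
c[0] = 1.0                          # boundary condition c_{a=0} = 1
t2 = np.tanh(2.0 * beta_J)
for _ in range(steps):
    nb = np.zeros_like(c)
    nb[1:-1] = c[2:] + c[:-2]       # c_{a+1} + c_{a-1}
    nb[-1] = 2.0 * c[-1]            # crude truncation at a_max
    dc = -2.0 * c + t2 * nb         # right-hand side of Eq. (8)
    dc[0] = 0.0                     # keep the boundary value fixed
    c += dT * dc

exact = np.tanh(beta_J) ** np.arange(a_max + 1)
print("max deviation from tanh(beta*J)^a:", np.abs(c[:100] - exact[:100]).max())
```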
Note that the 1D Ising model (7) does not have a critical point at finite temperature \(T_{\rm crit}>0\); instead, the analogue of a critical point occurs at zero temperature \(T_{\rm crit}=0\) where \(\xi\) diverges as \(\xi\sim e^{2\beta J}\) [23]. In order to understand the non-equilibrium dynamics governed by Eq. (8), let us consider the continuum limit where \(c_{a+1}+c_{a-1}-2c_{a}\) becomes the second spatial derivative such that we obtain a diffusion-dissipation equation \(\partial_{\mathfrak{T}}c=\mathfrak{D}\partial_{x}^{2}c-\gamma c\). For large temperatures, the diffusion coefficient is small \(\mathfrak{D}\propto\tanh(2\beta J)\approx 2\beta J\ll 1\) and the damping term \(\gamma\approx 2\) dominates. For small temperatures, the damping rate \(\gamma=2-2\tanh(2\beta J)\) is suppressed as \(4e^{-4\beta J}\) and the diffusion term \(\mathfrak{D}\propto\tanh(2\beta J)\approx 1\) dominates. In analogy to the response time \(t_{\rm relax}\) in Eq. (2), we may introduce a response or relaxation time \(\mathfrak{T}_{\rm relax}\) from the inverse damping rate \(1/\gamma\) which then scales as \(\mathfrak{T}_{\rm relax}\sim e^{4\beta J}\), i.e., \(\mathfrak{T}_{\rm relax}\sim\xi^{2}\). Note, however, that the diffusion coefficient stays finite even for \(\mathfrak{T}_{\rm relax}\to\infty\), i.e., diffusion is still possible. _Freezing in 1D_ Since analyzing the non-equilibrium dynamics by means of analytic solutions of Eq. (8) is still quite involved, let us consider the weighted sum of correlations \(\mathfrak{C}=\sum_{a=1}^{\infty}ac_{a}\) which obeys the simpler evolution equation \[\partial_{\mathfrak{T}}\mathfrak{C}=[2\tanh(2\beta J)-2]\mathfrak{C}+\tanh(2 \beta J)\,. \tag{9}\] In order to provide an explicit example and to study the analogue of critical slowing down, let us assume the simple cooling protocol \(\beta(t)=\kappa t\) (i.e., \(T(t)\propto 1/t\)) starting at infinite temperature at \(t=0\) and cooling down to zero temperature at \(t\to\infty\). Then this infinite interval of laboratory time \(t\in(0,\infty)\) is mapped to a finite interval of conformal time \(\mathfrak{T}\in(-\Gamma/[\kappa E_{\rm B}],0)=(\mathfrak{T}_{\rm in},0)\). Incidentally, for our values with \(E_{\rm B}\approx 4J_{x}\), we may simplify Eq. (9) even further. For low temperatures \(\beta J\gg 1\), the source term \(\tanh(2\beta J)\) can be approximated by unity and the damping rate \(\gamma\) behaves as \(4e^{-4\beta J}\) which for \(E_{\rm B}=4J\) becomes \(4\mathfrak{T}/\mathfrak{T}_{\rm in}\). As a result, Eq. (9) simplifies to \(\partial_{\mathfrak{T}}\mathfrak{C}=-4\mathfrak{C}\mathfrak{T}/\mathfrak{T}_{ \rm in}+1\). The solution to this equation can be given in terms of the error function, but we may understand its behavior by means of general arguments. In the limit of slow cooling rates considered here, we have \(|\mathfrak{T}_{\rm in}|\gg 1\). Then, starting with \(\mathfrak{C}=0\) at \(\mathfrak{T}=\mathfrak{T}_{\rm in}\), the value of \(\mathfrak{C}\) quickly approaches its instantaneous equilibrium value \(\mathfrak{C}_{\rm eq}=\mathfrak{T}_{\rm in}/(4\mathfrak{T})\). However, once the response time \(\mathfrak{T}_{\rm relax}\sim\mathfrak{T}_{\rm in}/\mathfrak{T}\) becomes comparable to the remaining time, \(\mathfrak{T}_{\rm relax}\sim|\mathfrak{T}|\), the system cannot equilibrate anymore and thus the value of \(\mathfrak{C}\) freezes in at \(|\mathfrak{T}_{\rm freeze}|\sim\sqrt{|\mathfrak{T}_{\rm in}|}\) to its final value \(\mathfrak{C}_{\rm freeze}\sim\sqrt{|\mathfrak{T}_{\rm in}|}\).
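The square-root growth of the frozen value with \(|\mathfrak{T}_{\rm in}|\) can be checked directly by integrating the simplified equation; the sketch below (ours) fits the scaling exponent of \(\mathfrak{C}(\mathfrak{T}=0)\) over several initial times:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dC/dT = -4*C*T/T_in + 1 from T_in < 0 to T = 0 with C(T_in) = 0
# and extract how the frozen value C(0) scales with |T_in|.
def frozen(T_in):
    sol = solve_ivp(lambda T, C: -4.0 * C * T / T_in + 1.0,
                    (T_in, 0.0), [0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

T_ins = -np.logspace(2, 5, 4)                  # T_in = -1e2 ... -1e5
C0 = np.array([frozen(T) for T in T_ins])
slope = np.polyfit(np.log(-T_ins), np.log(C0), 1)[0]
print(f"fitted exponent: {slope:.3f}")         # ~0.5, i.e., C_freeze ~ sqrt(|T_in|)
```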
Using the asymptotic behavior of the error function, we may also determine the pre-factor to \(\mathfrak{C}_{\rm freeze}=\sqrt{|\mathfrak{T}_{\rm in}|/(2\pi)}\). Furthermore, as \(\mathfrak{C}\) scales with the square of the correlation length \(\xi_{\parallel}\), we obtain \(\xi_{\parallel}^{\rm freeze}\sim|\mathfrak{T}_{\rm in}|^{1/4}=(\Gamma/[\kappa E_{ \rm B}])^{1/4}\), which is the analogue to the Kibble-Zurek scaling (3) for this case.

Figure 2: Plot of the final (frozen) correlation lengths for different lattice sizes and averaged over 100 trajectories [\(N_{x}\times N_{y}\times N_{\rm trj}\)] for different cooling sweeps, where the system is cooled down from \(k_{\rm B}T_{\rm in}=25\) meV to \(k_{\rm B}T_{\rm out}=10\) meV (as in Fig. 1) in various time intervals, i.e., protocol times \(t_{\rm prot}\). For too fast protocols (left), the system can never follow, but Kibble-Zurek scaling is recovered for intermediate protocol times. Finite-size effects are visible for slow protocols.

_2D Ising model_ Now we can apply our findings to the anisotropic Ising model (4). Again assuming translational invariance, the evolution equation for the correlations \(c_{a,b}=\langle\sigma_{i,j}\sigma_{i+a,j+b}\rangle\) reads \[\partial_{\mathfrak{T}}c_{a,b}=-2c_{a,b}+(c_{a+1,b}+c_{a-1,b})\tanh(2\beta J_{x})+\mathcal{O}(J_{y},J_{\times})\,. \tag{10}\] Here, we used the separation of scales \(J_{x},E_{\rm B}\gg J_{y},J_{\times}\) in order to keep the zeroth order explicitly, while all terms which are suppressed by \(J_{y}\) or \(J_{\times}\) are collected in \(\mathcal{O}(J_{y},J_{\times})\). Note that these terms actually contain factors of \(\tanh(2\beta J_{y})\) and \(\tanh(2\beta J_{\times})\), i.e., they would not grow without bound even for extremely low temperatures \(\beta\sim 1/J_{y,\times}\). At such ultra-low temperatures, the rates (6) are exponentially suppressed anyway by the barrier \(E_{\rm B}\), i.e., basically no flips would occur anymore. Let us study the behavior resulting from Eq. (10). Since there are no source terms in the zeroth-order part of Eq. (10) for \(b\neq 0\), correlations between rows can only be created by the small terms \(\mathcal{O}(J_{y},J_{\times})\), which also contain the source term \(c_{0,0}=1\). Without them, inter-row correlations \(c_{a,b\neq 0}\) are only damped or diffused. Since the diffusion in \(x\)-direction (i.e., along the rows) is very fast, let us focus on the remaining slower evolution by considering the total correlator between rows \(\mathfrak{R}_{b}=\sum_{a=-\infty}^{\infty}c_{a,b}\), \[\partial_{\mathfrak{T}}\mathfrak{R}_{b}=[2\tanh(2\beta J_{x})-2]\mathfrak{R}_{ b}+\mathcal{O}(J_{y},J_{\times})\,, \tag{11}\] in analogy to Eq. (9). In the following, we consider the total correlation between neighboring rows \(b=1\). In equilibrium, this quantity scales with \(\xi_{\parallel}\), provided that we assume a correlation length \(\xi_{\perp}\) of order unity or more. Thus, if \(\xi_{\parallel}\) becomes very large, the system would need to shuffle more and more correlations from one row to the next in order to stay close to equilibrium. However, since this can only be done via the small coupling terms \(\mathcal{O}(J_{y},J_{\times})\), this bottleneck limits the growth of \(\mathfrak{R}_{b}\) and thus the system departs from equilibrium at some point. Since this departure time is determined by the smallness of \(\mathcal{O}(J_{y},J_{\times})\), the smaller these couplings are, the earlier this departure occurs.
Thus, for small \(\mathcal{O}(J_{y},J_{\times})\), this departure from equilibrium in the weakly coupled direction will occur earlier than in the strongly coupled direction, where the system can still be close to equilibrium. _Conclusions_ For a cooling quench of the anisotropic 2D Ising model across the critical point, we studied the freezing of the spatial correlation lengths \(\xi_{\perp}\) and \(\xi_{\parallel}\). We found that their dependence on the cooling rate obtained numerically matches the Kibble-Zurek scaling quite well. However, as a distinct signature of the non-equilibrium dynamics in the presence of the anisotropy, we also observed that the ratio \(\xi_{\perp}^{\rm freeze}/\xi_{\parallel}^{\rm freeze}\) of their frozen values differs from the ratio in equilibrium \(\xi_{\perp}^{\rm eq}/\xi_{\parallel}^{\rm eq}\) by roughly a factor of two. This difference contrasts with the simple picture which underlies the Kibble-Zurek scaling and assumes that the whole system stays close to equilibrium before the freezing time \(t_{\rm freeze}\) and basically does not evolve anymore afterwards. Instead, our combination of analytical and numerical methods shows that the non-equilibrium dynamics in the two directions is different and cannot be grasped by a single freezing time \(t_{\rm freeze}\). Finally, let us discuss potential experimental evidence for the formation of a frozen domain structure for the Si(001) dimerized surface. The surface exhibits parallel rows of alternately buckled dimers which arrange in a \(c(4\times 2)\) reconstruction, indicating the anti-phase correlation between neighboring dimer rows [51; 52]. Fig. 3 shows a low-temperature scanning tunneling microscopy image taken at 5 K after preparation of the Si(001) surface through flash annealing and rapid cool-down to liquid nitrogen temperatures \(T<100\) K. The cooling rate was on the order of 1-10 K/s. Further experimental details can be found elsewhere [47]. The STM image was taken at constant current conditions with positive sample bias, i.e., in Fig. 3 filled orbitals of the Si atoms are displayed in bright. The dimer rows run vertically from top to bottom. In each row the alternating buckling along the dimer row can nicely be identified. The anti-phase correlation between neighboring dimers causes the \(c(4\times 2)\) reconstruction which becomes apparent as a "honeycomb" pattern. During the rapid cool-down the regime of critical slowing down is reached for \(T>T_{\rm crit}\), resulting in a frozen domain structure which is apparent in Fig. 3. The domain boundaries can be identified as one-dimensional "defects" separating ordered areas with a \(c(4\times 2)\) reconstruction.

Figure 3: Low temperature STM image of a Si(001) surface taken at 5 K with \(U_{\rm bias}=1.3\) V and \(I_{\rm tunnel}=1\) nA. Field of view is \(24\times 16\) nm\({}^{2}\). Areas with \(c(4\times 2)\) reconstruction exhibit a "honeycomb" pattern. Domain boundaries of the frozen domain structure can be identified by a zig-zag chain of local \(p(2\times 2)\) reconstruction. The two dark spots are missing-dimer vacancies and fuzzy vertical lines correspond to active phase boundary changes ("phasons").

As expected from our findings described above and from electron diffraction [36; 37], these ordered domains are extremely elongated. However, albeit quite intriguing, these observations can only be interpreted as a "smoking gun" for the non-equilibrium dynamics studied here and further studies are required to settle this issue.
_Acknowledgments_ Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Collaborative Research Center SFB 1242 "Nonequilibrium dynamics of condensed matter in the time domain" (Project-ID 278162697).
2308.02004
The Quest for the Nature of the Dark Matter: The Need of a New Paradigm
The phenomenon of the Dark matter baffles the researchers: the underlying dark particle has escaped so far the detection and its astrophysical role appears complex and entangled with that of the standard luminous particles. We propose that, in order to act efficiently, alongside with abandoning the current $\Lambda CDM$ scenario, we need also to shift the Paradigm from which it emerged.
Fabrizio Nesti, Paolo Salucci, Nicola Turini
2023-08-03T19:39:03Z
http://arxiv.org/abs/2308.02004v1
# The Quest for the Nature of the Dark Matter: The Need of a New Paradigm ###### Abstract The phenomenon of the Dark matter baffles the researchers: the underlying dark particle has escaped so far the detection and its astrophysical role appears complex and entangled with that of the standard luminous particles. We propose that, in order to act efficiently, alongside with abandoning the current \(\Lambda CDM\) scenario, we need also to shift the Paradigm from which it emerged. dark matter; galaxy structure; cosmology ## 1 The Phenomenon of Dark Matter The phenomenon of the Dark Matter is one of the most intriguing mysteries in the Universe. In fact, not only does it imply the existence of unknown Science and in particular of unknown Physics, but it concerns the fabric itself of the Universe. A new Law of Nature, yet to be discovered, seems to be at work. As Zwicky found back in the 30's [1] and Vera Rubin in the late 70's [2], the law of Gravity seems to fail in Clusters of Galaxies and in (Disk) Galaxies. Especially in the latter, one detects large anomalous motions: the stars and the gaseous component in a galaxy do not move as they should do under their own gravity, but as if they were attracted by something invisible. Disk systems can be divided into normal spirals, dwarf irregulars and Low Surface Brightness galaxies. In all these objects, the equilibrium between the force of Gravity and the motions that oppose it has a simple realization: the stars (and the subdominant HI gas) rotate around the galaxy center. However, we realize [1] that such rotation is very much unrelated to the spatial distribution of the stars and gas, in strong disagreement with the Newton Law of Gravity. The objects belonging to the above most common types of galaxies are relatively simple to investigate, in that we have: \[R\frac{d\Phi(R)}{dR}=V^{2}(R) \tag{1}\] where the (measured) circular velocity and the total galaxy gravitational potential are indicated by \(V(R)\) and \(\Phi(R)\). A disk of stars is their main luminous component, whose surface mass density \(\Sigma_{*}(R)\),[2] proportional to the surface luminosity measured by the photometry, is well approximated by [3]: \[\Sigma_{*}(R)=\frac{M_{D}}{2\pi R_{D}^{2}}\,e^{-R/R_{D}} \tag{2}\] where \(M_{D}\) is the mass of the stellar disk to be determined and \(R_{D}\) is its scale length measured from the photometry. At \(R\geq 3\)\(R_{D}\), in all objects in a similar fashion, this component rapidly disappears, so that \(R_{D}\) plays the role of the characteristic radius of the stellar matter. Equation (1) together with the Poisson equation for this component (in cylindrical coordinates, \(\delta\) is the Dirac delta function): \[\Delta\Phi_{\star}(R,z)=4\pi G\;\Sigma_{\star}(R)\delta(z) \tag{3}\] yields \(V_{\star}(y)\), the luminous matter contribution to the circular velocity \(V(R)\). With \(y\equiv R/R_{D}\) and \[v_{\star}^{2}(y)\equiv\frac{G^{-1}V_{\star}^{2}(y)R_{D}}{M_{D}}\] we have (\(I,K\) are the Bessel functions evaluated at \(y/2\)): \[v_{\star}^{2}(y)=\frac{1}{2}\,y^{2}(I_{0}\,K_{0}-I_{1}\,K_{1})|_{y/2} \tag{4}\] It is interesting to briefly show the dynamical evidence for the presence of a DM halo in the above rotating galaxies and how to derive its spatial density. Defining \(\nabla\equiv d\log V/d\log R\), from the above equations, we have: \(\nabla_{\star}(y)\simeq 0.87-0.5\;y+0.043\;y^{2}\).
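Equation (4) is easy to evaluate with standard Bessel-function routines; the short sketch below (our illustration, not part of the paper) also compares the resulting logarithmic slope \(\nabla_{\star}\) with the quadratic approximation just quoted:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def v_star_sq(y):
    # dimensionless Freeman-disk contribution of Eq. (4): (1/2) y^2 (I0 K0 - I1 K1)|_{y/2}
    h = y / 2.0
    return 0.5 * y**2 * (i0(h) * k0(h) - i1(h) * k1(h))

y = np.linspace(0.5, 3.0, 6)
eps = 1e-6
# nabla_star = dlog V_star / dlog y = (1/2) dlog v_star^2 / dlog y, by finite differences
grad = 0.5 * (np.log(v_star_sq(y + eps)) - np.log(v_star_sq(y - eps))) \
           / (np.log(y + eps) - np.log(y - eps))
print(np.round(grad, 3))                            # numerical nabla_star(y)
print(np.round(0.87 - 0.5 * y + 0.043 * y**2, 3))   # quoted quadratic approximation
```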
According to Newtonian gravity one should expect \(\nabla(y)=\nabla_{\star}(y)\); instead, we find \(\nabla(y)>\nabla_{\star}(y)\) (a) at all radii \(y\) in galaxies with steep rotation curves (\(\nabla(2)>0.5\)) and (b) for \(y>2\), in galaxies with a flatter RC (e.g., see Fig. 1). In order to restore the law of Gravity one adds a "spherical dark halo" component for which:[3] \[V_{h}^{2}(y)=-V_{\star}^{2}(y)+V^{2}(y) \tag{5}\] with the constraint: \[\nabla_{h}(y)=\frac{\nabla(y)\;V^{2}(y)-\nabla_{\star}(y)\;V_{\star}^{2}(y)}{V ^{2}(y)-\;V_{\star}^{2}(y)} \tag{6}\] Then, we have: \(V_{h}^{2}(R)=\frac{G}{R}\int_{0}^{R}4\pi\,\rho_{h}(r)\,r^{2}\,dr\) with \(\rho_{h}(R)\) the DM halo density. It is well known that the above aspects of the "Dark Matter Phenomenon" are present also in the other types of galaxies (see, e.g., [5]) and imply the existence of a massive particle that does not interact with the standard matter via the electromagnetic or strong force.

Figure 1: M33: the profile of the stellar disk contribution to the circular velocity does not coincide with the profile of the latter, being at all radii: \(\nabla>\nabla_{\star}\) (from [4]).

Remarkably, these are also needed to explain a number of cosmological observations including the rate of the expansion of the universe, the anisotropies in the Cosmic Background Radiation, the properties of the large scale structures, the phenomenology of the gravitational lensing of distant galaxies by nearby clusters of galaxies, and the existence itself of galaxies (e.g., see: [6; 7]) [4]. The starting point to account for all the above has been, therefore, to postulate the ubiquitous presence in the Universe of massive particles that emit radiation at a level which is totally negligible with respect to that emitted by the Standard Model (SM) particles. Then, this particle, by definition beyond the SM, is hidden to us also when it aggregates in vast amounts. As a matter of fact, we take the dark particle option as a foundation of Physics and Cosmology. However, one has to stress that this does not automatically allow us to infer either the mass or the nature itself of such a particle. Furthermore, the present status of "darkness" means that the particle has very small, but not necessarily zero, self-interactions or interactions with the SM particles, with a number of cosmological, physical and astrophysical consequences. ## 2 The Standard Paradigm for the Dark Matter Phenomenon The next step of the investigation has been to endow the particle behind the Dark Matter Phenomenon with a theoretical scenario. First, let us introduce the concept of a "Paradigm for the Dark Matter Phenomenon", i.e., a refined set of properties that the _actual_ DM scenario must possess and that, in turn, reveals the nature itself of the dark particle. After the first "detections" of DM in the Universe, a Paradigm has, indeed, emerged and lasted until today. According to this paradigm, the correct _scenario_ behind the DM Phenomenon must have the following properties: 1. it connects the (new) Dark Matter physics with the (known) physics of the Early Universe; it introduces in a natural way the required massive dark particle and relates it with the value of the cosmological mass density of the expanding Universe. 2. it is mathematically described by a very small number of parameters and by very well known and specific initial conditions, while having, at the same time, a strong predictive power on the evolution of the structures of the Universe.
Furthermore (and far from being obvious), such evolution can be thoroughly followed by proper numerical simulations. 3. its (unique) dark particle can be detected by experiments and observations with the present technology. 4. it sheds light on issues of the Standard Model particle physics. 5. it provides us with hints for solving long standing big issues of Physics. In other words, the ruling paradigm heads us towards scenarios for the dark matter phenomenon that are very beautiful, hopefully towards the most beautiful one, where beauty is meant in the sense of simplicity, naturalness, usefulness, achieving expectations and harmoniously extending our knowledge. For definiteness and clarity of the discussion, we name this paradigm the "Apollonian paradigm". Let us point out that, in doing so, we just _name_ concepts that emerged and solidified in the mid 80s and that, since then, have been used as lighthouses in the investigation of the DM mystery. Continuing our narration, this procedure has proven very successful: the above Paradigm has straightforwardly led Cosmologists to one specific scenario, the well known \(\Lambda\)CDM (e.g., [6; 7]), that proved able to reproduce several crucial aspects of the DMP. Let us also stress that the Apollonian paradigm, as a consequence of its definition, in addition to providing us with a very strong candidate for the actual scenario behind the dark particle, is also linked very directly to the (in)successes that the latter has in reproducing the DMP. Thus, adopting a priori the above scenario and adhering to the originating paradigm are conceptually the same thing. Finally, the \(\Lambda\)CDM scenario is rather unique: in the past 30 years no other scenario has emerged with such complete Apollonian status. \(\Lambda\) stands for the Dark Energy, accounting for 70% of the total energy density of the Universe, and CDM for Cold Dark Matter. Cold refers to the fact that the dark matter particles move very slowly compared to the speed of light. Dark means that these particles, in normal circumstances, do not interact with the ordinary matter via the electromagnetic force but only very feebly, with a velocity-averaged annihilation cross section of the order of \(3\times 10^{-26}\) cm\({}^{3}\)/s characteristic of the Weak Force. This specific value of the cross section, inserted in the Physics of the early Universe, makes the predicted WIMP (Weakly Interacting Massive Particles) relic density compatible with the observed value of about \(3\times 10^{-30}\) g/cm\({}^{3}\) (e.g., [6]). Among the CDM particles, all in line with the above paradigm, we must stress the prominent role taken by the one that the (much favoured) Supersymmetry theory has inside its corpus: i.e., the Neutralino. Choosing this particle also brings the bonus of explaining, in one shot, the existence of the DM particle and its relic density, while addressing the presumed "naturalness problem" of the Standard Model. It is well known that the recognised beauty of this theory has been the main motivation for searching for the related particle by means of numerous observational and experimental programs of Fundamental Physics. In this scenario, the density perturbations evolve through a series of halo mergings from the smallest to the biggest in mass, the final state being a matrioska of halos with smaller halos inside bigger ones.
Very distinctively, these dark halos show a universal spherical spatial density [8]: \[\rho_{NFW}(r)=\frac{\rho_{s}}{\left(r/r_{s}\right)\left(1+r/r_{s}\right)^{2}} \tag{7}\] where \(r_{s}\) is a characteristic inner radius, and \(\rho_{s}\) the related density. Notably, this scenario confirms its beauty and turns out to be extremely falsifiable, since in all the Universe and throughout its history, the related dark component has to create structures with the same configuration. Now, the well known situation is that no such dark particle has been detected in the past 30 years. This has occurred in experiments at underground laboratories, searching for the soft scattering of these particles off particular nuclei; in particle collisions at the LHC collider, with a general search for Supersymmetric partners or more exotic invisible particles to be seen as missing momentum of unbalanced collision events; in measurements at space observatories, such as gamma rays coming from dense regions of the Universe where the dark particle should annihilate with its antiparticle (see e.g., [9; 10]). Furthermore, the current limits on the energy scale of SuSy, as indicated by LHC experiments, rule out the Neutralino as the DM particle. Nevertheless, a WIMP particle from Effective Field Theories, outside the SuSy environment, can still be proposed for detection experiments. It is important to notice that, in the attempts made so far, only WIMP particles have been thoroughly searched for. The search for particles related to other DM scenarios has been very limited and almost no blind search has been performed. Thus, the lack of detection of the dark particle so far in no way indicates that it does not exist, but just indicates the failure of the detection strategies related to particular scenarios. In recent years, at different cosmological scales, observational evidence in strong tension with the above scenario has emerged (e.g., [11; 12]). Here, we focus on the distribution of dark matter in galaxies, a topic for which the failure of the \(\Lambda\)CDM scenario is the most eventful and striking [13]. Dark Matter is, in fact, located mostly in galaxies that come with very large ranges of total masses, luminosities, sizes, dynamical state and morphologies. While each of them is a laboratory for the new physics, the diversity of the properties of their luminous components is an asset for the investigation of their dark components. ## 3 The Cored DM Halos The rotation curves of disk systems are well measured from the Doppler measurements of the H\({}_{\alpha}\) and the 21 cm galaxy emission lines. In many cases they extend well beyond the stellar disk edge and, in some cases, out to 20% of the dark halo size. In the outermost regions of the dark halos, devoid of rotating stars and HI gas, we have other useful tracers of the galaxy mass profile; the latter is available, from the galaxy center to the edge of the dark matter halo, for a sufficient number of disk systems [14]. By investigating several thousand RCs covering: (a) all the _morphologies_ of the disk systems: normal spirals, dwarf irregulars and low surface brightness galaxies and (b) all the values of their _magnitudes_ from the faintest to the most luminous objects, one finds that the RCs combine into a Universal Rotation Curve (see [5]) defined from the center of the galaxies out to the edge of the dark matter halos.
Specifically, for the disk systems of the local Universe, in the pipeline set to retrieve the galaxy dark and luminous mass distributions from their circular velocities \(V(R)\), the great majority of the latter can be represented by a unique function \(V_{URC}(R/R_{D},Mag,C,T)\), where, for each object, \(R_{D}\) is the disk length scale of Equation (2), \(Mag\) is the magnitude, \(C\) indicates how compact its distribution of light is and \(T\) the Hubble morphology (see Figures 4 and 5 in the pioneering work by Rubin and collaborators [15] and the subsequent URC series of papers [14; 16; 17; 18; 19; 20; 21]). Individual features in the RCs are sometimes present, but they originate from physical phenomena (such as non-circular motions, non-exponential stellar disks, the presence of (small) bulges and bars, etc.) that are not directly related to the DM phenomenon and get mostly damped out by the stacking procedure. \(V_{coadd}(R/R_{D},P_{i})\) and \(\delta V_{coadd}(R/R_{D},P_{i})\), the coadded velocity data and their r.m.s. (the points with errorbars in Figure 2), are obtained by stacking, with a proper procedure, a large number of individual RCs in bins of the observed quantity(ies) \(P_{i}\) (e.g., \(Mag\) and \(T\)). \(V_{URC}(R/R_{D},P_{i})\), the ensemble of solid lines in Figure 2, is an analytical function found to fit the \(V_{coadd}\) data (see [18]). Let us stress that the \(V_{coadd}\) and their r.m.s. \(\delta V_{coadd}\) are crucial kinematical quantities; first, since \(\delta V_{coadd}\ll V_{coadd}\), the former provide us with excellent _templates_ for a very large majority of the _individual_ RCs of the disk systems. Furthermore, the analytical function \(V_{URC}\), derived from the \(V_{coadd}\), allows one to interpret this set of data in terms of a universal mass model. Remarkably, all the identifying quantities of the RCs (i.e., \(Mag,T,R_{D}\)) belong to the _stellar_ component of the galaxies despite the fact that the _dark_ component dominates the mass distribution. This is a first indication of a _direct coupling_ between the dark and luminous components. The proposed mass model features the following two components: the above stellar disk with mass \(M_{D}\) as a free parameter and a dark halo with the Burkert density profile [22]: \[\rho_{B}(r)=\frac{\rho_{0}}{(1+r/r_{0})(1+(r/r_{0})^{2})} \tag{8}\] This profile has 2 free parameters like the NFW profile (but with different physical meanings): the central density \(\rho_{0}\) and the core radius \(r_{0}\) that marks the edge of the region in which the DM density is roughly constant. The stellar disk + Burkert halo model reproduces well the coadded RCs [17; 18; 19; 20; 21; 22; 23; 24] and also individual RCs of disk galaxies (see also [5]), and it is dubbed the URC model.

Figure 2: Stacking of 1000 individual RCs in 3 typical luminosity bins. The coadded curves \(V(R/R_{opt})/V(R_{opt})\) (points with error bars) are fitted with the URC model (solid line) which includes a cored DM halo (dashed line) and a Freeman Disk (dotted line) (see [18; 22]).

Notably, for the Burkert and any other halo density profile with a core of size \(a\), we have: \[\nabla_{h}=\kappa\;a/R_{D} \tag{9}\] with \(\kappa\) a constant depending on the density profile. Its success highlights the failure of the NFW halo + stellar disk mass model in reproducing the _coadded_ RCs [25], as well as (almost) the totality of the available high quality _individual_ RCs (e.g., [26; 27; 28; 29; 30; 31; 32; 33; 34; 35]).
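The qualitative difference between the two halo profiles is easy to see numerically; here is a brief sketch of ours (with arbitrary unit normalizations and scale radii) contrasting the NFW cusp of Eq. (7) with the Burkert core of Eq. (8):

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=10.0):      # Eq. (7)
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def rho_burkert(r, rho0=1.0, r0=10.0):    # Eq. (8)
    x = r / r0
    return rho0 / ((1.0 + x) * (1.0 + x**2))

for r in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"r = {r:6.2f}: NFW/Burkert = {rho_nfw(r) / rho_burkert(r):9.2f}")
# NFW diverges as 1/r toward the center (a cusp), while Burkert flattens to
# rho0 (a core); at r >> r_s both profiles fall off as r^-3.
```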
Such a failure is very serious in that one often finds, for the NFW halo + Freeman disk mass model, not only bad fits, but also implausible best-fitting values for the masses of the stellar disk and of the dark halo and for the two structural parameters of the NFW halo (see, e.g., [25]). This raises strong doubts about the collisionless status itself of the DM particles in galaxies, a fundamental aspect of the \(\Lambda\)CDM scenario. Furthermore, at radii \(r\gg r_{0}\), the density profile of the dark matter halos of disk galaxies falls back to that of the collisionless particles [14] (see Figure 3). These facts fit well with the above observational scenario: in the outermost regions of halos, the luminous and dark matter are so rarefied that, in the past 10 Gyr, they have had no time to interact appreciably among themselves, even if this were physically allowed. Thus, on the scale of the halo's virial radius, the standard physics of galaxy formation is not in tension with the observed distribution of dark matter. By contrast, on the scales of the distribution of the luminous component, observations imply that the DM halo density has undergone a significant and not yet well understood evolution over the Hubble time (see also [36]). The mass distribution of a disk galaxy is described, in principle, by three parameters: one belonging to the luminous world and two to the dark one, representing structural quantities not existing in the standard \(\Lambda\)CDM scenario. In disk galaxies a further extraordinary observational evidence emerges: the three parameters \(r_{0}\), \(\rho_{0}\) and \(M_{\rm D}\) turn out to be well correlated among themselves (see Figure 4, [14] and Figure 11 in [5]), which poses the basis of the URC model. It is important to realize that the above correlations cannot occur in the standard \(\Lambda\)CDM scenario, and should thus be a crucial subject of investigation. In the next section, therefore, we focus on this evidence and on the resulting structural physical properties of disk galaxies. Figure 3: The density of the DM halos today (blue) and the (extrapolated) primordial one (red) as a function of radius and halo mass (from [14]). The agreement of the two density profiles at outer radii reveals a time evolution of the density of the central regions of the DM halo. Log-units: kpc, g/cm\({}^{3}\), \(10^{11}\) M\({}_{\odot}\). ## 4 Unexpected Relationships Let us first remark that the properties of the internal structure of the disk galaxies, at the basis of this work, have been discovered and independently confirmed in a series of works since 1991, to which we direct the reader for further information.[5] In the present work, we adopt them as the motivation for originally proposing a paradigm shift in how we shall investigate the dark matter mystery. ### Central Halo Surface Density The quantity: \[\Sigma_{0}\equiv\rho_{0}r_{0}\] i.e., the central surface density of the DM halo, is found to be constant in objects of any magnitude and disk morphology (see Figure 5 and [5; 22; 33; 34; 37; 38; 39]): Figure 4: The relationship linking the DM and LM structural parameters \(\rho_{0},r_{0},M_{D}\) (see [14]). Log-units: M\({}_{\odot}\), kpc, g/cm\({}^{3}\). Figure 5: The dark halo central surface density \(\Sigma_{0}\) as a function of the reference velocity \(V_{opt}\) in disk systems and in the giant elliptical M87 (from [19]).
\[\text{Log}\ \frac{\Sigma_{0}}{\text{M}_{\odot}\text{pc}^{-2}}=2.2\pm 0.25 \tag{10}\] This means that \(\rho_{0}\), the value of the DM halo density at the center of the galaxy, is inversely proportional to the size \(r_{0}\) of the region in which the density is about constant. This seems to imply that the dark particle possesses some form of self-interaction of unspecified nature. ### DM Core Radii Vs. Disk Length Scales Amazingly, since the pioneering work of [40],\({}^{6}\) the core radius \(r_{0}\) is found to tightly correlate with the stellar disk scale length \(R_{D}\) [18; 19; 20; 23; 41], \[\text{Log}\ r_{0}=(1.38\pm 0.15)\ \text{Log}\ R_{D}+0.47\pm 0.03 \tag{11}\] see Figure 6. This relationship, initially found in Spirals, has also emerged in LSBs, Dwarf Irregulars and in the giant elliptical M87 (see Figure 6). Overall, it extends to objects whose luminosities span over five orders of magnitude. Then, the size of the region in which the DM density does not change (much) with radius is found to be related to the size of the stellar disk \(R_{D}\). It is very difficult to understand such a tight correlation between very different quantities without postulating that dark and luminous matter are able to interact more directly than via the gravitational force. ### Stellar Disks vs. DM Halos Compactness A similar mysterious entanglement emerges also from the evidence that, in galaxies with the _same_ stellar disk mass, the more compact the stellar disk, i.e., the larger the value of \(C_{\star}=M_{D}/R_{D}^{2}\), the more compact the 2-D DM density projected on the core region, i.e., the larger the value of \(C_{DM}=M_{h}(r_{0})/r_{0}^{2}\) (see Figure 7, details in [19; 20]). More globally, the stellar and the DM surface density, once they are estimated inside \(r_{0}\), are found to be proportional [42]. Again, the dark and luminous worlds seem to have communicated in an unknown language. Figure 6: DM halo core radius \(r_{0}\) (see Equation (8)) vs. the stellar disk length scale \(R_{D}\) in Spirals, Dwarf Disks, Low Surface Brightness galaxies and the giant cD galaxy M87 (from [23]). ### Total Vs. Baryonic Radial Accelerations Even without assuming a priori the presence of a dark halo in galaxies, the dark halo emerges and shows a mysterious entanglement with the baryonic component. One can for instance consider \(V^{2}(y)/y\equiv g\), the radial acceleration of a point mass in rotational equilibrium at a distance \(y\) from the center of a disk galaxy, and \(V_{b}^{2}(y)/y\equiv g_{b}\), its baryonic (stellar) component. In spiral galaxies we find \(g(y)>g_{b}(y)\), which calls for a dark component, but also \(g=g(g_{b})\): the two accelerations are thus related quite tightly [43]. Including in the analysis also dwarf Irregulars and Low Surface Brightness galaxies, the above relation gains another parameter, the radius \(y\equiv R/R_{D}\).[7] The points with coordinates \((g,g_{b},y)\) are found to be very well reproduced by a smooth surface \(\log g=\tilde{g}(\log g_{b},y)\) (see Figure 8). More specifically, in all galaxies and at all radii, the individual points lie distant from the average relationship by not more than 0.04 dex [44].
In a pure collisionless scenario, the origin of this thin surface, built by a fine tuning of dark and luminous quantities, is extremely difficult to understand. ### The Crucial Role of \(r_{0}\) The relationships above indicate the quantity \(r_{0}\) as the radius of the region inside which the DM-LM interaction takes or has taken place. Let us show further direct support for this identification. In the self-annihilating DM scenario, the number of interactions per unit time depends on the DM halo density as \(K_{SA}(R)=\rho_{DM}^{2}(R)\); in analogy, in the scenario featuring DM-baryon interactions (absorption and/or scattering), we focus on the quantity \(K_{C}(R)\equiv\rho_{DM}(R)\rho_{\star}(R)\), which has no physical role in a collisionless DM particle scenario. From the above URC mass model we get a striking relation when evaluating \(K_{C}\) at \(r_{0}\): \[K_{C}(r_{0})\simeq\ const=10^{-47.5\pm 0.3}\,\mathrm{g^{2}\,cm^{-6}} \tag{12}\] Figure 7: The compactness of the stellar disks vs. the compactness of DM halos, in units of their average values, in two different samples of galaxies (see [19] for details). Impressively, we see in Figure 9 that the kernel \(K_{C}(R)\), at any given physical radius \(R\), varies widely (i) among galaxies of different mass, and (ii) in each galaxy, at different radii. Figure 8: The amazing relationship in dwarf (dark blue) and LSB (blue) galaxies among (1) the total and (2) the baryonic acceleration (both evaluated at the radius \(y\equiv R/R_{D}\)) and (3) \(y\). Points represent the values derived from the RCs (see details in [44]). Figure 9: The kernel \(\rho_{DM}(R)\rho_{LM}(R)\) as a function of radius and halo virial mass (yellow). In all objects the value of the latter, at the boundary of the constant density region \(r_{0}\), lies inside the two red planes. Also shown is \(\rho_{DM}(R)^{2}\) (blue), relative to dark particle annihilation. Units: log M\({}_{\odot}\), kpc, log (g\({}^{2}\) cm\({}^{-6}\)) (from [13]). Instead, at \(R\simeq r_{0}\), and only there, this quantity takes the same value in all galaxies. In the scenario of interacting dark matter, this clearly suggests the radius \(r_{0}\) as the edge of the region inside which interactions between dark matter particles and Standard Model particles have taken place so far, flattening the original halo cusp. Let us notice that, at small scales, there is further observational evidence that cannot be framed within a scenario featuring a _collisionless_ and _simple_ dark particle [45]. Furthermore, in the \(\Lambda\)CDM scenario, at large scales and at high \(z\), tensions of different types exist (e.g., [11]). ### Discussion Dark Matter particles have been originally envisioned with the crucial characteristic of interacting with the rest of the Universe essentially only by Gravity. However, once we set ourselves in such a framework, we realize that the observed properties of the mass distribution in galaxies do not make much sense. Another interaction has to be considered. Remarkably, this interaction causes no effect on the structure of the galaxy dark halos on the time scale of their free fall, the one governing the WIMP particles. It acts within a timescale as long as the age of the Universe, by slowly modifying the dark halo density distribution. Before proceeding, it is worth discussing the possibility of a coexistence between the \(\Lambda\)CDM scenario and the above observational evidence.
The best chance for this to work seems to be an astrophysical effect leading to the formation of the DM halo cores, via a global feedback created by explosions of galactic supernovae (e.g., [46]). We stress that, for this process and for any other with the same aim, the most serious trouble is not the efficiency of the core-forming process, but the ability to build up from scratch the above very complex and fine-tuned observational scenario. In addition, there are also specific issues affecting the core-forming role of the baryonic feedback. According to the latter, in objects with the same stellar mass, one should find that the _more compact_ the stellar distribution (and consequently the _more efficient_ the process of removing the DM particles from the original cusped halo by a greater number of supernova explosions), the _less compact_ the DM halo should be. This is in strong disagreement with present-day observations (see Figure 7). Similarly, LSB galaxies, where the number of supernovae per unit area has been _much smaller_ than in normal spirals, are instead found to possess a DM core of size _larger_ than that of the Spirals of the same mass (see Figure 6). Finally, we detect Dark Matter cores also in dwarf spirals [20], in giant LSBs [19] and in ellipticals [23], i.e., in situations where the SN explosions have been too few or where the gravitational potential is too strong to allow for a baryonic feedback flattening the primordial cusps (see e.g., [47]). As a result, the idea of bringing observations in line with the standard DM scenario of collisionless particles via astrophysical processes seems to have essential problems. ## 5 A New Paradigm The impact of the above observational scenario goes beyond the evidence of its tension with the \(\Lambda\)CDM WIMP theoretical scenario. In fact, the disagreement between the two scenarios is so strong and so deep that we are led to think that it can rule out the Apollonian paradigm itself (from which the \(\Lambda\)CDM scenario has emerged). The very defining criteria (1)-(5) of the paradigm appear unable to account for the above observational evidence. Thus, the spectacular DM-LM entanglement found in galaxies, allied with the fact that the WIMP particle has escaped detection, becomes a strong motivation for demanding a shift of the Paradigm that we shall follow to approach the dark matter Phenomenon and determine the nature of the dark particle. Reflecting upon the failure of the current paradigm, we realize that it originates from the fact that it forces any scenario created to explain the DMP under its rule to have built in a direct positive correlation between truth and beauty. On the contrary, the observational properties of the dark and luminous matter in galaxies seem to favor scenarios which may appear "ugly". Indeed, the observational relationships found and the galaxy properties seem to indicate that the (proper) theoretical scenario for the DMP may have a large number of free parameters, a limited predictive power, no obvious connection with known Physics or with _expected_ new Physics, including the currently open issues in Fundamental Theoretical Physics. Then, the true scenario could likely be at odds with the entire Apollonian paradigm. In other words, we need a new Paradigm that opens the door to "ugliness", thus allowing scenarios for the DMP that are forbidden by the current Apollonian Paradigm. Many philosophers have expressed their interest in situations like this; most notably, F.
Nietzsche [48] was obsessed with the concepts of beauty and ugliness in relation to those of truth and falsity, so we name the proposed new Paradigm after him. Thus, we claim that, in order to formulate the correct scenario for the DMP, we need to abandon both the \(\Lambda\)CDM _scenario_ and its generating Apollonian _paradigm_ and to adopt the newly proposed Nietzschean _paradigm_. This new paradigm: (i) _values_ and (ii) _protects_ from negative biases any theoretical scenario for the DMP that emerges from observations, even if it appears exotic, complex or full of mysterious entanglements. Then, it directs our investigations according to the following loop: reverse-engineering the available observations leads us to a DM scenario that gets tested by a _new_ set of especially selected observations. Reverse-engineering the old and the new observations then improves the scenario. The paradigm affirms that, after some iterations, the actual scenario for the DMP will emerge and reveal, at the same time, the nature of the dark particle. Before proceeding, let us stress that the proposed paradigm shift is not a straightforward and painless step. In fact, the old paradigm has created the \(\Lambda\)CDM scenario, which has a number of clear advantages: * The underlying Physics is rather simple and at the same time is connected with new Physics in the fields of Cosmology and Elementary Particles. * When it is adopted, the initial conditions and the theoretical framework at the basis of any new investigation are well-established. * It has a clear agenda for the investigation of the dark matter mystery, already in use in the scientific community and fostering a global spirit of research. * It connects "state of the art" computer simulations, observations and experiments. Therefore, to abandon the Apollonian paradigm and, in turn, the generated \(\Lambda\)CDM scenario has important consequences for the investigation of the DM phenomenon. In fact, we do not yet have a scenario ready to take the role that the \(\Lambda\)CDM scenario has played so far. More specifically: from the observational evidence collected so far, we can definitely argue that the true scenario behind the DMP will turn out to be much more complicated, complex in its background physics and less able to take advantage of computer simulations than the current \(\Lambda\)CDM scenario. Moreover, very likely, no other future scenario will profit from the united effort of the large majority of cosmologists, as happened for the \(\Lambda\)CDM one. Given this, it is not possible to sneak away from the \(\Lambda\)CDM scenario to some other scenario without performing a deep rethinking that also involves the generating Paradigm. Summarizing, we propose a new Paradigm according to which the search for the true DMP scenario can violate and/or go beyond the five points in Section 2, but, on the other hand, must reverse-engineer the available observational and experimental data. ## 6 Uncharted Territories? We complete the goal of calling for a DM paradigm switch by showing that, effectively, the new paradigm outlined above is able to provide us with promising scenarios. Within this, the first relevant observation to be made is that all the correlations emerging between luminous and dark parameters appear to be essentially a manifestation of some (new) physics taking place at galactic scales, as is clear in the outstanding issue of the formation of galactic cores.
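The empirical relations of Section 4 can be wired together numerically. The sketch below, a toy illustration with hypothetical inputs, derives the Burkert parameters of Equation (8) from Equation (11) and the constant surface density of Equation (10), and then evaluates the interaction kernel \(K_{C}(R)=\rho_{DM}(R)\rho_{\star}(R)\) of Section 4.5 for an assumed exponential stellar disk (the scale height \(z_{0}\) is an assumed value, not taken from the URC fits). The crude estimate reproduces the strong radial variation of \(K_{C}\) discussed above, not the precise fitted constant of Equation (12).

```python
import numpy as np

MSUN_PC3_TO_G_CM3 = 6.77e-23            # 1 Msun/pc^3 in g/cm^3

def halo_from_disk(R_D_kpc):
    """Burkert (rho0, r0) from Eq. (11) and the constant Sigma0 of Eq. (10)."""
    r0 = 10 ** (1.38 * np.log10(R_D_kpc) + 0.47)      # kpc
    rho0 = 10 ** 2.2 / (r0 * 1e3)                     # Msun/pc^3, from Sigma0 = rho0 * r0
    return rho0, r0

def rho_burkert(R_kpc, rho0, r0):
    x = R_kpc / r0
    return rho0 / ((1 + x) * (1 + x ** 2))            # Eq. (8), Msun/pc^3

def rho_star_midplane(R_kpc, M_D, R_D_kpc, z0_pc=300.0):
    """Exponential-disk midplane density; z0 is an assumed scale height."""
    sigma = M_D / (2 * np.pi * (R_D_kpc * 1e3) ** 2) * np.exp(-R_kpc / R_D_kpc)
    return sigma / (2 * z0_pc)                        # Msun/pc^3

# Hypothetical spiral: M_D = 5e10 Msun, R_D = 3 kpc
M_D, R_D = 5.0e10, 3.0
rho0, r0 = halo_from_disk(R_D)
print(f"r0 = {r0:.1f} kpc, rho0 = {rho0:.2e} Msun/pc^3")
for R in (0.3 * r0, r0, 3.0 * r0):
    kc = (rho_burkert(R, rho0, r0) * MSUN_PC3_TO_G_CM3) * \
         (rho_star_midplane(R, M_D, R_D) * MSUN_PC3_TO_G_CM3)
    # K_C drops by several dex from 0.3 r0 to 3 r0, as in Figure 9
    print(f"R/r0 = {R/r0:3.1f}: log10 K_C = {np.log10(kc):6.1f}  [g^2 cm^-6]")
```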
In the search for the true DM scenario it is intriguing that, within the new Nietzschean paradigm, we are allowed to speculate that the detected dark-luminous relationships are just the consequence of a non-standard interaction between DM and baryons and, above all, to proceed by neglecting the constraints (1)-(5), whose obedience has so far limited the birth and growth of scenarios alternative to the standard DM one. More specifically, we can start to consider the following scenarios: * A scenario in which baryon-only physics, in various forms of feedback, is able, by means of a (likely complex) energy release, to modify the DM distribution in galaxies. If it includes collisionless dark particles, it has clear difficulties in accounting for the DM-DM and DM-baryon relations described above; however, these difficulties are likely to disappear if we postulate _also_ the presence of a _proper_ SM particle-DM particle interaction. * A scenario in which a new direct Baryon-DM interaction is responsible for the core formation. A simple estimate, assuming a total dark matter core transmutation, leads to a quite high value for the relative cross section, \(\sim 10^{-25}\)-\(10^{-24}\) cm\({}^{2}\)\(\sim\) 0.1-1 barn. This might be considered not realistic, but let us note that if we consider the _dynamical_ evolution of the dark halo particles, this may help to reach an adequate transfer of energy from the LM to the DM component also with a much smaller interaction cross section. * A scenario featuring a DM-DM interaction, whose existence and whose cross section derive from the detection (in galaxies) of a roughly constant value \(\Sigma_{0}\simeq 100\,M_{\odot}/\)pc\({}^{2}\) for the DM surface density inside the core radius \(r_{0}\). This leads to a quite large cross section of \(\sigma/m\simeq 1\) cm\({}^{2}\)/g \(\simeq 1\) barn/GeV (note that 1 barn \(=10^{-24}\) cm\({}^{2}\) and 1 GeV\(/c^{2}\simeq 1.8\times 10^{-24}\) g, so 1 barn/GeV \(\simeq 0.6\) cm\({}^{2}\)/g), but again, a proper treatment of the evolution of the dark matter halos at short scales could reveal that also smaller cross sections are effective in core formation, turning on some gravitational energy transfer between the dark and the luminous components. * A scenario in which the core-forming dark-luminous interactions occur (on a time scale of 10 Gyr) inside or at the surface of bound objects like individual or binary stars, white dwarfs, BHs of any mass and their accretion disks, planets and their atmospheres and supernovae expanding shells, i.e., in realistic places that, however, have not been theoretically and observationally explored so far. * The scenario featuring a WIMP particle + baryonic feedback can likely come back into play if inserted in a modified gravity framework. ## 7 Conclusions Here, we have motivated our proposal according to which, in the investigation of the complex and entangled world of the phenomenon of Dark Matter in galaxies, we take a new and tailored approach. In detail, we advocate for a paradigm according to which, after abandoning the failing \(\Lambda\)CDM scenario, we must be poised to search for scenarios without requiring that: (a) they naturally come from (known) "first principles"; (b) they obey Occam's razor; (c) they have the bonus of leading us towards the solution of presently open big issues of fundamental Physics.
On the other side, the proper search shall: (i) give precedence to observations and experimental results wherever they may lead; (ii) consider the possibility that the Physics behind the Dark Matter phenomenon is disconnected from the Physics we know and does not comply with the usual canons of beauty. Finally, as regards the impact of this work on the scientific community, it is irrelevant whether such a search is undertaken to follow the proposed paradigm shift or as a consequence of a more agnostic approach regarding any paradigm for the DMP. Notes \(\star\) and \(h\) refer to the disk and the halo component. Let us define: HI = neutral hydrogen. DM = Dark Matter. DMP = Dark Matter Phenomenon. RC = Rotation Curve. SM = Standard Model of elementary particles. LHC = Large Hadron Collider (CERN). \(\Lambda\)CDM = Lambda CDM cosmological model. WIMP = Weakly Interacting Massive Particle. Apollonian (philosophy) = ideas from the famous Greek school of philosophy. Nietzschean (philosophy) = (some) ideas from the German philosopher. * For simplicity, we neglect here the small contribution of the HI gaseous disk. * The DMP is the ensemble of all the available cosmological and astrophysical observations which turn out not to exist in a Universe made of only SM particles. * References in this work and in the review [5]. * [6] See their Equation (9a) in combination with Equation (8) above. * [7] Notice that the new parameter is not the expected physical galactocentric radius \(R\), but this quantity _normalized_ to the length scale of the galaxy stellar disk \(\propto R_{D}\).
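As a closing technical note to Section 3, here is a minimal sketch of the URC mass model structure: a Freeman exponential disk plus a Burkert halo (Equation (8)), assuming the standard thin-disk Bessel-function rotation formula and the analytic Burkert enclosed mass. All parameter values are hypothetical fiducials (roughly consistent with Equations (10) and (11) for \(R_{D}=3\) kpc), not fits to the coadded curves of Figure 2.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def v2_disk(R, M_D, R_D):
    """Freeman thin exponential disk: V^2 = (2 G M_D / R_D) y^2 [I0K0 - I1K1], y = R/(2 R_D)."""
    y = R / (2.0 * R_D)
    return 2.0 * G * M_D / R_D * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

def m_burkert(r, rho0, r0):
    """Mass enclosed by the Burkert profile of Equation (8)."""
    x = r / r0
    return 2.0 * np.pi * rho0 * r0**3 * (np.log(1 + x) + 0.5 * np.log(1 + x**2) - np.arctan(x))

def v2_halo(R, rho0, r0):
    return G * m_burkert(R, rho0, r0) / R

# Hypothetical fiducial parameters (illustrative only, not a fit to real data):
M_D, R_D = 5.0e10, 3.0          # Msun, kpc
rho0, r0 = 1.2e7, 13.0          # Msun/kpc^3, kpc

for R in np.array([0.5, 1, 2, 3, 4, 6]) * R_D:
    v = np.sqrt(v2_disk(R, M_D, R_D) + v2_halo(R, rho0, r0))
    print(f"R = {R:5.1f} kpc   V (disk + cored halo) = {v:6.1f} km/s")
```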
2310.17125
The production of charmonium pentaquark from b-baryon and B-meson decay: SU(3) analysis
In this paper, we study the production of charmonium pentaquarks $c \bar c q q q$ from bottom baryon and B-meson decays under the flavor SU(3) symmetry. Decay amplitudes for various processes are parametrized in terms of the SU(3) irreducible nonperturbative amplitudes. A number of relations between decay widths have been deduced. Moreover, the strong decays of the pentaquarks are also taken into account. These results can be tested in future measurements at LHCb, Belle II and CEPC. Once a few decay branching fractions have been measured, our work could provide hints for exploring new decay channels or new pentaquark states.
Wei-Hao Han, Ye Xing, Ji Xu
2023-10-26T03:52:37Z
http://arxiv.org/abs/2310.17125v2
# The production of charmonium pentaquark from b-baryon and B-meson decay: SU(3) analysis ###### Abstract In this paper, we study the production of charmonium pentaquarks \(c\bar{c}qqq\) from bottom baryon and B-meson decays under the flavor SU(3) symmetry. Decay amplitudes for various processes are parametrized in terms of the SU(3) irreducible nonperturbative amplitudes. A number of relations between decay widths have been deduced. Moreover, the strong decays of the pentaquarks are also taken into account. These results can be tested in future measurements at LHCb, Belle II and CEPC. Once a few decay branching fractions have been measured, our work could provide hints for exploring new decay channels or new pentaquark states. ## I Introduction In 2015, the observation of \(J/\psi\,p\) resonances consistent with charmonium pentaquark states in \(\Lambda_{b}^{0}\to J/\psi K^{-}p\) decays was reported by the LHCb Collaboration [1]. In practice, states that decay into \(J/\psi\,p\) may have distinctive signatures [2]; their minimal quark content can be identified as \(c\bar{c}uud\), and thus they are charmonium pentaquarks. Although the existence of pentaquarks, which are composed of four quarks and an antiquark, has been predicted since the establishment of the quark model [3; 4; 5], the experimental search has taken a pretty long time. Such new particles dramatically changed our understanding of exotic states, which cannot be included in the conventional quark-antiquark and three-quark schemes of standard spectroscopy. These charmonium pentaquarks are labeled as \(P_{c}\); they carry an electric charge and couple to charmonium. In addition, they are the first exotic states observed in the heavy-flavor baryonic sector. Subsequently, a series of pentaquark candidates were reported. In 2019, the LHCb Collaboration updated their analysis of \(\Lambda_{b}^{0}\to J/\psi K^{-}p\) and found a new state, \(P_{c}(4312)\) [6]. In 2020, a new structure in the \(J/\psi\Lambda\) invariant mass distribution, consistent with a charmonium-like pentaquark with strangeness, \(P_{cs}(4459)\), was obtained from an amplitude analysis of \(\Xi_{b}^{-}\to J/\psi\Lambda K^{-}\) decays [7]. In 2022, evidence for a charmonium pentaquark \(P_{c}(4337)\) in the \(J/\psi\,p\) and \(J/\psi\,\bar{p}\) systems was found in \(B_{s}^{0}\to J/\psi\,p\,\bar{p}\) decays [8]. In 2023, an amplitude analysis of \(B^{-}\to J/\psi\Lambda\,\overline{p}\) was performed, and a narrow resonance in the \(J/\psi\Lambda\) system, consistent with a pentaquark candidate with strangeness, was observed [9]. It seems that we will experience a new era with more and more such exotic states observed in the near future; therefore it is of prime importance to understand the sub-structure of these pentaquarks as well as to provide information for exploring new pentaquark states in experiment. This experimental progress has made a great impact on hadron spectroscopy and evoked a lot of theoretical interest. Proposed interpretations of pentaquarks include the compact pentaquark scenarios [10; 11; 12; 13; 14], the molecular models [15; 16; 17; 18; 19], the hadrocharmonium model [20; 21], or peaks due to triangle-diagram processes [22; 23; 24]. Besides, there are also studies on the properties of other pentaquark candidates with different quark components [25; 26; 27; 28; 29].
Despite these encouraging results in the literature, we should stress here that the precise structures of pentaquarks remain unknown; there is no consensus as to how the five quarks, i.e., the four quarks and an antiquark, are dynamically structured. At this moment, both the experimental and theoretical studies are not yet conclusive. It is widely recognized that, to disentangle the various models and further understand the nature of charmonium pentaquarks, searches for additional production and decay channels are crucial [30]. Unlike baryonic \(\Lambda_{b}^{0}\) and \(\Xi_{b}^{-}\) decays, which receive a relatively large contribution from intermediate excited resonances, no conventional states are expected to be produced in the \(B_{s}^{0}\) decay, thus offering us a clean environment to search for new pentaquarks [31]. However, whether through baryonic decay or mesonic decay, calculating the decay amplitudes of these transitions is a formidable challenge; no factorization approach has been established to handle production processes of \(P_{c}\). On the other hand, flavor SU(3) symmetry can be used to relate various relevant decays and provide a very useful guide for the future pentaquark searches. One significant advantage of the SU(3) analysis is that it is independent of the factorization details, allowing us to relate various decay modes despite the unknown nonperturbative dynamics of QCD [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. Certain theoretical models predict that some of these charmonium pentaquarks belong to an octet multiplet of flavor SU(3) [55; 56]; thus finding the other states in the multiplet will provide key evidence for these models. In this work, we consider the production of charmonium pentaquarks from b-baryon and B-meson decays by utilizing flavor SU(3) analysis. Some testable relations for b-baryon decays into a pentaquark plus a light meson and B-meson decays into a pentaquark plus a light baryon are presented. Afterwards, the strong decays of the charmonium pentaquarks will be discussed as well. Some particular processes can be used as signatures to reconstruct pentaquarks. The main motivation of this work is to provide suggestions which may help experimentalists find new \(P_{c}\) states or new production and decay modes of the already observed \(P_{c}\). The rest of this paper is organized as follows. In Sec. II, we will collect the irreducible forms for the particle multiplets in the SU(3) symmetry. In Sec. III, we will analyze the nonleptonic decays of b-baryons and B-mesons. The strong decays of charmonium pentaquarks are investigated in Sec. IV. Finally, we conclude in Sec. V. ## II Particle Multiplets In this section, we will collect the representations for the hadron multiplets involved in our work. Under the flavor SU(3) symmetry, the \(b\) quark is a singlet and the light quark \(q\) belongs to the fundamental representation 3. Thus the b-baryons form an antitriplet and a sextet in the SU(3) space, which are denoted as \(\mathcal{B}\) and \(\mathcal{C}\): \[(\mathcal{B})^{ij} =\left(\begin{array}{ccc}0&\Lambda_{b}^{0}&\Xi_{b}^{0}\\ -\Lambda_{b}^{0}&0&\Xi_{b}^{-}\\ -\Xi_{b}^{0}&-\Xi_{b}^{-}&0\end{array}\right)\,,\] \[(\mathcal{C})^{ij} =\left(\begin{array}{ccc}\Sigma_{b}^{+}&\frac{\Sigma_{b}^{0}}{\sqrt{2}}&\frac{\Xi_{b}^{\prime 0}}{\sqrt{2}}\\ \frac{\Sigma_{b}^{0}}{\sqrt{2}}&\Sigma_{b}^{-}&\frac{\Xi_{b}^{\prime-}}{\sqrt{2}}\\ \frac{\Xi_{b}^{\prime 0}}{\sqrt{2}}&\frac{\Xi_{b}^{\prime-}}{\sqrt{2}}&\Omega_{b}^{-}\end{array}\right)\,.
\tag{1}\] The bottom mesons form an SU(3) antitriplet: \[B_{i}=\left(\begin{array}{ccc}B^{-},&\overline{B}^{0},&\overline{B}_{s}^{0}\end{array}\right)\,. \tag{2}\] The charmonium pentaquarks discussed in this work contain at least three light quarks in addition to a \(c\bar{c}\) pair, i.e., \([c\bar{c}qqq]\). Under the flavor SU(3) symmetry, the heavy quarks are singlets, and the light quarks transform as \(3\otimes 3\otimes 3=1+8+8+10\). We denote the octet pentaquark as \[\mathcal{P}_{i}^{j}=\left(\begin{array}{ccc}\frac{P_{\Sigma^{0}}}{\sqrt{2}}+\frac{P_{\Lambda}}{\sqrt{6}}&P_{\Sigma^{+}}&P_{p}\\ P_{\Sigma^{-}}&-\frac{P_{\Sigma^{0}}}{\sqrt{2}}+\frac{P_{\Lambda}}{\sqrt{6}}&P_{n}\\ P_{\Xi^{-}}&P_{\Xi^{0}}&-\frac{2P_{\Lambda}}{\sqrt{6}}\end{array}\right)\,. \tag{3}\] Discovering these pentaquarks in the multiplet is one of the ways to verify the relevant theoretical models. For the meson sector, the light pseudoscalar mesons form an octet: \[(M_{8})_{i}^{j}=\left(\begin{array}{ccc}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta}{\sqrt{6}}&K^{0}\\ K^{-}&\overline{K}^{0}&-\frac{2\eta}{\sqrt{6}}\end{array}\right)\,. \tag{4}\] Here \(\eta\) is only considered as a member of the octet, while the singlet \(\eta_{1}\) is not considered, to avoid the octet-singlet mixing complexity. Light baryons made of three light quarks are presented as: \[T_{8}=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6}}\Lambda^{0}&\Sigma^{+}&p\\ \Sigma^{-}&-\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6}}\Lambda^{0}&n\\ \Xi^{-}&\Xi^{0}&-\sqrt{\frac{2}{3}}\Lambda^{0}\end{array}\right)\,. \tag{5}\] The singly charmed baryons can form an antitriplet or a sextet. In the former case, we have the matrix expression: \[T_{\mathbf{c\bar{3}}}\ =\ \left(\begin{array}{ccc}0&\Lambda_{c}^{+}&\Xi_{c}^{+}\\ -\Lambda_{c}^{+}&0&\Xi_{c}^{0}\\ -\Xi_{c}^{+}&-\Xi_{c}^{0}&0\end{array}\right)\,, \tag{6}\] and in the latter case: \[T_{\mathbf{c6}}\ =\ \left(\begin{array}{ccc}\Sigma_{c}^{++}&\frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}\\ \frac{1}{\sqrt{2}}\Sigma_{c}^{+}&\Sigma_{c}^{0}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}\\ \frac{1}{\sqrt{2}}\Xi_{c}^{\prime+}&\frac{1}{\sqrt{2}}\Xi_{c}^{\prime 0}&\Omega_{c}^{0}\end{array}\right)\,. \tag{7}\] The anticharmed mesons form an SU(3) triplet: \[\overline{D}^{i}=\left(\begin{array}{ccc}\overline{D}^{0},&D^{-},&D_{s}^{-}\end{array}\right)\,. \tag{8}\] Here we also present the best determination of the magnitudes of the CKM matrix elements [57]: \[\left[\begin{array}{ccc}|V_{ud}|&|V_{us}|&|V_{ub}|\\ |V_{cd}|&|V_{cs}|&|V_{cb}|\\ |V_{td}|&|V_{ts}|&|V_{tb}|\end{array}\right]=\] \[\left[\begin{array}{ccc}0.97370\pm 0.00014&0.2245\pm 0.0008&0.00382\pm 0.00024\\ 0.221\pm 0.004&0.987\pm 0.011&0.0410\pm 0.0014\\ 0.0080\pm 0.0003&0.0388\pm 0.0011&1.013\pm 0.030\end{array}\right]\,,\] for the convenience of the subsequent discussions. To describe the various decay modes in the framework of the SU(3) analysis, we need to construct the hadron-level effective Hamiltonian with the representations for the initial and final states listed above. It is worth stressing that a hadron in the final state must be created by its antiparticle field. For instance, we need a \(\overline{P}_{\Lambda}\) field in the Hamiltonian to create a \(P_{\Lambda}\) pentaquark in the final state. The constructions of the hadron-level effective Hamiltonians are displayed in the following sections; they result in strikingly simple relations among the decay amplitudes.
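As a quick consistency check of the conventions above, the short sympy sketch below rebuilds the meson octet of Eq. (4) and verifies that it is traceless, and checks that the quoted first-row CKM magnitudes are close to unitary. Everything here simply restates Eq. (4) and the CKM values quoted above; no new physics input is assumed.

```python
import sympy as sp

# Rebuild the meson octet of Eq. (4) and check Tr(M8) = 0.
pi0, eta, pip, pim, Kp, K0, Km, K0b = sp.symbols("pi0 eta pip pim Kp K0 Km K0b")
M8 = sp.Matrix([
    [pi0/sp.sqrt(2) + eta/sp.sqrt(6), pip, Kp],
    [pim, -pi0/sp.sqrt(2) + eta/sp.sqrt(6), K0],
    [Km, K0b, -2*eta/sp.sqrt(6)],
])
print("Tr(M8) =", sp.simplify(M8.trace()))   # -> 0, as required for an octet

# First-row CKM magnitudes quoted above: |Vud|^2 + |Vus|^2 + |Vub|^2 should be ~1
Vud, Vus, Vub = 0.97370, 0.2245, 0.00382
print("first-row sum =", Vud**2 + Vus**2 + Vub**2)   # -> ~0.9985, close to unity
```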
## III Production of pentaquark from b-baryon and B-meson ### Decays of b-baryon First, we discuss the b-baryon decays into an octet pentaquark and a light meson. The leading-order effective Hamiltonian is given by \[{\cal H}_{\rm w.e.}(b\to qc\bar{c})=\frac{G_{F}}{\sqrt{2}}\bigg{(}V_{cb}V_{cq}^{*}\left(C_{1}O_{1}+C_{2}O_{2}\right)\bigg{)}\,, \tag{10}\] with \[O_{1} = \left(\bar{c}_{\alpha}b_{\beta}\right)_{V-A}\left(\bar{q}_{\beta}c_{\alpha}\right)_{V-A}\,,\] \[O_{2} = \left(\bar{c}_{\alpha}b_{\alpha}\right)_{V-A}\left(\bar{q}_{\beta}c_{\beta}\right)_{V-A}\,, \tag{11}\] where \(q\) can be \(d\) or \(s\). Here \(G_{F}\) and \(V_{ij}\) are the Fermi coupling constant and the CKM matrix elements, respectively. \(O_{i}\) is the low-energy effective operator and \(C_{i}\) is the corresponding Wilson coefficient. We have neglected contributions from penguin diagrams, since they are substantially suppressed compared to the tree diagrams. The operators \(O_{i}\) transform under flavor SU(3) as \(\bar{3}\); the corresponding quark-level transition \(b\to c\bar{c}d/s\) can form an effective vertex \(H_{3}\) with \((H_{3})^{1}=0\), \((H_{3})^{2}=V_{cd}^{*}\) and \((H_{3})^{3}=V_{cs}^{*}\). At the hadron level, for a b-baryon belonging to the antitriplet decaying into an octet pentaquark and a light meson, the effective Hamiltonian is constructed as \[{\cal H}_{\it eff} = a_{1}({\cal B})^{ij}(H_{3})^{m}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{l}_{m} \tag{12}\] \[+a_{2}({\cal B})^{il}(H_{3})^{m}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m}\] \[+a_{3}({\cal B})^{im}(H_{3})^{l}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m}\] \[+a_{4}({\cal B})^{jm}(H_{3})^{i}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{l}_{m}\] \[+a_{5}({\cal B})^{lm}(H_{3})^{i}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m}\,.\] For a b-baryon belonging to the sextet, the effective Hamiltonian is expressed as \[{\cal H}_{\it eff} = b_{1}({\cal C})^{il}(H_{3})^{m}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m} \tag{13}\] \[+b_{2}({\cal C})^{im}(H_{3})^{l}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m}\] \[+b_{3}({\cal C})^{im}(H_{3})^{j}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{l}_{m}\] \[+b_{4}({\cal C})^{lm}(H_{3})^{i}\epsilon_{ijk}(\overline{\cal P})^{k}_{l}(\overline{M})^{j}_{m}\,.\] In the above, we have suppressed the Lorentz indices and spinor structures, concentrating only on the flavor SU(3) indices. Here the \(a_{i}\) and \(b_{i}\) are SU(3) irreducible nonperturbative amplitudes. Topological diagrams for these decay modes are given in Fig. 1. The individual decay amplitudes can be obtained by expanding Eqs. (12) and (13); they are collected in Tables 1 and 2. From these results, one can read off much information. We present some of the interesting properties in the following. 1. Tables 1 and 2 are arranged according to the dependence on the CKM matrix elements: the \(c\to s\) transition is proportional to \(|V_{cs}^{*}|\sim 1\), while the \(c\to d\) transition is Cabibbo suppressed, \(|V_{cd}^{*}|\sim 0.2\). 2.
A number of relations for different decay widths can be readily read off from Table 1: \[\Gamma(\Lambda_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}) = \Gamma(\Lambda_{b}^{0}\to P_{\Sigma^{0}}\pi^{0})\] \[= \Gamma(\Lambda_{b}^{0}\to P_{\Sigma^{+}}\pi^{-})\,,\] \[\Gamma(\Lambda_{b}^{0}\to P_{\Sigma^{-}}K^{+}) = 2\Gamma(\Lambda_{b}^{0}\to P_{\Sigma^{0}}K^{0})\] \[= \Gamma(\Xi_{b}^{-}\to P_{n}K^{-})\,,\] \[\Gamma(\Lambda_{b}^{0}\to P_{p}\pi^{-}) = 2\Gamma(\Lambda_{b}^{0}\to P_{n}\pi^{0})\,,\] \[\Gamma(\Lambda_{b}^{0}\to P_{p}K^{-}) = \Gamma(\Lambda_{b}^{0}\to P_{n}\overline{K}^{0})\,,\] \[\Gamma(\Xi_{b}^{-}\to P_{\Lambda}\pi^{-}) = 2\Gamma(\Xi_{b}^{0}\to P_{\Lambda}\pi^{0})\,,\] \[\Gamma(\Xi_{b}^{0}\to P_{\Lambda}\overline{K}^{0}) = \Gamma(\Xi_{b}^{-}\to P_{\Lambda}K^{-})\,,\] \[\Gamma(\Xi_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}) = \Gamma(\Xi_{b}^{0}\to P_{n}\overline{K}^{0})\,,\] \[\Gamma(\Xi_{b}^{-}\to P_{\Sigma^{-}}\overline{K}^{0}) = \Gamma(\Xi_{b}^{0}\to P_{\Sigma^{+}}K^{-})\] (14) \[= 2\Gamma(\Xi_{b}^{0}\to P_{\Sigma^{0}}\overline{K}^{0})\] \[= 2\Gamma(\Xi_{b}^{-}\to P_{\Sigma^{0}}K^{-})\,,\] \[\Gamma(\Xi_{b}^{-}\to P_{\Sigma^{-}}\pi^{0}) = \Gamma(\Xi_{b}^{-}\to P_{\Sigma^{0}}\pi^{-})\,.\] And the relations deduced from Table 2: \[\Gamma(\Sigma_{b}^{+}\to P_{\Lambda}\pi^{+}) = \Gamma(\Sigma_{b}^{0}\to P_{\Lambda}\pi^{0})\] \[= \Gamma(\Sigma_{b}^{-}\to P_{\Lambda}\pi^{-})\,,\] \[\Gamma(\Sigma_{b}^{+}\to P_{\Lambda}K^{+}) = 2\Gamma(\Sigma_{b}^{0}\to P_{\Lambda}K^{0})\,,\] \[\Gamma(\Sigma_{b}^{+}\to P_{\Sigma^{0}}\pi^{+}) = \Gamma(\Sigma_{b}^{+}\to P_{\Sigma^{+}}\pi^{0})\] \[= \Gamma(\Sigma_{b}^{0}\to P_{\Sigma^{+}}\pi^{-})\] \[= \Gamma(\Sigma_{b}^{-}\to P_{\Sigma^{0}}\pi^{-})\] \[= \Gamma(\Sigma_{b}^{0}\to P_{\Sigma^{-}}\pi^{+})\,,\] \[\Gamma(\Sigma_{b}^{+}\to P_{\Sigma^{+}}K^{0}) = 2\Gamma(\Xi_{b}^{0}\to P_{\Sigma^{+}}\pi^{-})\,,\] \[\Gamma(\Sigma_{b}^{+}\to P_{p}\pi^{0}) = \Gamma(\Sigma_{b}^{0}\to P_{p}\pi^{-})\,,\] \[\Gamma(\Sigma_{b}^{+}\to P_{p}\overline{K}^{0}) = \Gamma(\Sigma_{b}^{-}\to P_{n}K^{-})\] \[= 2\Gamma(\Sigma_{b}^{0}\to P_{n}\overline{K}^{0})\] \[= 2\Gamma(\Sigma_{b}^{0}\to P_{p}K^{-})\,,\] \[\Gamma(\Sigma_{b}^{0}\to P_{\Lambda}\pi^{0}) = \Gamma(\Sigma_{b}^{-}\to P_{\Lambda}\pi^{-})\,,\] \[\Gamma(\Sigma_{b}^{-}\to P_{\Sigma^{-}}K^{0}) = \Gamma(\Sigma_{b}^{-}\to P_{n}\pi^{-})\,,\] \[\Gamma(\Xi_{b}^{-}\to P_{\Lambda}\pi^{-}) = 2\Gamma(\Xi_{b}^{0}\to P_{n}\pi^{0})\,,\] \[\Gamma(\Xi_{b}^{\prime 0}\to P_{\Lambda}\overline{K}^{0}) = \Gamma(\Xi_{b}^{\prime 0}\to P_{\Lambda}K^{-})\,,\] \begin{table} \begin{tabular}{c c c c} \hline channel & amplitude & channel & amplitude \\ \hline \(\Lambda_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}\) & \((a_{5}-a_{4})\,V_{\rm cs}^{*}\) & \(\Lambda_{b}^{0}\to P_{\Sigma^{-}}K^{+}\) & \(-(a_{3}+a_{5})\,V_{\rm cd}^{*}\) \\ \(\Lambda_{b}^{0}\to P_{\Sigma^{0}}\pi^{0}\) & \((a_{5}-a_{4})\,V_{\rm cs}^{*}\) & \(\Lambda_{b}^{0}\to P_{\Sigma^{0}}K^{0}\) & \(\frac{(a_{3}+a_{5})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Lambda_{b}^{0}\to P_{\Sigma^{+}}\pi^{-}\) & \((a_{5}-a_{4})\,V_{\rm cs}^{*}\) & \(\Lambda_{b}^{0}\to P_{\eta}\pi^{-}\) & \((2a_{1}+a_{2}+a_{4}-a_{5})\,V_{\rm cd}^{*}\) \\ \(\Lambda_{b}^{0}\to P_{p}K^{-}\) & \((2a_{1}+a_{2})\,V_{\rm cs}^{*}\) & \(\Lambda_{b}^{0}\to P_{n}\pi^{0}\) & \(-\frac{(2a_{1}+a_{2}+a_{4}-a_{5})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Lambda_{b}^{0}\to P_{n}\overline{K}^{0}\) & \((2a_{1}+a_{2})\,V_{\rm cs}^{*}\) & \(\Lambda_{b}^{0}\to P_{A}K^{0}\) & \(-\frac{(4a_{1}+2a_{2}+a_{3}+2a_{4}-a_{5})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{0}\to 
P_{\Lambda}\overline{K}^{0}\) & \(-\frac{(2a_{1}+a_{2}+2a_{3}+a_{4}+a_{5})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{\Sigma^{0}}\pi^{0}\) & \(\frac{(-2a_{1}-a_{2}+a_{3}+a_{4})}{2}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{0}\to P_{\Sigma^{0}}\overline{K}^{0}\) & \(\frac{(2a_{1}+a_{2}+a_{4}-a_{5})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{\Sigma^{+}}\pi^{-}\) & \(-(2a_{1}+a_{2})\,V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{0}\to P_{\Sigma^{+}}K^{-}\) & \(-(2a_{1}+a_{2}+a_{4}-a_{5})\,V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{p}K^{-}\) & \((a_{4}-a_{5})\,V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{-}\to P_{\Lambda}K^{-}\) & \(\frac{(2a_{1}+a_{2}+a_{4}+a_{5})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{\Lambda}\pi^{0}\) & \(\frac{(2a_{1}+a_{2}-a_{3}+a_{4}-2a_{5})}{2\sqrt{3}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{-}\to P_{\Sigma^{-}}\overline{K}^{0}\) & \((2a_{1}+a_{2}+a_{4}-a_{5})\,V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{n}\overline{K}^{0}\) & \((a_{3}+a_{4})\,V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{-}\to P_{\Sigma^{0}}K^{-}\) & \(\frac{(2a_{1}+a_{2}+a_{4}-a_{5})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{-}\to P_{\Lambda}\pi^{-}\) & \(\frac{(2a_{1}+a_{2}-a_{3}+a_{4}-2a_{5})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ & & \(\Xi_{b}^{-}\to P_{\Sigma^{-}}\pi^{0}\) & \(-\frac{(2a_{1}+a_{2}+a_{3}+a_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ & & \(\Xi_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}\) & \((a_{3}+a_{4})\,V_{\rm cd}^{*}\) \\ & & \(\Xi_{b}^{-}\to P_{n}K^{-}\) & \(-\frac{(a_{3}+a_{5})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ & & \(\Xi_{b}^{-}\to P_{\Sigma^{0}}\pi^{-}\) & \(\frac{(2a_{1}+a_{2}+a_{3}+a_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \hline \end{tabular} \end{table} Table 1: Amplitudes for b-baryon (antitriplet) decays into a pentaquark and a light meson. Figure 1: Topological diagrams for a b-baryon decays into an octet pentaquark and a light meson. ments, a rigorous analysis would be necessary in future [58; 59]. ### Decays of B-meson At the hadron level, for a B-meson which belongs to an SU(3) antitriplet decays into an octet pentaquark and a light antibaryon, the corresponding effective Hamiltonian is constructed as \[{\cal H}_{\it eff} = c_{1}(B)_{n}(H_{3})^{n}\epsilon_{ijk}(\overline{\cal P})_{k}^{ \dagger}\epsilon^{ijm}(T_{\rm S})_{m}^{l} \tag{16}\] \[+c_{2}(B)_{n}(H_{3})^{l}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{ijm}(T_{\rm S})_{m}^{n}\] \[+c_{3}(B)_{n}(H_{3})^{n}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{j}\] \[+c_{4}(B)_{n}(H_{3})^{l}\epsilon_{ijk}(\overline{\cal P})_{k}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{j}\] \[+c_{5}(B)_{n}(H_{3})^{j}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{n}\] \[+c_{6}(B)_{n}(H_{3})^{j}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{l}\] \[+c_{6}(B)_{n}(H_{3})^{j}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{l}\] \[+c_{6}(B)_{n}(H_{3})^{j}\epsilon_{ijk}(\overline{\cal P})_{l}^{ \dagger}\epsilon^{im}(T_{\rm S})_{m}^{l}\] The topological diagrams for these decays are given in Fig. 2. The decay amplitudes for different channels can be deduced from the Hamiltonian in Eq. (16), they are displayed in Table 3. 
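Expanding such Hamiltonians by hand is tedious and error-prone. The sympy sketch below, a minimal illustration rather than the authors' actual procedure, automates the index contraction for the antitriplet Hamiltonian of Eq. (12) and extracts the coefficient of a chosen monomial of fields. The mapping of matrix entries to physical hadrons follows Eqs. (1), (3) and (4); the particular entry indices queried at the end are illustrative placeholders, and no attempt is made to match the tables' phase conventions.

```python
import itertools
import sympy as sp
from sympy import LeviCivita

# Generic flavor matrices; identifying entries with physical hadrons follows
# Eqs. (1), (3) and (4).  This sketch only automates the index bookkeeping.
B = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"B{i+1}{j+1}"))
B = B - B.T                                   # antitriplet is antisymmetric
Pb = sp.Matrix(3, 3, lambda k, l: sp.Symbol(f"Pb{k+1}{l+1}"))   # (P-bar)^k_l
Mb = sp.Matrix(3, 3, lambda l, m: sp.Symbol(f"Mb{l+1}{m+1}"))   # (M-bar)^l_m
Vcd, Vcs = sp.symbols("Vcd Vcs")
H3 = [0, Vcd, Vcs]                            # (H_3)^i for b -> c cbar (d, s)
a1, a2, a3, a4, a5 = sp.symbols("a1:6")

H = sp.S.Zero
for i, j, k, l, m in itertools.product(range(3), repeat=5):
    eps = LeviCivita(i, j, k)
    if eps == 0:
        continue
    H += eps * (a1 * B[i, j] * H3[m] * Pb[k, l] * Mb[l, m]
                + a2 * B[i, l] * H3[m] * Pb[k, l] * Mb[j, m]
                + a3 * B[i, m] * H3[l] * Pb[k, l] * Mb[j, m]
                + a4 * B[j, m] * H3[i] * Pb[k, l] * Mb[l, m]
                + a5 * B[l, m] * H3[i] * Pb[k, l] * Mb[j, m])

# Coefficient of one field monomial, e.g. B^{12} x (P-bar)^1_2 x (M-bar)^2_1:
coeff = sp.expand(H).coeff(sp.Symbol("B12")).coeff(sp.Symbol("Pb12")).coeff(sp.Symbol("Mb21"))
print(coeff)   # a combination of a_i times Vcd or Vcs, one table entry
```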
From these amplitudes, we can find the relations for decay widths in the SU(3) symmetry limit: \begin{table} \begin{tabular}{c c c c} \hline channel & amplitude & channel & amplitude \\ \hline \(\Sigma_{b}^{+}\to P_{\Lambda}\pi^{+}\) & \(-\frac{(2b_{2}+b_{3}+b_{4})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{+}\to P_{\Lambda}K^{+}\) & \(\frac{(-b_{2}-2b_{3}+b_{4})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{+}\to P_{\Sigma^{0}}\pi^{+}\) & \(\frac{(b_{3}-b_{4})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{+}\to P_{\Sigma^{0}}K^{+}\) & \(\frac{(b_{3}+b_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{+}\to P_{\Sigma^{+}}\pi^{0}\) & \(\frac{(b_{4}-b_{3})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{+}\to P_{\Sigma^{+}}K^{0}\) & \(-b_{1}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{+}\to P_{p}\overline{K}^{0}\) & \(b_{1}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{+}\to P_{n}\pi^{+}\) & \((b_{2}+b_{3})\,V_{\rm c}^{*}\) \\ \(\Sigma_{b}^{0}\to P_{\Lambda}\pi^{0}\) & \(\frac{(2b_{2}+b_{3}+b_{4})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{0}\to P_{\Lambda}K^{0}\) & \(-\frac{(b_{2}+2b_{3}-b_{4})}{2\sqrt{3}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}\) & \(\frac{(b_{3}-b_{4})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{0}\to P_{\Sigma^{-}}K^{+}\) & \(\frac{(b_{2}+b_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{0}\to P_{\Sigma^{+}}\pi^{-}\) & \(\frac{(b_{4}-b_{3})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{+}\to P_{p}\pi^{0}\) & \(-\frac{(b_{1}-b_{3}+b_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{0}\to P_{\mu}K^{-}\) & \(-\frac{b_{1}}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{0}\to P_{\Sigma^{0}}K^{0}\) & \(\frac{(2b_{1}+b_{2}+b_{4})}{2\sqrt{6}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{0}\to P_{\mu}\overline{K}^{0}\) & \(\frac{b_{3}}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{0}\to P_{p}\pi^{-}\) & \(-\frac{(b_{1}-b_{3}+b_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{-}\to P_{\Sigma^{-}}\pi^{0}\) & \(\frac{(b_{4}-b_{3})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{-}\to P_{\Sigma^{-}}K^{0}\) & \((b_{1}+b_{2}+b_{4})\,V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{-}\to P_{\Sigma^{0}}\pi^{-}\) & \(\frac{(b_{3}-b_{4})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{-}\to P_{\Lambda}\pi^{-}\) & \(-(b_{1}+b_{2}+b_{4})\,V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{-}\to P_{\mu}K^{-}\) & \(-\frac{b_{1}-b_{3}+b_{4})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Sigma_{b}^{0}\to P_{\mu}\pi^{0}\) & \(\frac{(3b_{1}+b_{2}-b_{3}+2b_{4})}{2\sqrt{6}}V_{\rm cd}^{*}\) \\ \(\Sigma_{b}^{+}\to P_{\Lambda}\overline{K}^{0}\) & \(-\frac{(3b_{1}+2b_{2}+b_{3}+b_{4})}{2\sqrt{3}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{\Sigma^{-}}\pi^{+}\) & \(-\frac{(b_{3}+b_{3})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{\prime 0}\to P_{\Sigma^{0}}\overline{K}^{0}\) & \(-\frac{(b_{1}-b_{3}+b_{2})}{2}V_{\rm cs}^{*}\) & \(\Xi_{b}^{\prime 0}\to P_{\Sigma^{+}}\pi^{-}\) & \(\frac{(b_{1}-b_{2}-b_{3})}{2\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{\prime 0}\to P_{\Sigma^{+}}K^{-}\) & \(\frac{(b_{1}-b_{3}+b_{4})}{\sqrt{2}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{\Sigma^{+}}\pi^{-}\) & \(\frac{b_{1}^{\prime}-b_{2}}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{\prime-}\to P_{\Lambda}K^{-}\) & \(\frac{(3b_{1}+2b_{2}+b_{3}+b_{4})}{2\sqrt{3}}V_{\rm cs}^{*}\) & \(\Xi_{b}^{0}\to P_{P}K^{-}\) & \(\frac{(b_{1}-b_{4})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{\prime-}\to P_{\Sigma^{-}}\overline{K}^{0}\) & \(-\frac{(b_{1}-b_{3}+b_{4})}{2}V_{\rm cs}^{*}\) & \(\Xi_{b}^{\prime 0}\to P_{\Lambda}\overline{K}^{0}\) & \(\frac{(b_{2}+b_{3})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\Xi_{b}^{\prime-}\to P_{\Sigma^{0}}K^{-}\) & 
\(-\frac{(b_{1}-b_{3}+b_{4})}{2}V_{\rm cs}^{*}\) & \(\Xi_{b}^{\prime-}\to P_{\Lambda}\pi^{-}\) & \(\frac{(3b_{1}+b_{2}+b_{3})}{ \begin{table} \begin{tabular}{l c c c} \hline channel & amplitude & channel & amplitude \\ \hline \(B^{-}\to P_{\Lambda}\overline{p}\) & \(-\frac{(4c_{0}+2c_{4}+2c_{6}+c_{6})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(B^{-}\to P_{\Sigma^{-}}\overline{\Lambda}^{0}\) & \(\frac{(2c_{0}+c_{4}+c_{6}-c_{6})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ \(B^{-}\to P_{\Sigma^{-}}\overline{n}\) & \(-c_{6}V_{\rm cs}^{*}\) & \(B^{-}\to P_{\Lambda}\overline{\Sigma}^{-}\) & \(\frac{(2c_{0}+c_{4}+c_{5}-c_{6})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ \(B^{-}\to P_{\Sigma^{0}}\overline{p}\) & \(-\frac{c_{6}}{\sqrt{2}}V_{\rm cs}^{*}\) & \(B^{-}\to P_{\Sigma^{-}}\overline{\Sigma}^{0}\) & \(\frac{(2c_{0}+c_{4}+c_{5}+c_{6})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\overline{B}^{0}\to P_{\Lambda}\overline{n}\) & \(-\frac{(4c_{2}+2c_{4}+2c_{5}+c_{6})}{\sqrt{6}}V_{\rm cs}^{*}\) & \(B^{-}\to P_{\Sigma^{0}}\overline{\Sigma}^{-}\) & \(-\frac{(2c_{0}+c_{4}+c_{5}+c_{6})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \(\overline{B}^{0}\to P_{\Sigma^{0}}\overline{n}\) & \(\frac{c_{0}}{\sqrt{2}}V_{\rm cs}^{*}\) & \(B^{-}\to P_{n}\overline{p}\) & \((2c_{2}+c_{4}+c_{5})\,V_{\rm cd}^{*}\) \\ \(\overline{B}^{0}\to P_{\Sigma^{+}}\overline{p}\) & \(-c_{6}V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Lambda}\overline{\Lambda}^{0}\) & \(\frac{(12c_{1}+2c_{2}+c_{6}+c_{3}+c_{4}+c_{5}+c_{6})}{6}V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{\Lambda}\overline{\Lambda}^{0}\) & \(\frac{(6c_{1}+c_{2}+3c_{3}+2c_{4}+2c_{5}+c_{6})}{3}V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Lambda}\overline{\Sigma}^{0}\) & \(-\frac{(2c_{0}+c_{4}+c_{5}-c_{6})}{2\sqrt{3}}V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Sigma}^{+}\) & \((2c_{1}+c_{3}+c_{6})\,V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Sigma^{-}}\overline{\Sigma}^{+}\) & \((2c_{1}+2c_{2}+c_{3}+c_{4}+c_{5}+c_{6})\,V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{\Sigma^{0}}\overline{\Sigma}^{0}\) & \((2c_{1}+c_{3}+c_{6})\,V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Sigma^{0}}\overline{\Lambda}^{0}\) & \(-\frac{(2c_{2}+c_{4}+c_{5}-c_{6})}{2\sqrt{3}}V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{\Sigma^{+}}\overline{\Sigma}^{-}\) & \((2c_{1}+c_{3}+c_{6})\,V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Sigma^{0}}\overline{\Sigma}^{0}\) & \(\frac{(4c_{1}+2c_{2}+2c_{3}+c_{4}+c_{5}+c_{6})}{2}V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{p}\overline{p}\) & \((2c_{1}+c_{3})\,V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{\Sigma^{+}}\overline{\Sigma}^{-}\) & \((2c_{1}+c_{3})\,V_{\rm cd}^{*}\) \\ \(\overline{B}_{s}^{0}\to P_{n}\overline{n}\) & \((2c_{1}+c_{3})\,V_{\rm cs}^{*}\) & \(\overline{B}^{0}\to P_{p}\overline{p}\) & \((2c_{1}+c_{3}+c_{6})\,V_{\rm cd}^{*}\) \\ & & \(\overline{B}^{0}\to P_{n}\overline{n}\) & \((2c_{1}+2c_{2}+c_{3}+c_{4}+c_{5}+c_{6})\,V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{\Lambda}\overline{\Xi}^{0}\) & \(\frac{(2c_{2}+c_{4}+c_{5}+2c_{6})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+}\) & \((2c_{2}+c_{4}+c_{5})\,V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{\Sigma^{0}}\overline{\Xi}^{0}\) & \(-\frac{(2c_{2}+c_{4}+c_{5})}{\sqrt{2}}V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{p}\overline{\Sigma}^{-}\) & \(-c_{6}V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{n}\overline{\Lambda}^{0}\) & \(-\frac{(4c_{2}+2c_{4}+2c_{5}+c_{6})}{\sqrt{6}}V_{\rm cd}^{*}\) \\ & & \(\overline{B}_{s}^{0}\to P_{n}\overline{\Sigma}^{0}\) & 
\(\frac{c_{0}}{\sqrt{2}}V_{\rm cd}^{*}\) \\ \hline \end{tabular} \begin{tabular}{l c c} \(\Gamma(B^{-}\to P_{\Lambda}\overline{p})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Lambda}\overline{\eta})\), \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{p}\overline{\Sigma}^{-})\) & \(2\Gamma(\overline{B}_{s}^{0}\to P_{n}\overline{\Sigma}^{0})\), \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{n}\overline{\eta})\), \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{0}}\overline{\Sigma}^{0})\) \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{0}}\overline{\Sigma}^{0})\) \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{-})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{-})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{-})\) \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{\Sigma^{-}}\overline{\Xi}^{+})\) \\ \(\Gamma(B^{-}\to P_{n}\overline{p})\) & \(=\)\(2\Gamma(\overline{B}_{s}^{-}\to P_{\Sigma^{0}}\overline{\Xi}^{0})\), \\ \(\Gamma(\overline{B}_{s}^{0}\to P_{p}\overline{p})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{p}\overline{p})\) & \(\Gamma(\overline{B}_{s}^{0}\to P_{n}\overline{n})\). \\ \end{tabular} \end{table} Table 3: Amplitudes for B-meson decays into a pentaquark and a light baryon. Figure Amplitude analyses of \(B_{s}^{0}\to J/\psi\,p\,\bar{p}\) and \(B^{-}\to J/\psi\Lambda\,\overline{p}\) were performed by LHCb Collaboration recently, and evidences for charmonium pentaquarks were found. Unlike the baryonic decay, the mesonic decay offers a cleaner environment to search for new pentaquark. The relations in Eq. (17) can be utilized to find new decay channels; for instance, the Cabibbo-allowed processes \(\overline{B}^{0}\to P_{\Lambda}\overline{n}\), \(\overline{B}^{0}\to P_{\Sigma^{+}}\overline{p}\) and \(\overline{B}^{0}_{s}\to P_{n}\overline{n}\) have the potential to be experimentally discovered in future. ## IV Strong decay of pentaquark Particular decay processes of \(P_{c}\) states in the detectors can be adapted as signatures to reconstruct these exotic states. Currently, the experimental searches for pentaquark mainly focus on the strong decays of \(P_{c}\), like the discoveries of \(P_{c}(4312)\to J/\psi\,p\)[6] and \(P_{cs}(4459)\to J/\psi\Lambda\)[7]. The effective Hamiltonian for an octet pentaquark decays into \(J/\psi\) plus a light baryon is given as \[{\cal H}_{\it eff} = d_{1}e^{ijk}({\cal P})^{l}_{k}\epsilon_{ilm}(\overline{T}_{8}) ^{m}_{l}J/\psi \tag{18}\] \[+d_{2}\epsilon^{ijk}({\cal P})^{l}_{k}\epsilon_{ilm}(\overline{T} _{8})^{m}_{j}J/\psi\,.\] These processes belong to strong decay, thus there is no effective vertices \(H_{3}\), this is a unique property comparing with weak decays of b-baryon and B-meson. The effective Hamiltonian in Eq. 
(18) indicates that all the decay widths are the same: \[\Gamma(P_{\Lambda}\to\Lambda^{0}J/\psi) = \Gamma(P_{\Sigma^{-}}\to\Sigma^{-}J/\psi) \tag{19}\] \[= \Gamma(P_{\Sigma^{0}}\to\Sigma^{0}J/\psi)\] \[= \Gamma(P_{\Sigma^{+}}\to\Sigma^{+}J/\psi)\] \[= \Gamma(P_{p}\to pJ/\psi)\] \[= \Gamma(P_{n}\to nJ/\psi)\,.\] Other possible processes include an octet pentaquark decaying into an anticharmed meson plus a singly charmed baryon in the antitriplet or the sextet: \[{\cal H}_{\it eff} = e_{1}\epsilon^{ijk}({\cal P})^{l}_{k}(\overline{T}_{\bf c\bar{3}})_{ij}D_{l} \tag{20}\] \[+e_{2}\epsilon^{ijk}({\cal P})^{l}_{k}(\overline{T}_{\bf c\bar{3}})_{il}D_{j}\] \[+e_{3}\epsilon^{ijk}({\cal P})^{l}_{k}(\overline{T}_{\bf c\bar{6}})_{il}D_{j}\,.\] The corresponding decay amplitudes are given in Table 4, which yields the following relations between the decay widths: \[2\Gamma(P_{p}\to\Lambda^{+}_{c}\overline{D}^{0}) = 2\Gamma(P_{n}\to\Lambda^{+}_{c}D^{-}) \tag{21}\] \[= 2\Gamma(P_{\Sigma^{-}}\to\Xi^{0}_{c}D^{-})\] \[= 2\Gamma(P_{\Sigma^{+}}\to\Xi^{+}_{c}\overline{D}^{0})\] \[= 3\Gamma(P_{\Lambda}\to\Lambda^{+}_{c}D^{-}_{s})\] \[= 4\Gamma(P_{\Sigma^{0}}\to\Xi^{+}_{c}D^{-})\] \[= 4\Gamma(P_{\Sigma^{0}}\to\Xi^{0}_{c}\overline{D}^{0})\] \[= 4\Gamma(P_{\Lambda}\to\Xi^{+}_{c}D^{-})\] \[= 6\Gamma(P_{\Sigma^{+}}\to\Xi^{\prime+}_{c}\overline{D}^{0})\] \[= 6\Gamma(P_{\Sigma^{-}}\to\Xi^{\prime 0}_{c}D^{-})\] \[= 6\Gamma(P_{n}\to\Sigma^{+}_{c}D^{-})\] \[= 12\Gamma(P_{\Sigma^{0}}\to\Xi^{\prime 0}_{c}\overline{D}^{0})\] \[= 12\Gamma(P_{\Sigma^{0}}\to\Xi^{\prime+}_{c}D^{-})\,.\] If we take \(P_{c}(4312)\) as \(P_{p}\) and \(P_{cs}(4459)\) as \(P_{\Lambda}\) in Eq. (3), the discovery cascade decay modes reported by the LHCb Collaboration are \[\Lambda^{0}_{b}\to P_{p}\,K^{-}\to J/\psi\,p\,K^{-}\,,\] \[\Xi^{-}_{b}\to P_{\Lambda}\,K^{-}\to J/\psi\,\Lambda\,K^{-}\,. \tag{22}\] Having the results in Sec. III and Sec. IV, we can write down the cascade decay modes of b-baryons which might be useful for finding new pentaquark states. In addition, there are also cascade decay modes of B-mesons with a probability of being experimentally discovered. All of them are collected in Table 5. One might notice that the singly Cabibbo-suppressed decays of b-baryons are also presented in this table, since a pentaquark has been identified through the Cabibbo-suppressed process \(\Lambda^{0}_{b}\to P_{c}\,\pi^{-}\to J/\psi\,p\,\pi^{-}\) by LHCb [60]. Given that most of the multiquark states \((X,Y,Z,P_{c})\) have been observed in the decays of B-mesons and b-baryons, we anticipate that some of the cascade modes in Table 5 can be measured in the near future. ## V Conclusions In summary, we have studied the production of pentaquarks through weak decays of b-baryons and B-mesons under the flavor SU(3) symmetry. Amplitudes for various decay channels have been parametrized in terms of a few SU(3) irreducible amplitudes, and a number of testable relations were provided. The strong decays of charmonium pentaquarks have been discussed as well. Using these results, we have listed some cascade decay modes which are likely to be useful for reconstructing pentaquark states in experiments. Finally, we stress that charmonium pentaquarks provide us with a unique platform for understanding the nature of the strong force. The pentaquark spectrum with hidden \(c\bar{c}\) and three light quarks is very rich.
Without a reliable theory for many-body quark interactions, we have to speculate about the reason why the less massive pentaquarks, whose components are all light quarks, have not been seen yet. The flavor SU(3) analysis has the potential to help interpret the results of existing and future experimental searches for charmonium pentaquarks. Finding the new cascade decay modes of b-baryons and B-mesons in Table 5 could provide crucial evidence which would help resolve the current and longstanding puzzles in the exotic charmonium sector. ## Acknowledgements We thank Prof. Wei Wang and Dr. Ya-Teng Zhang for valuable discussions. W.H.H. and J.X. are supported in part by the National Natural Science Foundation of China under Grant No. 12105247 and the China Postdoctoral Science Foundation under Grant No. 2021M702957. Y.X. is supported in part by the National Natural Science Foundation of China under Grant No. 12005294.
2310.18570
The Roelcke Precompactness and Compactifications of Transformations Groups of Discrete Spaces and Homogeneous Chains
The Roelcke precompactness of transformation groups of discrete spaces and chains in the permutation topology and LOTS in the topology of pointwise convergence is studied. For ultratransitive actions compactifications of transformation groups using the Ellis construction are built.
B. V. Sorin
2023-10-28T02:35:12Z
http://arxiv.org/abs/2310.18570v2
The Roelcke Precompactness and Compactifications of Transformations Groups of Discrete Spaces and Homogeneous Chains ###### Abstract The Roelcke precompactness of transformation groups of discrete spaces and chains in the permutation topology, and of LOTS in the topology of pointwise convergence, is studied. For ultratransitive actions, compactifications of transformation groups are built using the Ellis construction. ## 1 Introduction and preliminary remarks Studying "big" topological groups, we face the problem of evaluating their "smallness" in an appropriate sense [27]. In particular, one seeks a uniformity on a group, compatible with its group structure, which is totally bounded. The Roelcke uniformity is the lower uniformity on the group (the greatest lower bound of the right and left uniformities) [28]. V. Uspenskij initiated the search for Roelcke compactifications of topological transformation groups by considering homeomorphisms as points of hyperspaces (graphs in the square of the space) [32, 33, 34]. T. Tsankov [31] gave a characterization of Roelcke precompact subgroups of permutation groups of countable discrete spaces \(X\) in the permutation topology (the pointwise convergence topology when \(X\) is discrete) using oligomorphism of the group action. An approach to the question of when the Roelcke precompactness of the acting group follows from the total boundedness of the maximal equiuniformity on the phase space can be found in [28, Proposition 9.17] and [10, Proposition 6.4]. The construction of an enveloping Ellis semigroup (for transformation groups on which the topology of pointwise convergence is an admissible group topology) is used in [30]. This construction connects the maximal equiuniformity on a phase space with the Roelcke uniformity via an intermediate equiuniformity determined by a system of small subgroups introduced in [18]. The latter uniformity coincides with the Roelcke uniformity in the case of the permutation topology on a group. The present work studies the Roelcke precompactness of transformation groups of discrete spaces and chains in the permutation topology, and of LOTS in the topology of pointwise convergence. For ultratransitive actions, compactifications of transformation groups are built using the Ellis construction. The main techniques used to obtain the results of the work are presented in §2. R. Ellis proposed a construction of a compactification of the transformation group of a compactum which is a semitopological semigroup -- the enveloping Ellis semigroup [6]. In [30] the Ellis construction was used for building compactifications of transformation groups of compacta for which the topology of pointwise convergence is an admissible group topology, as well as for finding a sufficient condition for their Roelcke precompactness. In §2 the Ellis construction is extended to the homeomorphism group \(G\) of a non-compact space \(X\). The condition of compactness of the \(G\)-space \(X\) is replaced by the condition that \(X\) is \(G\)-Tikhonoff, i.e. the presence of an equiuniformity on \(X\). A sufficient condition under which the uniformity on a group generated by its embedding into a product of uniform spaces (the use of the Ellis construction) coincides with the uniformity built on a family of small subgroups (point stabilizers) [18, §4] is found in Theorem 2.1. The fact that the latter uniformity is totally bounded ensures the Roelcke precompactness of the group [18, Corollary 4.5].
The combination of these two facts yields a sufficient condition for the Roelcke precompactness of groups in Corollary 2.2. The topology of pointwise convergence is the smallest admissible group topology. This fact allows studying the question of the Roelcke precompactness of a transformation group in the permutation topology (Corollary 1.2). The Roelcke precompactness of subgroups of permutation groups of discrete spaces is studied in §3. The equality of the Roelcke uniformity and the uniformity built on the family of point stabilizers is established (Proposition 3.1). Theorems 3.3 and 3.4 present sufficient conditions and a criterion for the Roelcke precompactness of subgroups of permutation groups of discrete spaces in the permutation topology, using maximal equiuniformities on phase spaces and their connection with oligomorphism of actions. Theorem 3.5 provides a complete solution to the question of the Roelcke precompactness of automorphism groups of simple chains. It is restated as follows: **Corollary 3.6**. Let \(X\) be a simple chain. (1) \(X\) is rigid \(\Longleftrightarrow\) the group \(\operatorname{Aut}(X)\) is not Roelcke precompact in any admissible group topology for its action on the homogeneous GO-spaces corresponding to \(X\). (2) \(X\) is ultrahomogeneous \(\Longleftrightarrow\) the group \((\operatorname{Aut}(X),\tau_{\partial})\) is Roelcke precompact \(\Longleftrightarrow\) the group \((\operatorname{Aut}(X),\tau_{p})\) is Roelcke precompact. (3) The group \((\operatorname{Aut}(X),\tau_{\partial})\) is Roelcke precompact iff the group \((\operatorname{Aut}(X),\tau_{p})\) is Roelcke precompact. The Roelcke precompactness of ultratransitive subgroups of the automorphism group of ultrahomogeneous (and cyclic) chains in the topology of pointwise convergence (the permutation topology in the terms of this paper) is proved in [10, Proposition 6.6]. Examples 3.7 show the possibilities of applying the results of §3. A sufficient condition for the equality of the Roelcke uniformity and the uniformity obtained using the Ellis construction (Theorem 4.1) is given in §4. This result makes it possible to show (Corollary 4.2) that the Roelcke compactification of an ultrahomogeneous transformation group in the permutation topology is a semitopological semigroup -- the enveloping Ellis semigroup. Corollary 4.3 describes the (Roelcke) compactifications of automorphism groups of ultrahomogeneous chains which are enveloping Ellis semigroups. It uses the compactifications of their phase spaces described in Lemma 1.12. The construction of compactifications of chains from [8] and the construction of the transition from a GO-space to a linearly ordered space from [23] are used. The proof of point (2) of Corollary 4.3 is an extension of Theorem 3 of [30] from the compact case to the case of ultrahomogeneous chains. See [32, §4] for more information about algebraic structures on Roelcke compactifications. The results of the study of the Roelcke precompactness of automorphism groups of chains that are not simple are presented in §5. Theorem 5.2 describes the structure of automorphism groups of chains that are not simple: they are semidirect (topological) products. Theorem 8 from [30] is used. In Theorems 5.3 and 5.6, characterizations of the Roelcke precompactness of automorphism groups of chains that are not simple are given using this structure.
From the inverse spectrum constructed in the proof of Theorem 5.6 it is clear that the topology of pointwise convergence on the automorphism group of a LOTS corresponding to a chain that is not simple and does not contain simple proper regular intervals is "approximated" by the permutation topologies on the automorphism groups of its quotient spaces (by the equivalence relation determined by a regular interval). Corollaries 5.4 and 3.6 solve the problem of the equivalence of the Roelcke precompactness of the groups \((\mathrm{Aut}(X),\tau_{p})\) and \((\mathrm{Aut}(X),\tau_{\partial})\) in the case of homogeneous chains that are either simple or have a simple proper regular interval. In the preliminary remarks, the consideration of homogeneous GO-spaces provides motivation for the study of the permutation topology. Maximal equivariant compactifications of ultrahomogeneous LOTS and discrete chains are also constructed. The author is grateful to Prof. K. L. Kozlov, Prof. V. G. Pestov and Prof. M. G. Megrelishvili for providing useful information related to the subject of the study. All sets are assumed to be non-empty and all (topological) spaces Hausdorff. The uniformities on a space are compatible with its topology. Terminology and notation from [7] and [28] are used. \(\mathbb{Q}\) denotes the rational, \(\mathbb{P}\) the irrational, and \(\mathbb{R}\) the real numbers. For a family \(\Omega\) of subsets of \(X\) and \(Y\subset X\), \(\Omega\wedge Y=\{O\cap Y|O\in\Omega\}\). \(N_{G}(e)\) is the family of open neighbourhoods of the unit of the topological group \(G\). ### Basic properties of Roelcke precompactness All the necessary information about the Roelcke uniformity \(L\wedge R\) on a group \(G\) can be found in [28]. A group \(G\) is Roelcke precompact if the Roelcke uniformity is totally bounded. Let us recall the main properties of Roelcke precompactness of topological groups. **Facts 1.** (1) A dense subgroup \(H\) of a group \(G\) is Roelcke precompact iff the group \(G\) is Roelcke precompact [28, Proposition 3.24] and [31, Proposition 2.2]. Moreover, the Roelcke compactifications of \(H\) and \(G\) are isomorphic [7, Corollary 8.3.11]. (2) An open subgroup of a Roelcke precompact group is Roelcke precompact [28, Proposition 3.24]. (3) A continuous homomorphic image of a Roelcke precompact group is a Roelcke precompact group [31, Proposition 2.2]. (4) If a normal subgroup \(H\) and the factor group \(G/H\) of the group \(G\) are Roelcke precompact, then the group \(G\) is Roelcke precompact [31, Proposition 2.2]. (5) The inverse limit of an inverse spectrum of Roelcke precompact groups and homomorphisms is Roelcke precompact [31, Proposition 2.2]. (6) A product of topological groups is Roelcke precompact iff the factors are Roelcke precompact groups [28, Proposition 3.35]. If two group topologies \(\sigma\leq\tau\) are given on the group \(G\), then from the definition of the bases of the right, left, two-sided and Roelcke uniformities it follows that \(R_{\sigma}\subset R_{\tau}\), \(L_{\sigma}\subset L_{\tau}\), \((L\lor R)_{\sigma}\subset(L\lor R)_{\tau}\), \((L\wedge R)_{\sigma}\subset(L\wedge R)_{\tau}\) for the uniformities on the group in the corresponding topologies. We consider infinite cardinals \(\kappa\). The smallest cardinal \(\kappa\) such that the uniformity \(\mathcal{U}\) has a base of coverings of cardinality at most \(\kappa\) is called the narrowness index \(\mathrm{ib}(\mathcal{U})\) of the uniformity \(\mathcal{U}\). The concept was introduced by I.
Guran [13], and it is called the index of boundedness in [4, Ch. 1, §1]. The uniformity \(\mathcal{U}\) is totally bounded if there exists a base of finite coverings. **Lemma 1.1**.: _If two group topologies \(\sigma\leq\tau\) are given on the group \(G\), then_ \((1)\;\mathrm{ib}(R_{\sigma})\leq\mathrm{ib}(R_{\tau}),\mathrm{ib}(L_{\sigma}) \leq\mathrm{ib}(L_{\tau}),\mathrm{ib}(L\lor R)_{\sigma}\leq\mathrm{ib}(L\lor R )_{\tau},\mathrm{ib}(L\wedge R)_{\sigma}\leq\mathrm{ib}(L\wedge R)_{\tau}.\)__ \((2)\) _From the total boundedness of the uniformity \(R_{\tau}\)_(_respectively \(L_{\tau}\)_, \((L\lor R)_{\tau}\), \((L\wedge R)_{\tau}\)_) _it follows that the uniformity \(R_{\sigma}\)_(_respectively \(L_{\sigma}\), \((L\lor R)_{\sigma}\), \((L\wedge R)_{\sigma}\)_) is totally bounded._ \(\square\) ### Topologization of a transformation group The action of the group \(G\) on the set \(X\) is called effective if the kernel of the action \(\{g\in G|g(x)=x,\forall x\in X\}\) is the unit of \(G\). If \(G\) acts effectively on \(X\), then \(G\subset\mathrm{S}(X)\), where \(\mathrm{S}(X)\) is the permutation group of \(X\). The subgroup \(\mathrm{St}_{x_{1},\ldots,x_{n}}=\{g\in G|g(x_{i})=x_{i},i=1,\ldots,n\}\) is a stabilizer in the group \(G\). If \(X\) is a topological space and \(\mathrm{Hom}(X)\) is the group of its homeomorphisms, then the group \(G\) acts effectively on the space \(X\) if \(G\subset\mathrm{Hom}(X)\). If \(X\) is a discrete space, then \(\mathrm{Hom}(X)=\mathrm{S}(X)\). A topology in which the group is a topological group and its action \(G\curvearrowright X\) is continuous is called an admissible group topology [1] on the group \(G\) acting effectively on the topological space \(X\). For an effective action of the group \(G\) on a discrete space \(X\), the permutation topology \(\tau_{\partial}\), in which a subbase of neighbourhoods of the unit is formed by the open-closed subgroups that are stabilizers of points, is the smallest admissible group topology. The group \(G\) in the permutation topology is non-Archimedean (a base of neighbourhoods of the unit is formed by open-closed subgroups). If the topology of pointwise convergence \(\tau_{p}\) (a subbase is formed by the sets of the form \([x,O]=\{f\in G|f(x)\in O\}\), \(O\) open in \(X\)) is an admissible group topology on the group \(G\) of homeomorphisms of the space \(X\) acting effectively, then it is the smallest admissible group topology [18, Lemma 3.1]; moreover \(\tau_{\partial}\geq\tau_{p}\), \(\tau_{\partial}\) is an admissible group topology, and \(\tau_{\partial}=\tau_{p}\) if \(X\) is a discrete space. From Lemma 1.1 we have **Corollary 1.2**.: _If \((G,\tau_{\partial})\) is Roelcke precompact, then \((G,\tau_{p})\) is Roelcke precompact._ _If \((G,\tau_{p})\) is not Roelcke precompact, then the group \(G\) is not Roelcke precompact in any admissible group topology. \(\square\)_ **Remark 1.3**.: _Generally speaking, the Roelcke precompactness of \((G,\tau_{p})\) does not imply the Roelcke precompactness of \((G,\tau_{\partial})\)._ Indeed, the action of an (infinite) topological group \(G\) on itself by multiplication on the left is uniformly equicontinuous with respect to the left uniformity \(L\), and the topology of pointwise convergence \(\tau_{p}\) is an admissible group topology on \(G\) coinciding with the topology of \(G\) (see, for example, [18, Example 3.6]). If \(G\) is compact, then \((G,\tau_{p})\) is Roelcke precompact. However, in the permutation topology \(G\) is discrete (and infinite). Hence, it is not Roelcke precompact.
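A minimal concrete instance of Remark 1.3 (added here for orientation; any infinite compact group would serve, the circle group is chosen only for definiteness): let \(G=\mathbb{T}=\mathbb{R}/\mathbb{Z}\) act on itself by left translations. Then \[(G,\tau_{p})=\mathbb{T}\ \text{is compact, hence Roelcke precompact},\] while every point stabilizer of this action is trivial, so \((G,\tau_{\partial})\) is an infinite discrete group and therefore is not Roelcke precompact.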
### Ultratransitive action **Definition 1.4**.: _The action of the group \(G\) on the set \(X\) is strongly \(n\)-transitive, \(n\geq 1\), if for any families of distinct \(n\) points \(x_{1},\ldots,x_{n}\) and \(y_{1},\ldots,y_{n}\) there exists \(g\in G\) such that \(g(x_{k})=y_{k}\), \(k=1,\ldots,n\)._ _The action \(G\curvearrowright X\), which is strongly \(n\)-transitive for all \(n\in\mathbb{N}\), is called ultratransitive._ **Facts 2.** (1) A group \(G\) which acts ultratransitively on \(X\) is a dense subgroup of \((\mathrm{S}(X),\tau_{\partial})\). Indeed, let the set \(O\) be open in \((\mathrm{S}(X),\tau_{\partial})\) and \(g\in O\). Then there are \(x_{1},\ldots,x_{n}\in X\) such that \(g\mathrm{St}_{x_{1},\ldots,x_{n}}\subset O\), and \(g\mathrm{St}_{x_{1},\ldots,x_{n}}=\{h\in G|h(x_{i})=g(x_{i}),i=1,\ldots,n\}\) is an open neighbourhood of \(g\). Since \(G\) acts ultratransitively on \(X\), there is \(h\in G\) such that \(h(x_{i})=g(x_{i})\), \(i=1,\ldots,n\). Evidently, \(h\in O\). \(\Box\) (2) The subgroup \(\mathrm{S}_{<\omega}(X)\) of the group \(\mathrm{S}(X)\) whose elements have finite supports acts ultratransitively on \(X\). Hence, \(\mathrm{S}_{<\omega}(X)\) is a dense subgroup of \((\mathrm{S}(X),\tau_{\partial})\). (3) The Roelcke precompactness of the group \((\mathrm{S}(X),\tau_{\partial})\) is proved in [9] (see also [28, Example 9.14]); the Roelcke precompactness of the group \((\mathrm{S}_{<\omega}(X),\tau_{\partial})\) is proved in [3]. From Fact 1 (1) it follows that any group \(G\) which acts ultratransitively on \(X\) is Roelcke precompact. Moreover, its Roelcke compactification is isomorphic to the Roelcke compactification of \((\mathrm{S}(X),\tau_{\partial})\). (4) The space \(X\) is ultrahomogeneous if the action of its homeomorphism group \(\mathrm{Hom}(X)\) is ultratransitive. Ultrahomogeneous spaces are locally compact metrizable CDH spaces the complement of every finite subset of which is connected. These include the spheres \(S^{n-1}\) in the Euclidean spaces \(\mathbb{R}^{n}\), \(n\geq 3\), and the Hilbert cube \(Q\) (see, for example, [2]). The group of homeomorphisms of an ultrahomogeneous space with the permutation topology is Roelcke precompact, although the permutation topology is not necessarily admissible. The homeomorphism groups of the spheres \(S^{n}\), \(n\geq 2\), and of the Hilbert cube \(Q\) with the compact-open topology (the smallest admissible group topology) are not Roelcke precompact [29]. Hence, by Lemma 1.1, these groups are not Roelcke precompact in any admissible group topology. ### Homogeneous chains If \(X\) and \(Y\) are chains, then their product \(X\times Y\) with the lexicographic order is denoted by \(X\otimes_{\ell}Y\); their concatenation is denoted by \(X\Diamond Y\) (on the disjoint union of \(X\) and \(Y\) the linear order is as follows: \(x<y\) if \(x\in X\), \(y\in Y\); the restrictions of the linear order to \(X\) and \(Y\) coincide with the original linear orders on \(X\) and \(Y\), respectively). A subset \(Y\) of the chain \(X\) is called an interval if for any \(x,y\in Y\) and any \(z\in X\), \(x\leq z\leq y\Longrightarrow z\in Y\). The intervals are: half-intervals \([a,b)=\{x\in X|a\leq x<b\}\), \((a,b]=\{x\in X|a<x\leq b\}\); open intervals \((a,b)=\{x\in X|a<x<b\}\), \((a,\rightarrow)=\{x\in X|a<x\}\), \((\leftarrow,b)=\{x\in X|x<b\}\), the set \(X\); segments \([a,b]=\{x\in X|a\leq x\leq b\}\) (in particular, points of \(X\)).
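A small worked illustration of this notation (added for orientation; it is not used in the sequel): the lexicographic product is not commutative, since \[\mathbb{Z}\otimes_{\ell}[0,1)\cong\mathbb{R}\quad\text{via}\quad(n,t)\mapsto n+t,\] whereas in \([0,1)\otimes_{\ell}\mathbb{Z}\) every point \((t,n)\) has the immediate successor \((t,n+1)\) and the immediate predecessor \((t,n-1)\), so \([0,1)\otimes_{\ell}\mathbb{Z}\) is a discrete chain.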
**Definition 1.5**.: _Let \(X\) be a chain, \(\mathrm{Aut}(X)\) be the group of order-preserving bijections \((\)automorphisms\()\) of \(X\)._ \(X\) _is called a homogeneous chain if the action \(\mathrm{Aut}(X)\curvearrowright X\) is transitive \((\)i.e., for any \(x,y\in X\) there exists \(f\in\mathrm{Aut}(X)\) such that \(f(x)=y\))._ **Facts 3.** (1) A homogeneous chain is either single-point or infinite. (2) A homogeneous chain \(X\) is either discrete (for any \(x\in X\) there exists \(x<x^{+}\) such that \((x,x^{+})=\emptyset\) [19]) or dense (\(X\) is dense if for any \(x<y\in X\) there exists \(z\in(x,y)\) [19]). **Definition 1.6**.: _The interval \(J\) of a homogeneous chain \(X\) is called regular [25, Definition 5] if_ \[\forall x,y\in J,\forall g\in\operatorname{Aut}(X)(gx\in J)\Longrightarrow( gy\in J).\] _A homogeneous chain \(X\) is called simple [25, Definition 6] if \(X\) has no proper regular intervals \((\)the non-proper regular intervals are the non-empty intervals that are either single-point or all of \(X\)\()\). The group \(\operatorname{Aut}(X)\) in this case is called o-primitive [12]._ _A chain \(X\) is called 2-homogeneous if for any pairs of points \(x<y\) and \(x^{\prime}<y^{\prime}\) there exists \(g\in\operatorname{Aut}(X)\) such that \(g(x)=x^{\prime}\), \(g(y)=y^{\prime}\). The group \(\operatorname{Aut}(X)\) in this case is called o-2-transitive [12]._ _A homogeneous chain \(X\) is called rigid [11] if for any \(x,y\in X\) there exists a unique \(g\in\operatorname{Aut}(X)\) such that \(g(x)=y\). The group \(\operatorname{Aut}(X)\) in this case is called regular or uniquely transitive [12]._ **Facts 4.** (1) A 2-homogeneous chain is dense. (2) A 2-homogeneous chain is ultrahomogeneous [12, Lemma 1.10.1] (see also [26]), i.e. for any families of distinct points \(x_{1}<\ldots<x_{n}\) and \(y_{1}<\ldots<y_{n}\) there exists \(g\in\operatorname{Aut}(X)\) such that \(g(x_{k})=y_{k}\), \(k=1,\ldots,n\), \(n\in\mathbb{N}\). (3) A proper regular interval \(J\) of a homogeneous chain \(X\) is a homogeneous interval, a chain, and \(X=Y\otimes_{\ell}J\), where \(Y\) is a homogeneous chain [25, Theorem 7]. In [25, point 2, point 3.5], [12] and [14] it is established that the following theorem holds: **Theorem 1.7**.: _For a homogeneous chain \(X\), the following conditions are equivalent:_ (1) _the set \(X\) is simple;_ (2) _the set \(X\)_ (i) _is 2-homogeneous \((\)ultrahomogeneous\()\), or_ (ii) _is rigid and is a subgroup of \(\mathbb{R}\)\((\)isomorphic to \(\operatorname{Aut}(X))\);_ (3) _the group \(\operatorname{Aut}(X)\) is o-primitive;_ (4) _the group \(\operatorname{Aut}(X)\)_ (i) _is o-2-transitive, or_ (ii) _is uniquely transitive and is a subgroup of the abelian group \(\mathbb{R}\)\((\)their full description is given in [11]\()\). _ ### Topologization of the automorphism group of a homogeneous chain A generalized ordered space, or GO-space, is a chain with a topology stronger than the linear order topology whose base is formed by intervals [20].
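A standard illustration (it reappears as Example 3.7 (4) below): the Sorgenfrey line \[\mathbb{S}=\mathbb{R}\ \text{with the base}\ \{[a,b)\mid a<b\in\mathbb{R}\}\] (the "arrow" topology \(\tau_{\rightarrow}\) of Lemma 1.8 below) is a GO-space on the chain \(\mathbb{R}\) whose topology is strictly stronger than the linear order topology.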
**Lemma 1.8**.: _In the following topologies, a homogeneous infinite chain \(X\) is a GO-space and each element of \(\operatorname{Aut}(X)\) is a homeomorphism:_ (1) _the topology of linear order \(\tau\)\((\)the topology base consists of the intervals \((x,y)\), \(x<y\in X\); in this case \(X\) is a linearly ordered space \((\)LOTS\()\)\()\);_ (2) _the "arrow" topology \(\tau_{\rightarrow}\) or \(\tau_{\leftarrow}\)\((\)the topology base consists of the half-intervals \([x,y)\)\((\)right arrow\()\) or the half-intervals \((x,y]\)\((\)left arrow\()\), \(x<y\in X\)\()\);_ (3) _the discrete topology \(\tau_{d}\)._ \[\tau\leq\left\{\begin{array}{l}\tau_{\rightarrow}\\ \tau_{\leftarrow}\end{array}\right\}\leq\tau_{d}.\] \(\tau=\left\{\begin{array}{l}\tau_{\rightarrow}\\ \tau_{\leftarrow}\end{array}\right\}=\tau_{d}\Longleftrightarrow X\) _is a discrete homogeneous chain._ Proof.: Let \(\sigma\) be an arbitrary topology on \(X\) in which \(X\) is a GO-space. If \(X\) is discrete, then \(\tau=\left\{\begin{array}{l}\tau_{\rightarrow}\\ \tau_{\leftarrow}\end{array}\right\}=\tau_{d}\), since for any \(x\in X\) there are \(x^{-}<x<x^{+}\) such that the intervals \((x^{-},x)\), \((x,x^{+})\) are empty sets. If \(X\) is not discrete, then let \(x\) be an arbitrary point of \(X\). The intervals containing \(x\) can be either open intervals \((a,b)\), \(a<x<b\), or half-intervals \([x,b)\), \(x<b\), \((a,x]\), \(a<x\). The cases of the segments \([a,b]\), \([x,b]\), \([a,x]\) are reduced to the previous cases, since the topology on a GO-space is stronger than the topology of linear order. If only open intervals are taken as open neighbourhoods of the point \(x\), then they form a base of the linear order topology at the point \(x\), and, due to the homogeneity of \(X\), the linear order topology is obtained on \(X\). If at least one half-interval \([x,b)\) (respectively \((a,x]\)) is added to the open intervals as an open neighbourhood of the point \(x\), then they form a topology base at the point \(x\) of the form \(\{[x,b)|b>x\}\) (respectively \(\{(a,x]|a<x\}\)), and, due to the homogeneity of \(X\), the "arrow" topology \(\tau_{\rightarrow}\) (respectively \(\tau_{\leftarrow}\)) is obtained on \(X\). If at least one half-interval \([x,b)\) and at least one half-interval \((a,x]\) are added to the open intervals as open neighbourhoods of the point \(x\), then they form a base of the discrete topology at the point \(x\) and, due to the homogeneity of \(X\), the discrete topology is obtained on \(X\). The relations \(\tau\leq\left\{\begin{array}{l}\tau_{\rightarrow}\\ \tau_{\leftarrow}\end{array}\right\}\leq\tau_{d}\) obviously hold. The implication \(\tau=\tau_{d}\Longrightarrow X\) is a discrete homogeneous chain follows from the existence of \(x^{-}<x\) and \(x<x^{+}\) such that \((x^{-},x)\), \((x,x^{+})\) are empty sets. The inverse implication is established at the beginning of the proof. Since under an order-preserving bijection the images of intervals and half-intervals are intervals and half-intervals, respectively, the elements of \(\mathrm{Aut}(X)\) are homeomorphisms. **Corollary 1.9**.: (1) _On a discrete homogeneous chain, the topology in which \(X\) is a homogeneous GO-space is unique:
\(X\) is a discrete LOTS._ (2)_\(\tau_{\rightarrow}<\tau_{d}\Longleftrightarrow\tau_{\leftarrow}<\tau_{d}.\)_ (3)_\(\left\{\begin{array}{l}\tau_{\rightarrow}\\ \tau_{\leftarrow}\end{array}\right\}=\tau_{d}\Longleftrightarrow\tau_{d}=\tau.\)_ (4) _If there is an order-reversing bijection on \(X\), then \((X,\tau_{\rightarrow})\) and \((X,\tau_{\leftarrow})\) are homeomorphic. _ **Proposition 1.10**.: _Let \(X\) be a homogeneous chain._ (1) _On the group \(\mathrm{Aut}(X)\), the topology of pointwise convergence \(\tau_{p}\) is the smallest admissible group topology for the action \(\mathrm{Aut}(X)\curvearrowright(X,\tau)\). The topology \(\tau_{\partial}\) is an admissible group topology and \(\tau_{\partial}\geq\tau_{p}\). If \(X\) is discrete, then \(\tau_{\partial}=\tau_{p}\)._ (2) _On the group \(\mathrm{Aut}(X)\), the permutation topology \(\tau_{\partial}\) is the smallest admissible group topology for the actions \(\mathrm{Aut}(X)\curvearrowright(X,\tau_{\rightarrow})\), \(\mathrm{Aut}(X)\curvearrowright(X,\tau_{\leftarrow})\), \(\mathrm{Aut}(X)\curvearrowright(X,\tau_{d})\)._ Proof.: \((\operatorname{Aut}(X),\tau_{\partial})\) is a topological group [28]. Statement (1) of the proposition is proved in [26] and [30]. (2) It is easy to verify that the permutation topology \(\tau_{\partial}\) is an admissible group topology for the actions \(\operatorname{Aut}(X)\curvearrowright(X,\tau_{\to})\), \(\operatorname{Aut}(X)\curvearrowright(X,\tau_{\leftarrow})\), \(\operatorname{Aut}(X)\curvearrowright(X,\tau_{d})\). In the latter case it is obviously the smallest. Let \(\sigma\) be an admissible group topology, for example, on the group \(\operatorname{Aut}(X,\tau_{\to})\). For any point \(x\) and its neighbourhood \([x,y)\), \(x<y\), there are a neighbourhood \(O\) of the unit of the group and a neighbourhood of the point \(x\) of the form \([x,x^{\prime})\), \(x<x^{\prime}\), such that \(O[x,x^{\prime})\subset[x,y)\). Then for any homeomorphism \(g\) from the neighbourhood \(O\cap O^{-1}\) of the unit of the group we have \(g(x)\in[x,y)\) and \(g^{-1}(x)\in[x,y)\). If, for example, \(g(x)=z>x\), then \(g^{-1}(x)<g^{-1}(z)=x\). Hence, \(g(x)=x\) for any \(g\in O\cap O^{-1}\), and \(O\cap O^{-1}\subset\operatorname{St}_{x}\). Consequently, \(\sigma\geq\tau_{\partial}\), and \(\tau_{\partial}\) is the smallest admissible group topology on \(\operatorname{Aut}(X)\) for the action \(\operatorname{Aut}(X)\curvearrowright(X,\tau_{\to})\). **Remark 1.11**.: \((\operatorname{Aut}(X),\tau_{\partial})\) _is a subgroup of \((\operatorname{S}(X),\tau_{\partial})\)._ ### \(G\)-spaces All the necessary information can be found in [21]. Under a continuous action \(G\curvearrowright X\) of the topological group \(G\) on the space \(X\), the triple \((G,X,\curvearrowright)\) is called a \(G\)-space (abbreviated: \(X\) is a \(G\)-space). The uniformity \(\mathcal{U}\) on \(X\) is called an equiuniformity if the action \(G\curvearrowright X\) is saturated (i.e. every homeomorphism from \(G\) is uniformly continuous) and bounded (i.e. for any \(u\in\mathcal{U}\) there are \(O\in N_{G}(e)\) and \(v\in\mathcal{U}\) such that the covering \(\{OV|V\in v\}\) refines \(u\)). In this case \(X\) is a \(G\)-Tikhonoff space; the completion of \(X\) with respect to a totally bounded equiuniformity is a \(G\)-compactification, or equivariant compactification, of \(X\), and there exists \(\beta_{G}X\) -- the maximal \(G\)-compactification of \(X\), which corresponds to the maximal totally bounded equiuniformity.
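A degenerate but orienting special case of these notions (a restatement; cf. Remark 2.3 below): if \(X\) is a compact \(G\)-space, then its unique uniformity is totally bounded and is an equiuniformity, so \[\beta_{G}X=X.\]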
### Maximal \(G\)-compactifications of spaces of ultrahomogeneous chains Let \(X\) be an ultrahomogeneous chain and \(\operatorname{Aut}(X)\) its automorphism group. The base of the maximal equiuniformity \(\mathcal{U}_{X}^{p}\) for the action \((\operatorname{Aut}(X),\tau_{p})\curvearrowright(X,\tau)\) is formed by the finite coverings \[\{Ox:x\in X\}=(\leftarrow,y_{2})\cup(y_{1},y_{2})\cup(y_{1},y_{4})\cup\ldots \cup(y_{2n-3},y_{2n})\cup(y_{2n-1},y_{2n})\cup(y_{2n-1},\rightarrow),\] for \(y_{1}<x_{1}<y_{2}<\ldots<y_{2n-1}<x_{n}<y_{2n}\), \(n\in\mathbb{N}\), \(O=[x_{1},(y_{1},y_{2})]\cap\ldots\cap[x_{n},(y_{2n-1},y_{2n})]\) (with the corresponding diagonal entourage \(\operatorname{U}_{O}\)); \(\beta_{p}X\) is the maximal \(G\)-compactification of \(X\) (the completion of \(X\) with respect to the uniformity \(\mathcal{U}_{X}^{p}\) [16, Theorem 3]). According to the description in [8], the smallest linearly ordered compactification \(m(X,\tau)\) [15] is obtained by replacing each gap in \(X\) by a point, with the natural continuation of the order. Moreover, \(m(X,\tau)\) is the only linearly ordered compactification to which the action of \(\operatorname{Aut}(X)\) extends continuously (the extension is discontinuous if a gap is replaced by two points); the action \((\operatorname{Aut}(X),\tau_{p})\curvearrowright m(X,\tau)\) is continuous and \(m(X,\tau)\) is connected. The base of the maximal equiuniformity \(\mathcal{U}_{X}^{\partial}\) for the action \((\operatorname{Aut}(X),\tau_{\partial})\curvearrowright(X,\tau_{d})\) is formed by the finite disjoint coverings \[\{\operatorname{St}_{x_{1},\ldots,x_{n}}x:x\in X\}=(\leftarrow,x_{1})\cup\{x_{ 1}\}\cup(x_{1},x_{2})\cup\{x_{2}\}\cup\ldots\cup(x_{n-1},x_{n})\cup\{x_{n}\} \cup(x_{n},\rightarrow),\] \(x_{1}<\ldots<x_{n}\in X\) (with the corresponding diagonal entourage \(\mathrm{U}_{x_{1},\ldots,x_{n}}\)); \(\beta_{\partial}X\) is the maximal \(G\)-compactification of \(X\) (the completion of \(X\) with respect to the uniformity \(\mathcal{U}_{X}^{\partial}\) [16, Theorem 3]). A discrete space \((X,\tau_{d})\) is a GO-space. There is the smallest LOTS \[(X,\tau_{d})\otimes_{\ell}\{-1,0,1\},\] in which \(X\) is a dense subspace [23], and \((X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\) is naturally embedded in any other linearly ordered extension of \(X\) in which \(X\) is dense. In this case, the action \(\left(\mathrm{Aut}(X),\tau_{p}\right)\curvearrowright\left((X,\tau_{d})\otimes _{\ell}\{-1,0,1\}\right)\) is continuous. The smallest linearly ordered compactification of \((X,\tau_{d})\) is \(m\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\big{)}\), which is zero-dimensional. **Lemma 1.12**.: (1)_\(\beta_{p}X=m(X,\tau)\),_ (2)_\(\beta_{\partial}X=m\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\big{)}:=m(X, \tau_{d})\)._ Proof.: (1) In any open covering of the connected linearly ordered compactum \(m(X,\tau)\) one can refine a finite covering \(\Omega\) consisting of intervals \(I_{1}=(\leftarrow,b_{1}),\ldots,I_{k}=(a_{k},\rightarrow)\) such that \(a_{2}<b_{1}<a_{3}<b_{2}<\ldots<a_{k}<b_{k-1}\in X\) (it suffices to choose a minimal system of intervals in the covering, in the sense that no proper subsystem of it is a covering). Then in \(\Omega\wedge X\), the trace of \(\Omega\) on \(X\), the covering \[(\leftarrow,b_{1})\cup(a_{2},b_{1})\cup(a_{2},b_{2})\cup\ldots\cup(a_{k-1},b_ {k-1})\cup(a_{k},b_{k-1})\cup(a_{k},\rightarrow),\] which belongs to \(\mathcal{U}_{X}^{p}\), is refined, if an arbitrary point is selected in each of the intervals \((a_{j},b_{j-1})\), \(j=2,\ldots,k\).
Since any covering from \(\mathcal{U}_{X}^{p}\) extends to a covering of \(m(X,\tau)\), we get \(\beta_{p}X=m(X,\tau)\). (2) As in case (1), in any open covering of the zero-dimensional compactum \(m\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\big{)}\) one can refine a finite covering by open-closed intervals. By introducing an order on the intervals (by their left ends) and subtracting sequentially from the \(j\)-th interval the union of the previous ones, we obtain a disjoint covering by the open-closed intervals \(I_{1}=(\leftarrow,b_{1}),\ldots,I_{k}=(b_{k},\rightarrow)\). Let us correct the latter covering by removing endpoints of the form \((\{x\},0)\) from the intervals (if any) and adding them as single-point open-closed intervals to the corrected interval system, to obtain an open covering \(\Omega\) of the space \(m\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\big{)}\). Then \(\Omega\wedge X\), its trace on \(X\), obviously coincides with a covering from \(\mathcal{U}_{X}^{\partial}\). Since any covering from \(\mathcal{U}_{X}^{\partial}\) extends to a covering of \(m\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\}\big{)}\), we get \(\beta_{\partial}X=m(X,\tau_{d})\). **Remark 1.13**.: _It is possible to prove_ Lemma 1.12_, using the uniqueness of the linearly ordered \(G\)-compactification, by showing that the proximities corresponding to the maximal equiuniformities on \((X,\tau)\) and \(\big{(}(X,\tau_{d})\otimes_{\ell}\{-1,0,1\},\tau\big{)}\) are ordered proximities; see, for example, [22]._ ## 2 The Ellis construction and equiuniformities on a transformation group We assume that the topology of pointwise convergence is an admissible group topology on the group \(G\) of homeomorphisms of the space \(X\) acting effectively. The homeomorphisms of the group \(G\), as continuous maps, can be identified with points of the space \(X^{X}\) (the \(X\)-fold product of copies of \(X\)). Namely, the injective mapping \(\imath:G\to X^{X}\), \(\imath(g)=(g(x))_{x\in X}\), is defined. The topology on \(X^{X}\) is initial with respect to the projections onto the factors, and its restriction to \(\imath(G)\) is the topology of pointwise convergence on \(G\). On \(X^{X}\), as on the product of copies of \(X\) with the action \(\alpha:G\times X\to X\) of the group \(G\), the continuous action of the group \(G\) is defined: \(\alpha_{\Delta}:G\times X^{X}\to X^{X},\alpha_{\Delta}(g,(t_{x})_{x\in X})=(gt_ {x})_{x\in X}\). In this case, \(\imath\) is an equivariant embedding of \(G\) into \(X^{X}\) (the group \(G\) is considered as a \(G\)-space with the action on itself by multiplication on the left). Suppose that \(\mathcal{U}_{X}\) is an equiuniformity on the space \(X\), \(\mathcal{U}_{\Pi}\) is the equiuniformity on \(X^{X}\) initial with respect to the projections onto the factors, and \(\mathcal{U}\) is the restriction of the equiuniformity \(\mathcal{U}_{\Pi}\) to \(G=\imath(G)\). The closure of \(\imath(G)\) in the completion of \(X^{X}\) with respect to \(\mathcal{U}_{\Pi}\) is the completion of \(\imath(G)\) with respect to the uniformity \(\mathcal{U}\). The base of the equiuniformity \(\mathcal{U}\) is formed by the coverings \[\{U_{x_{1},\ldots,x_{n};g;\mathrm{U}}=\{h\in G:(g(x_{k}),h(x_{k}))\in\mathrm{U },k=1,\ldots,n\}:g\in G\},\] where \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), and \(\mathrm{U}\) is an entourage of the equiuniformity \(\mathcal{U}_{X}\).
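In the discrete setting of §3 below this base takes a transparent form (a reformulation that is used later in the proof of Theorem 4.1): for the entourage \(\mathrm{U}_{x_{1},\ldots,x_{n}}\) corresponding to the covering \(\{\mathrm{St}_{x_{1},\ldots,x_{n}}x:x\in X\}\), \[(g(x_{k}),h(x_{k}))\in\mathrm{U}_{x_{1},\ldots,x_{n}}\iff h(x_{k})\in\mathrm{St}_{x_{1},\ldots,x_{n}}\,g(x_{k}),\qquad k=1,\ldots,n,\] so \(U_{x_{1},\ldots,x_{n};g;\mathrm{U}_{x_{1},\ldots,x_{n}}}\) consists of exactly those \(h\in G\) that move each \(x_{k}\) into the same \(\mathrm{St}_{x_{1},\ldots,x_{n}}\)-orbit as \(g\) does.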
The family \(\mathcal{K}\) of stabilizers \[\mathrm{St}_{x_{1},\ldots,x_{n}}=\bigcap\{\mathrm{St}_{x_{k}}:k=1,\ldots,n\}\] of points \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), is a directed family of small subgroups (\(H^{\prime}\leq H\Longleftrightarrow H^{\prime}\subset H\)) of the group \(G\), which defines the equiuniformity \(R_{\mathcal{K}}\) on \(G\) whose base is formed by the coverings \(\{OgH:g\in G\}\), \(O\in N_{G}(e)\), \(H\in\mathcal{K}\); \(L\wedge R\subset R_{\mathcal{K}}\subset R\) [18, §4]. An explicit form of the coverings from a (possible) base is \[\{O_{x_{1},\ldots,x_{n};\mathrm{U}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\},\] where \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), \(O_{x_{1},\ldots,x_{n};\mathrm{U}}=\{h\in G:(x_{k},h(x_{k}))\in\mathrm{U},(x_{ k},h^{-1}(x_{k}))\in\mathrm{U},k=1,\ldots,n\}\), and \(\mathrm{U}\) is an entourage of the uniformity \(\mathcal{U}_{X}\). Note that \(O_{x_{1},\ldots,x_{n};\mathrm{U}}^{-1}=O_{x_{1},\ldots,x_{n};\mathrm{U}}\). If the uniformity \(R_{\mathcal{K}}\) is totally bounded, then the group \(G\) is Roelcke precompact [18, Corollary 4.5]. The following theorem generalizes Theorem 1 from [30]. **Theorem 2.1**.: (1)_\(\mathcal{U}\subset R_{\mathcal{K}}\)._ (2) _Let there be an entourage \(\mathrm{V}=\mathrm{V}(x_{1},\ldots,x_{n};\mathrm{U})\in\mathcal{U}_{X}\) for any points \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), and for any entourage \(\mathrm{U}\in\mathcal{U}_{X}\) such that the following condition is met: if \((g(x_{k}),h(x_{k}))\in\mathrm{V}\) holds for \(g,h\in G\), \(k=1,\ldots,n\), then there exists \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \(h\in O_{x_{1},\ldots,x_{n};\mathrm{U}}g^{\prime}\). Then \(\mathcal{U}=R_{\mathcal{K}}\)._ Proof.: (1) Any finite subproduct \(X^{n}=X^{\{x_{1},\ldots,x_{n}\}}\) of the product \(X^{X}\) is a \(G\)-space, and the projection \(\pi_{n}:X^{X}\to X^{n}\) is an equivariant mapping. The equiuniformity on \(X^{n}\) is the product of the equiuniformities \(\mathcal{U}_{X}\). Therefore, in any uniform covering \(u\) of any subproduct \(X^{n}\) one can refine a covering of the form \(\{Ox:x\in X^{n}\}\) for some \(O\in N_{G}(e)\). Then in the covering \(\pi_{n}^{-1}u\wedge G=\{\pi_{n}^{-1}U\cap G:U\in u\}\) the covering \(\{\pi_{n}^{-1}(Ox)=O\pi_{n}^{-1}x:x\in X^{n}\}\wedge G\) is refined. For \(x=(g(x_{1}),\ldots,g(x_{n}))\in X^{n}\) we have \(O\pi_{n}^{-1}x=Og\mathrm{St}_{x_{1},\ldots,x_{n}}\), therefore \(\{O\pi_{n}^{-1}x:x\in X^{n}\}\wedge G=\{Og\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\} \in R_{\mathcal{K}}\). It remains to note that every covering from \(\mathcal{U}\) has the form \(\pi_{n}^{-1}u\wedge G\) by virtue of the definition of \(\mathcal{U}=\mathcal{U}_{\Pi}|_{\imath(G)}\) and the initiality of \(\mathcal{U}_{\Pi}\) with respect to the projections \(\pi_{n}\). (2) To prove the equality \(\mathcal{U}=R_{\mathcal{K}}\) it is sufficient, by virtue of (1), to show that in the covering \[\{O_{x_{1},\ldots,x_{n};\mathrm{U}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\},\] where \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), \(O_{x_{1},\ldots,x_{n};\mathrm{U}}=\{h\in G:(x_{k},h(x_{k}))\in\mathrm{U},(x_{k },h^{-1}(x_{k}))\in\mathrm{U},k=1,\ldots,n\}\), \(\mathrm{U}\in\mathcal{U}_{X}\), one can refine a covering \[\{U_{x_{1},\ldots,x_{n};g;\mathrm{V}}=\{h\in G:(g(x_{k}),h(x_{k}))\in\mathrm{V },k=1,\ldots,n\}:g\in G\},\] for some \(\mathrm{V}\in\mathcal{U}_{X}\).
By the condition, for \(x_{1},\ldots,x_{n}\in X\) and \(\mathrm{U}\in\mathcal{U}_{X}\) there exists \(\mathrm{V}\in\mathcal{U}_{X}\) such that if \((g(x_{k}),h(x_{k}))\in\mathrm{V}\) for \(g,h\in G\), \(k=1,\ldots,n\), then there exists \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \(h\in O_{x_{1},\ldots,x_{n};\mathrm{U}}g^{\prime}\). Then for any \(h\in U_{x_{1},\ldots,x_{n};g;\mathrm{V}}\) and any fixed \(g\in G\) \[h\in O_{x_{1},\ldots,x_{n};\mathrm{U}}g\mathrm{St}_{x_{1},\ldots,x_{n}}\] and, therefore, \(U_{x_{1},\ldots,x_{n};g;\mathrm{V}}\subset O_{x_{1},\ldots,x_{n};\mathrm{U}}g \mathrm{St}_{x_{1},\ldots,x_{n}}\), i.e. the covering \(\{U_{x_{1},\ldots,x_{n};g;\mathrm{V}}:g\in G\}\) is refined in the covering \(\{O_{x_{1},\ldots,x_{n};\mathrm{U}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\}\). **Corollary 2.2**.: _If \(\mathcal{U}=R_{\mathcal{K}}\) and \(\mathcal{U}_{X}\) is a totally bounded equiuniformity on \(X\), then the group \(G\) is Roelcke precompact._ Proof.: According to [18, Theorem 4.2], \(L\wedge R\subset R_{\mathcal{K}}\). Therefore, from the total boundedness of \(\mathcal{U}_{X}\) (and thus of \(\mathcal{U}\)) and the equality \(\mathcal{U}=R_{\mathcal{K}}\), it follows that the uniformity \(L\wedge R\) is totally bounded. **Remark 2.3**.: _The above construction of building extensions of transformation groups is used in [30] for the group of homeomorphisms of a compactum \(K\) in the topology of pointwise convergence. In this case, the only uniformity on \(K\) is an equiuniformity. The resulting compactification is the enveloping Ellis semigroup [6]._ ## 3 The Roelcke precompactness of subgroups of \((\mathrm{S}(X),\tau_{\partial})\) Let \(G\) be a subgroup of the permutation group \((\mathrm{S}(X),\tau_{\partial})\) of a discrete infinite space \(X\). The base of neighbourhoods of the unit of \(G\) is formed by the open-closed subgroups \[\mathrm{St}_{x_{1},\ldots,x_{n}},x_{1},\ldots,x_{n}\in X,\] and \(G\) is non-Archimedean. The base of the maximal equiuniformity \(\mathcal{U}_{X}\) is formed by the disjoint coverings \[\{\mathrm{St}_{x_{1},\ldots,x_{n}}x:x\in X\},x_{1},\ldots,x_{n}\in X\] (with the corresponding diagonal entourages \(\mathrm{U}_{x_{1},\ldots,x_{n}}\)), see, for example, [5]; \(\mathcal{U}\) is the equiuniformity on \(G\) constructed in §2. The base of the Roelcke uniformity \(L\wedge R\) on \(G\) is formed by the coverings \[\{\mathrm{St}_{x_{1},\ldots,x_{n}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\}, x_{1},\ldots,x_{n}\in X,\] which form a base of the uniformity \(R_{\mathcal{K}}\) constructed from the family of small subgroups \[\mathcal{K}=\{\mathrm{St}_{x_{1},\ldots,x_{n}}|x_{1},\ldots,x_{n}\in X\}.\] Thus we have **Proposition 3.1**.: \(L\wedge R=R_{\mathcal{K}}\)_.
\(\square\)_ **Definition 3.2**.: _The action of the group \(G\) on the discrete space \(X\) is oligomorphic if the correctly defined action \(G\curvearrowright X^{n}\), \(g(x_{1},\dots,x_{n})=(gx_{1},\dots,gx_{n})\), has a finite number of orbits for every \(n\in\mathbb{N}\)._ **Theorem 3.3**.: _For the action \(G\curvearrowright X\) on the discrete space \(X\), the following conditions are equivalent:_ (1) _the maximal equiuniformity \(\mathcal{U}_{X}\) on \(X\) is totally bounded;_ (2) _the action of the group \(G\) is oligomorphic;_ (3) _the maximal equiuniformity \(\mathcal{U}_{X^{n}}\) on \(X^{n}\) is totally bounded for the action \(G\curvearrowright X^{n}\), \(n\in\mathbb{N}\)._ _If one of the equivalent conditions (1)-(3) is met, then the group \(G\) is Roelcke precompact._ Proof.: (1) \(\Longrightarrow\) (2). We argue by induction. The maximal equiuniformity \(\mathcal{U}_{X}\) is totally bounded by assumption, so the action \(G\curvearrowright X\) has a finite number of orbits. Let the action \(G\curvearrowright X^{n}\), \(n\in\mathbb{N}\), have a finite number of orbits \(Y_{1},\dots,Y_{k}\), with \(y_{1}\in Y_{1},\dots,y_{k}\in Y_{k}\). Due to the total boundedness of \(\mathcal{U}_{X}\), for any \(j=1,\dots,k\) the action of \(\mathrm{St}_{y_{j}}\) on \(X\) has a finite number of orbits \(Z_{j1},\dots,Z_{jm}\), with \(z_{1}\in Z_{j1},\dots,z_{m}\in Z_{jm}\). We show that the orbits of the action \(G\curvearrowright X^{n+1}\) are the sets \(Y_{j}\times Z_{ji}\), \(j=1,\dots,k\), \(i=1,\dots,m\). For a point \((y^{\prime},x^{\prime})\in X^{n}\times X\) let \(y^{\prime}\in Y_{j}\). There exists \(g\in G\) such that \(g(y_{j})=y^{\prime}\) (under the action \(G\curvearrowright X^{n}\)). Let \(g^{-1}(y^{\prime},x^{\prime})=(y_{j},x)\) (under the action \(G\curvearrowright X^{n+1}\)). There exist \(z_{i}\in X\) and \(h\in\mathrm{St}_{y_{j}}\) such that \(h(z_{i})=x\). Then \(gh(y_{j},z_{i})=(y^{\prime},x^{\prime})\). (2) \(\Longrightarrow\) (1). Let the action of the group \(G\) be oligomorphic, and let \(x=(x_{1},\dots,x_{n})\in X^{n}\) be a point. If the orbits of the action \(G\curvearrowright X^{n+1}\) are the sets \(Y_{1},\dots,Y_{k}\), then the orbits of the action of the group \(\mathrm{St}_{x}\) on \(X=\{x\}\times X\) are the sets \(\{x\}\times X\cap Y_{j}\), \(j=1,\dots,k\). Thus, the uniformity \(\mathcal{U}_{X}\) is totally bounded. (2) \(\Longrightarrow\) (3). Since the base of neighbourhoods of the unit of the group \(G\) is formed by the stabilizer subgroups, and the point stabilizers for the actions \(G\curvearrowright X^{n}\), \(n\in\mathbb{N}\), allow a natural identification (\(\mathrm{St}_{x_{1},\dots,x_{m}}\), \(x_{1},\dots,x_{m}\in X^{n}\), coincides with the stabilizer of the coordinates of the points \(x_{1},\dots,x_{m}\) under the action \(G\curvearrowright X\)), then, first, the permutation topologies on \(G\) defined by these actions coincide and, second, the maximal equiuniformity under the action of the subgroup \(\mathrm{St}_{y_{1},\dots,y_{m}}\) on \(X\) is totally bounded. By the equivalence of conditions (1) and (2), the action of the subgroup \(\mathrm{St}_{x_{1},\dots,x_{m}}\) is oligomorphic and its action on \(X^{n}\) has a finite number of orbits. Thus, the maximal equiuniformity \(\mathcal{U}_{X^{n}}\) on \(X^{n}\) under the action \(G\curvearrowright X^{n}\) is totally bounded. The implication (3) \(\Longrightarrow\) (1) is obvious.
To prove the last statement of the theorem it is sufficient to show that any covering of \(G\) of the form \[\{\mathrm{St}_{x_{1},\dots,x_{n}}g\mathrm{St}_{x_{1},\dots,x_{n}}|g\in G\},x_ {1},\dots,x_{n}\in X,\] has a finite subcovering. Denote \(x=(x_{1},\dots,x_{n})\in X^{n}\), \(\mathrm{St}_{x}=\mathrm{St}_{x_{1},\dots,x_{n}}\). It follows from condition (3) that for the subset \(\{x\}\times Gx\) of the fiber \(\{x\}\times X^{n}\) of the product \(X^{n}\times X^{n}\) there exist \(g_{1},\dots,g_{m}\in G\) such that \[\{x\}\times Gx\subset\{x\}\times\big{(}\bigcup\{\mathrm{St}_{x}g_{i}x|i=1, \dots,m\}\big{)}.\] Then for any \(h\in G\) we have \[(x,hx)=g(x,g_{i}x)=(gx,gg_{i}x)\] for some \(g\in\mathrm{St}_{x}\) and \(i\in\{1,\ldots,m\}\). Hence \(g_{i}^{-1}g^{-1}h\in\mathrm{St}_{x}\), and \(h\in\mathrm{St}_{x}g_{i}\mathrm{St}_{x}\). **Theorem 3.4**.: _For the action \(G\curvearrowright X\) on the discrete space \(X\), the following conditions are equivalent:_ \((1)\) _the group \(G\) is Roelcke precompact;_ \((2)\) _the maximal equiuniformity \(\mathcal{U}_{Y}\) on \(Y\) is totally bounded for the action \(G\curvearrowright Y\) on any invariant subset \(Y\subset X\) having a finite number of orbits;_ \((3)\) _the restriction of the action of the group \(G\) to any invariant subset \(Y\subset X\) having a finite number of orbits is oligomorphic;_ \((4)\) _the maximal equiuniformity \(\mathcal{U}_{Y^{n}}\) on \(Y^{n}\) is totally bounded for the action \(G\curvearrowright Y^{n}\), \(n\in\mathbb{N}\), where \(Y\subset X\) is an invariant subset having a finite number of orbits._ Proof.: \((1)\Longrightarrow(2)\). Take one point from each orbit of the action \(G\curvearrowright Y\): \(y_{1},\ldots,y_{k}\). For an arbitrary neighbourhood \(O\in N_{G}(e)\) there is a neighbourhood of the form \(V=\mathrm{St}_{x_{1},\ldots,x_{m},y_{1},\ldots,y_{k}}\subset O\). Due to the Roelcke precompactness of the group \(G\), there exists a finite set \(g_{1},\ldots,g_{n}\in G\) such that \(\bigcup\{Vg_{j}V|j=1,\ldots,n\}=G\). Then for any \(i=1,\ldots,k\) \[\bigcup\{Vg_{j}Vy_{i}|j=1,\ldots,n\}=Gy_{i}\text{ and }\bigcup\{Og_{j}y_{i}|j=1,\ldots,n,i=1,\ldots,k\}=Y.\] That is, any covering \(\{Oy|y\in Y\}\) has a finite subcovering, and the uniformity \(\mathcal{U}_{Y}\) is totally bounded. The equivalence of conditions \((2)\), \((3)\) and \((4)\) is proved in Theorem 3.3. \((3)\Longrightarrow(1)\). Let \(Y=\bigcup\{Gx_{j}|j=1,\ldots,k\}\subset X\) be an invariant subset having a finite number of orbits for the action \(G\curvearrowright Y\). The kernel \(N\) of the action \(G\curvearrowright Y\) is a closed normal subgroup of \(G\). The effective action of the factor group \(G/N\curvearrowright Y\) is correctly defined. If we consider the group \(H_{Y}=G/N\) in the permutation topology, then the action \(H_{Y}\curvearrowright Y\) and the natural homomorphism \(\varphi_{Y}:G\to H_{Y}\) are continuous, and the maximal equiuniformities on \(Y\) generated by the actions of \(G\) and \(H_{Y}\), respectively, coincide. The topological group \(H_{Y}\) is Roelcke precompact by Theorem 3.3. The family of invariant subsets of \(X\) having a finite number of orbits forms an inclusion-directed set. If \(Y^{\prime}\subset Y\), then the homomorphism \(\varphi_{YY^{\prime}}:H_{Y}\to H_{Y^{\prime}}\) (factorization by the kernel of the action) is defined, for which \(\varphi_{YY^{\prime}}\circ\varphi_{Y}=\varphi_{Y^{\prime}}\).
Thus, the inverse spectrum \(\{H_{Y},\varphi_{YY^{\prime}},Y\}\) of Roelcke precompact groups and homomorphisms is determined. Its inverse limit is a Roelcke precompact group (Fact 1 (5)). Since the action of \(G\) is effective, the family of surjective homomorphisms \(\varphi_{Y}:G\to H_{Y}\) is a family of maps separating points and closed sets (\(\mathrm{St}_{x_{1},\ldots,x_{n}}\) under the action \(G\curvearrowright X\) contains the kernel of the action of \(G\) on \(Y=\bigcup\{Gx_{i}|i=1,\ldots,n\}\) and the preimage of \(\mathrm{St}_{x_{1},\ldots,x_{n}}\) under the action \(H_{Y}\curvearrowright Y\), \(x_{1},\ldots,x_{n}\in Y\)). Thus, \(G\) is a dense subgroup of the inverse limit of \(\{H_{Y},\varphi_{YY^{\prime}},Y\}\), and it is Roelcke precompact (Fact 1 (1)). **Theorem 3.5**.: _Let \(X\) be a simple chain._ \((1)\) _If \(X\) is rigid, then the group \((\mathrm{Aut}(X),\tau_{p})\)\((\)and hence \((\mathrm{Aut}(X),\tau_{\partial}))\) is not Roelcke precompact._ \((2)\) _If \(X\) is ultrahomogeneous, then the group \((\mathrm{Aut}(X),\tau_{\partial})\)\((\)and hence \((\mathrm{Aut}(X),\tau_{p}))\) is Roelcke precompact._ Proof.: \((1)\) By Theorem 1.7, \(X\) is an (unbounded) subgroup of the abelian group \(\mathbb{R}\). If \(X\) is discrete, then \(X\) is isomorphic to \(\mathbb{Z}\), and the group \(\mathrm{Aut}(X)\) in the topology \(\tau_{\partial}=\tau_{p}\) is isomorphic to \(\mathbb{Z}\) and is not Roelcke precompact. If \(X\) is dense, then \(X\) is a dense unbounded subset of \(\mathbb{R}\), and the group \(\mathrm{Aut}(X)\) in the topology \(\tau_{p}\) is a dense unbounded subgroup of the abelian topological group \(\mathbb{R}\), on which all group uniformities coincide and are not totally bounded. Hence \((\mathrm{Aut}(X),\tau_{p})\) (and, by Corollary 1.2, \((\mathrm{Aut}(X),\tau_{\partial})\)) is not Roelcke precompact. (2) The base of the maximal equiuniformity \(\mathcal{U}_{\partial}\) on the discrete space \((X,\tau_{d})\) under the action \((\mathrm{Aut}(X),\tau_{\partial})\curvearrowright(X,\tau_{d})\) is formed by the finite coverings \[(\leftarrow,x_{1})\cup\{x_{1}\}\cup(x_{1},x_{2})\cup\{x_{2}\}\cup\cdots\cup(x_ {n-1},x_{n})\cup\{x_{n}\}\cup(x_{n},\rightarrow),x_{1}<\ldots<x_{n},n\in \mathbb{N},\] and the uniformity \(\mathcal{U}_{\partial}\) is totally bounded. By Theorem 3.3, the group \((\mathrm{Aut}(X),\tau_{\partial})\) is Roelcke precompact. By Corollary 1.2, the group \((\mathrm{Aut}(X),\tau_{p})\) is Roelcke precompact. From Proposition 1.10 we have **Corollary 3.6**.: _Let \(X\) be a simple chain._ (1)_\(X\) is rigid \(\Longleftrightarrow\) the group \(\mathrm{Aut}(X)\) is not Roelcke precompact in any admissible group topology for its action on the homogeneous GO-spaces corresponding to \(X\)._ (2)_\(X\) is ultrahomogeneous \(\Longleftrightarrow\) the group \((\mathrm{Aut}(X),\tau_{\partial})\) is Roelcke precompact \(\Longleftrightarrow\) the group \((\mathrm{Aut}(X),\tau_{p})\) is Roelcke precompact._ (3) _The group \((\mathrm{Aut}(X),\tau_{\partial})\) is Roelcke precompact iff the group \((\mathrm{Aut}(X),\tau_{p})\) is Roelcke precompact. _ **Example 3.7**.: (1)_\(\mathbb{Z}\) is a rigid set. The group \(\mathrm{Aut}(\mathbb{Z})\) is isomorphic to \(\mathbb{Z}\), is \(o\)-primitive and uniquely transitive, and by Corollary 3.6 it is not Roelcke precompact in the discrete topology \(\tau_{p}=\tau_{\partial}\).
(2)_\(\mathbb{Q},\mathbb{P},\mathbb{R}\), \((0,1)\), \(\mathbb{Z}\otimes_{\ell}\mathbb{Q}\) (isomorphic to \(\mathbb{Q}\)) and \(\mathbb{Z}\otimes_{\ell}\mathbb{P}\) (isomorphic to \(\mathbb{P}\)) are ultrahomogeneous sets. By Corollary 3.6, their automorphism groups are Roelcke precompact in the topologies \(\tau_{\partial}\) and \(\tau_{p}\)._ (3)_\(\mathcal{L}=[0,\omega_{1})\otimes_{\ell}[0,1)\) is the long ray; \(\mathcal{L}_{-}\) is the long ray \(\mathcal{L}\) with the reverse linear order._ It is easy to check that \(L=\mathcal{L}\setminus\{(0,0)\}\subset\mathcal{L}\), \(L_{-}=\mathcal{L}_{-}\setminus\{(0,0)\}\subset\mathcal{L}_{-}\) and \(\tilde{L}=L_{-}\Diamond\{0\}\Diamond L\) are ultrahomogeneous sets and, by Corollary 3.6, their automorphism groups \(\mathrm{Aut}(\star)\) are Roelcke precompact in the topologies \(\tau_{\partial}\) and \(\tau_{p}\) (see [30, §3, item 3.3] for the topology of pointwise convergence). (4) The Sorgenfrey line \(\mathbb{S}\) is an ultrahomogeneous chain that is a GO-space. By Proposition 1.10, the permutation topology \(\tau_{\partial}\) is the smallest admissible group topology on the group \(\mathrm{Aut}(\mathbb{S})\), and the group \((\mathrm{Aut}(\mathbb{S}),\tau_{\partial})\) is Roelcke precompact by Corollary 3.6. (5) For the group \(\mathrm{Aut}(\mathbf{D})\) of the "two arrows" LOTS \(\mathbf{D}=\{(0,1)\}\Diamond\big{(}(0,1)\otimes_{\ell}\{0,1\}\big{)}\Diamond\{(1,0)\}\), the smallest admissible group topology is the topology of pointwise convergence \(\tau_{p}\). Moreover, \(\tau_{p}=\tau_{\partial}\), since \[\mathrm{St}_{(x,i)}=[(x,1),[(x,1),\rightarrow)]\bigcap[(x,0),(\leftarrow,(x,0) ]],x\neq 0,1,i=0,1.\] It is easy to check that the maximal equiuniformity under the action \((\mathrm{Aut}(\mathbf{D}),\tau_{\partial})\curvearrowright(\mathbf{D},\tau_{d})\) is totally bounded. Hence, by Theorem 3.3, the group \((\mathrm{Aut}(\mathbf{D}),\tau_{\partial})\) is Roelcke precompact. A different approach is possible. The LOTS \(D=(0,1)\otimes_{\ell}\{-1,1\}\) is a subspace of \(\mathbf{D}\), and the automorphism groups \(\mathrm{Aut}(D)\) and \(\mathrm{Aut}(\mathbf{D})\) in the permutation topologies are topologically isomorphic. It can be shown that the group \((\mathrm{Aut}(D),\tau_{\partial})\) (\(\tau_{\partial}\) is the smallest admissible group topology) is topologically isomorphic to the group \((\mathrm{Aut}((0,1),\tau_{d}),\tau_{\partial})\). Hence, the group \((\mathrm{Aut}(\mathbf{D}),\tau_{\partial})\) is Roelcke precompact (point (2)). (6) The group \(\mathrm{Aut}(\mathbb{M})\) of the GO-space "Michael line" \(\mathbb{M}\) is naturally identified with the automorphism groups \(\mathrm{Aut}(\mathbb{P})\) and \(\mathrm{Aut}(\mathbb{Q})\) of its invariant subsets: the irrational numbers \(\mathbb{P}\) in the discrete topology and the rational numbers \(\mathbb{Q}\) in the linear order topology, respectively. The smallest admissible group topology on \(\mathrm{Aut}(\mathbb{P})\) is the permutation topology \(\tau_{\partial M}\); the smallest admissible group topology on \(\mathrm{Aut}(\mathbb{Q})\) is the topology of pointwise convergence \(\tau_{pM}\). It is easy to check that \(\tau_{\partial M}\geq\tau_{pM}\). Therefore, the smallest admissible group topology on \(\mathrm{Aut}(\mathbb{M})\) is \(\tau_{\partial M}\), in which \(\mathrm{Aut}(\mathbb{M})\) is Roelcke precompact (point (2)).
(7) The group \(\mathrm{Aut}(\mathbf{K})\) of the lexicographically ordered square \(\mathbf{K}\) is Roelcke precompact in the topologies \(\tau_{\partial}\) and \(\tau_{p}\), since it is easy to check that the maximal equiuniformity under the action \((\mathrm{Aut}(\mathbf{K}),\tau_{\partial})\curvearrowright(\mathbf{K},\tau_{d})\) is totally bounded. A different approach will be presented in §5. **Remark 3.8**.: In Theorem 3.5 and Corollary 3.6, the automorphism group of an ultrahomogeneous simple set can be replaced by any of its subgroups acting ultratransitively. In particular, this is due to the fact that such subgroups are dense subgroups of the automorphism group in the permutation topology. ## 4 The Roelcke compactifications of subgroups of \(\mathrm{S}(X)\) **I.** Let \(G\) be a subgroup of the permutation group \((\mathrm{S}(X),\tau_{\partial})\) of a discrete infinite space \(X\). **Theorem 4.1**.: (1)_\(L\wedge R=\mathcal{U}\), if for any points \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), and any \(g,h\in G\) such that \(h(x_{k})\in\mathrm{St}_{x_{1},\ldots,x_{n}}g(x_{k})\), \(k=1,\ldots,n\), there exist \(f\in\mathrm{St}_{x_{1},\ldots,x_{n}}\) and \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \(h=f\circ g^{\prime}\)._ (2) _Let_ \(L\wedge R=\mathcal{U}\) _and let the uniformity_ \(\mathcal{U}_{X}\) _on_ \(X\) _be totally bounded. Then_ \(G\) _is Roelcke precompact and the Roelcke compactification of_ \(G\) _is the closure of_ \(\imath(G)=G\) _in_ \((\beta_{G}X)^{X}\)_._ (3) _Let_ \(L\wedge R=\mathcal{U}\)_, let the uniformity_ \(\mathcal{U}_{X}\) _on_ \(X\) _be totally bounded, and let the topology of pointwise convergence on_ \(G\) _for the extended continuous action_ \(G\curvearrowright\beta_{G}X\) _coincide with the original permutation topology on_ \(G\)_. Then_ \(G\) _is Roelcke precompact and the Roelcke compactification is the enveloping Ellis semigroup_ \((\)_the closure of_ \(\jmath(G)=G\) _in_ \((\beta_{G}X)^{\beta_{G}X})\)_._ Proof.: Since \(\mathcal{U}\) is an equiuniformity on \(G\) and \(L\wedge R=R_{\mathcal{K}}\) (Proposition 3.1), to prove (1) it suffices, by Theorem 2.1, to show that \(L\wedge R\subset\mathcal{U}\). For any covering \(\{\mathrm{St}_{x_{1},\ldots,x_{n}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\}, x_{1},\ldots,x_{n}\in X\), consider the covering \[\{U_{x_{1},\ldots,x_{n};g;\mathrm{U}_{x_{1},\ldots,x_{n}}}=\{h\in G:(g(x_{k}), h(x_{k}))\in\mathrm{U}_{x_{1},\ldots,x_{n}},k=1,\ldots,n\}:g\in G\},\] where \(\mathrm{U}_{x_{1},\ldots,x_{n}}\) is the diagonal entourage corresponding to the covering \(\{\mathrm{St}_{x_{1},\ldots,x_{n}}x:x\in X\}\) of the space \(X\). If \(h\in U_{x_{1},\ldots,x_{n};g;\mathrm{U}_{x_{1},\ldots,x_{n}}}\), then \((g(x_{k}),h(x_{k}))\in\mathrm{U}_{x_{1},\ldots,x_{n}}\) iff \(h(x_{k})\in\mathrm{St}_{x_{1},\ldots,x_{n}}g(x_{k})\), \(k=1,\ldots,n\). By the condition of the theorem, there exist \(f\in\mathrm{St}_{x_{1},\ldots,x_{n}}\) and \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \(h=f\circ g^{\prime}\Longrightarrow h\in\mathrm{St}_{x_{1},\ldots,x_{n}}g \mathrm{St}_{x_{1},\ldots,x_{n}}\Longrightarrow U_{x_{1},\ldots,x_{n};g;\mathrm{U }}\subset\mathrm{St}_{x_{1},\ldots,x_{n}}g\mathrm{St}_{x_{1},\ldots,x_{n}} \Longrightarrow\) the covering \(\{U_{x_{1},\ldots,x_{n};g;\mathrm{U}}:g\in G\}\) is refined in the covering \(\{\mathrm{St}_{x_{1},\ldots,x_{n}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\}\)\(\Longrightarrow L\wedge R\subset\mathcal{U}\).
Additionally, we note that \[\{\mathrm{St}_{x_{1},\ldots,x_{n}}g\mathrm{St}_{x_{1},\ldots,x_{n}}:g\in G\}=\{U_{x_{1},\ldots,x_{n};g;\mathrm{U}_{x_{1},\ldots,x_{n}}}:g\in G\}.\] (2) If the uniformity \(\mathcal{U}_{X}\) is totally bounded then \(\mathcal{U}\) is totally bounded \(\stackrel{{ L\wedge R=\mathcal{U}}}{{\Longrightarrow}}G\) is Roelcke precompact. Since the completion \(\tilde{X}^{\mathcal{U}_{X}}\) is \(\beta_{G}X\) and \(L\wedge R=\mathcal{U}\), the last statement in (2) follows by equipping the product \(X^{X}\) with a totally bounded uniformity \(\mathcal{U}_{\Pi}\) and using the equality \(\mathcal{U}=\mathcal{U}_{\Pi}|_{\imath(G)}\). (3) \(G\) is Roelcke precompact by (2). Since \(L\wedge R=\mathcal{U}\), \(G\hookrightarrow X^{X}\subset(\beta_{G}X)^{X}\) and \((\beta_{G}X)^{X}\) is the completion of \(X^{X}\) with respect to \(\mathcal{U}\), we have: the restriction of the unique uniformity on \((\beta_{G}X)^{X}\) to \(G\subset(\beta_{G}X)^{X}\) coincides with \(L\wedge R\). The continuous action \(G\curvearrowright X\) extends to the continuous action \(G\curvearrowright\beta_{G}X\), and the topology of pointwise convergence on \(G\) coincides with the original permutation topology on \(G\). Thus, an embedding \(\jmath\) of \(G\) into \((\beta_{G}X)^{\beta_{G}X}\) is defined, and the restriction of the unique uniformity on \((\beta_{G}X)^{\beta_{G}X}\) to \(\jmath(G)\) is weaker than \(L\wedge R\) (Theorem 2.1 (1) and Proposition 3.1). From the commutative diagram, in which \(\mathrm{pr}\) is uniformly continuous, it follows that the restriction of the unique uniformity on \((\beta_{G}X)^{\beta_{G}X}\) to \(\jmath(G)\) coincides with \(L\wedge R\). Hence, the closure of \(\jmath(G)=G\) in \((\beta_{G}X)^{\beta_{G}X}\) is the enveloping Ellis semigroup, which coincides with the Roelcke compactification of \(G\). **Corollary 4.2**.: _If the action of the group \(G\) on the discrete space \(X\) is ultratransitive, then_ (1) _the uniformity \(\mathcal{U}_{X}\) is totally bounded;_ (2) _the completion \(\tilde{X}^{\mathcal{U}_{X}}\) is the one-point Alexandroff compactification \(\alpha X\);_ (3)_\(L\wedge R=\mathcal{U}\);_ (4) _the group \(G\) is Roelcke precompact;_ (5) _the Roelcke compactification of \(G\) is the enveloping Ellis semigroup \((\)the closure of \(\jmath(G)=G\) in \((\alpha X)^{\alpha X})\) and coincides with the Roelcke compactification of \(\mathrm{S}(X)\)._ Proof.: (1) Due to the ultratransitivity of the action, the base of the equiuniformity \(\mathcal{U}_{X}\) is formed by the disjoint coverings \[\{\mathrm{St}_{x_{1},\ldots,x_{n}}x:x\in X\}=\{\{x_{1}\},\ldots,\{x_{n}\},X\setminus\{x_{1},\ldots,x_{n}\}\},\] \(x_{1},\ldots,x_{n}\in X\), of \(n+1\) sets, \(n\in\mathbb{N}\), and \(\mathcal{U}_{X}\) is a totally bounded equiuniformity. (2) As a base for the uniformity on the compactum \(\alpha X\), one can choose the coverings \[\{\{x_{1}\},\ldots,\{x_{n}\},\alpha X\setminus\{x_{1},\ldots,x_{n}\}\},\] \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\). The traces of these coverings on \(X\) form the base of the uniformity \(\mathcal{U}_{X}\). Hence \(\alpha X=\tilde{X}^{\mathcal{U}_{X}}\). To prove (3), we check that the condition of point (1) of Theorem 4.1 is met. For points \(x_{1},\ldots,x_{n}\in X\), \(n\in\mathbb{N}\), and any \(g,h\in G\) such that \(h(x_{k})\in\mathrm{St}_{x_{1},\ldots,x_{n}}g(x_{k})\), \(k=1,\ldots,n\), we consider, without loss of generality, the following. 
If there exists a point \(x_{k}\) such that \(g(x_{k})=x_{m(k)}\), \(m(k)\leq n\), then let \(x_{1},\ldots,x_{p}\), \(p\leq n\), be all points for which \(g(x_{k})=x_{m(k)}\), \(m(k)\leq n\). Then \(h(x_{k})=x_{m(k)}\), \(k=1,\ldots,p\). If \(p\neq n\), then \(g(x_{p+l})\not\in\{x_{1},\ldots,x_{n}\}\), \(h(x_{p+l})\not\in\{x_{1},\ldots,x_{n}\}\), \(l=1,\ldots,n-p\). If \(p=n\), then it follows from the ultratransitivity of the action that there exists \(f\in G\) such that the points \(x_{1},\ldots,x_{n}\) are mapped respectively to the points \(x_{1},\ldots,x_{n}\). If \(0<p<n\), then it follows from the ultratransitivity of the action that there exists \(f\in G\) such that the points \(x_{1},\ldots,x_{n},g(x_{p+1}),\ldots,g(x_{n})\) are mapped respectively to the points \(x_{1},\ldots,x_{n},h(x_{p+1}),\ldots,h(x_{n})\). If \(p=0\), then it follows from the ultratransitivity of the action that there exists \(f\in G\) such that the points \(x_{1},\ldots,x_{n},g(x_{1}),\ldots,g(x_{n})\) are mapped respectively to the points \(x_{1},\ldots,x_{n},h(x_{1}),\ldots,h(x_{n})\). In all cases \[f\in\mathrm{St}_{x_{1},\ldots,x_{n}}.\] At the same time \[h(x_{k})=f(g(x_{k})),\quad k=1,\ldots,n.\] Hence \(h^{-1}fg\in\mathrm{St}_{x_{1},\ldots,x_{n}}\Longrightarrow fg\in h\mathrm{St}_{x_{1},\ldots,x_{n}}\Longrightarrow h\in fg\mathrm{St}_{x_{1},\ldots,x_{n}}\) and there is \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \(h=fg^{\prime}\). By Theorem 2.1, \(L\wedge R=\mathcal{U}\). (1), (3) and point (2) of Theorem 3.5 imply (4). (2), (3), point (3) of Theorem 4.1 (which can be easily verified) and Fact 2 (3) imply (5). **II.** Let \(X\) be an ultrahomogeneous chain and let \(\mathrm{Aut}(X)\) be its automorphism group. \(\mathcal{U}^{p}_{X}\) is the maximal equiuniformity for the action \((\mathrm{Aut}(X),\tau_{p})\curvearrowright(X,\tau)\), \(\beta_{p}X=m(X,\tau)\) is the maximal \(G\)-compactification of \(X\) by Lemma 1.12(1) (the completion of \(X\) with respect to the uniformity \(\mathcal{U}^{p}_{X}\) [16, Theorem 3]), and \(\mathcal{U}^{p}\) is a totally bounded equiuniformity on \(G\subset X^{X}\), constructed in §2. \(\mathcal{U}^{\partial}_{X}\) is the maximal equiuniformity for the action \((\mathrm{Aut}(X),\tau_{\partial})\curvearrowright(X,\tau_{d})\), \(\beta_{\partial}X=m(X,\tau_{d})\) is the maximal \(G\)-compactification of \(X\) by Lemma 1.12(2) (the completion of \(X\) with respect to the uniformity \(\mathcal{U}^{\partial}_{X}\) [16, Theorem 3]), and \(\mathcal{U}^{\partial}\) is a totally bounded equiuniformity on \(G\subset X^{X}\), constructed in §2. \(R_{\mathcal{K}}\) is a uniformity on \(\mathrm{Aut}(X)\), constructed from the family of small subgroups \(\mathcal{K}=\{\mathrm{St}_{x_{1},\ldots,x_{n}}|x_{1},\ldots,x_{n}\in X\}\). **Corollary 4.3**.: _Let \(X\) be an ultrahomogeneous chain. Then_ (1) _for the group \((\mathrm{Aut}(X),\tau_{\partial})\) we have \(L\wedge R=\mathcal{U}^{\partial}\), and the Roelcke compactification of \(G=(\mathrm{Aut}(X),\tau_{\partial})\) is the enveloping Ellis semigroup_ (_the closure of \(\jmath(G)=G\) in \((\beta_{\partial}X)^{\beta_{\partial}X}\)_)_._ (2) _for the group \((\mathrm{Aut}(X),\tau_{p})\) we have \(R_{\mathcal{K}}=\mathcal{U}^{p}\), and the completion of the group \(G=(\mathrm{Aut}(X),\tau_{p})\) with respect to the uniformity \(R_{\mathcal{K}}\) is the enveloping Ellis semigroup_ (_the closure of \(\jmath(G)=G\) in \((\beta_{p}X)^{\beta_{p}X}\)_)_._ Proof.: (1) By Theorem 4.1, we first establish the equality \(L\wedge R=\mathcal{U}^{\partial}\), i.e. 
verify that condition (1) of Theorem 4.1 is met. Let \(x_{1}<\ldots<x_{n}\in X\), \(n\in\mathbb{N}\), and let \(g,h\in G\) be such that \(h(x_{k})\in\mathrm{St}_{x_{1},\ldots,x_{n}}g(x_{k})\), \(k=1,\ldots,n\). The covering \(\{\mathrm{St}_{x_{1},\ldots,x_{n}}x|x\in X\}\) consists of the disjoint elements \[(\leftarrow,x_{1}),\ \{x_{1}\},\ (x_{1},x_{2}),\ \{x_{2}\},\ \ldots,\ (x_{n-1},x_{n}),\ \{x_{n}\},\ (x_{n},\rightarrow),\qquad x_{1}<\ldots<x_{n},\] and for any \(k=1,\ldots,n\) the points \(h(x_{k})\) and \(g(x_{k})\) belong to the same element of this covering. Since \[g(x_{1})<\ldots<g(x_{n})\text{ and }h(x_{1})<\ldots<h(x_{n}),\] arranging in order the points \(x_{1}<\ldots<x_{n}\) and \(g(x_{1})<\ldots<g(x_{n})\), respectively \(x_{1}<\ldots<x_{n}\) and \(h(x_{1})<\ldots<h(x_{n})\), we obtain ordered sets of points \(P_{g}\) and \(P_{h}\) for which, first, \(x_{j}\leq g(x_{k})\leq x_{j+1}\Longleftrightarrow x_{j}\leq h(x_{k})\leq x_{j+1}\), \(k=1,\ldots,n\), and second, \(x_{j}<g(x_{k})<g(x_{m})<x_{j+1}\Longleftrightarrow x_{j}<h(x_{k})<h(x_{m})<x_{j+1}\), \(k<m\), \(k,m=1,\ldots,n\). The ultratransitivity of the action implies the existence of \(f\in\operatorname{Aut}(X)\) such that \(f(P_{g})=P_{h}\). Then \(f\in\operatorname{St}_{x_{1},\ldots,x_{n}}\), \(h(x_{k})=f(g(x_{k}))\) for \(k=1,\ldots,n\), and hence \(h=f\circ g^{\prime}\) for some \(g^{\prime}\in g\operatorname{St}_{x_{1},\ldots,x_{n}}\). To finish the proof we refer to Lemma 1.12 (2) and [30, Theorem 2, Proposition 4] for the fulfillment of the conditions of Theorem 4.1 (3). (2) In order to establish the equality \(R_{\mathcal{K}}=\mathcal{U}^{p}\) it is enough to check that condition (2) of Theorem 2.1 is met. For the diagonal entourage \(U\in\mathcal{U}^{p}_{X}\) let the diagonal entourage \(V\in\mathcal{U}^{p}_{X}\) be such that \(2V\subset U\); \(x_{1}<\ldots<x_{n}\in X\); \(g,h\in\operatorname{Aut}(X)\) such that \((g(x_{k}),h(x_{k}))\in V\), \(k=1,\ldots,n\). Let us construct an automorphism \(g^{\prime}\in g\mathrm{St}_{x_{1},\ldots,x_{n}}\) such that \((h(x),g^{\prime}(x))\in 2V\subset U\) for any point \(x\in X\) (then for any \(x\in X\) we have \((h((g^{\prime})^{-1}(x)),g^{\prime}((g^{\prime})^{-1}(x)))=(h((g^{\prime})^{-1}(x)),x)\in U\), and from the definition of the topology of pointwise convergence, the automorphism \(h(g^{\prime})^{-1}\) belongs to the neighbourhood of the unit of the group \((\operatorname{Aut}(X),\tau_{p})\) determined by the diagonal entourage \(U\), i.e. condition (2) of Theorem 2.1 is met). The construction of the automorphism \(g^{\prime}\) is carried out on the intervals \[(\leftarrow,x_{1}],[x_{1},x_{2}],\ldots,[x_{n-1},x_{n}],[x_{n},\rightarrow).\] (i) On the interval \((\leftarrow,x_{1}]\), in the case \(h(x_{1})\geq g(x_{1})\): if \((\leftarrow,h(x_{1}))\in V\), then \(g^{\prime}(x)=g(x)\) for \(x\in(\leftarrow,x_{1}]\). Otherwise, let \(a_{1}<g(x_{1})\), \((a_{1},g(x_{1}))\in V\). We set \(x_{1}^{-}=h^{-1}(a_{1})\). By the ultratransitivity of the action, there exists \(\varphi_{1}\in\operatorname{Aut}(X)\) such that \(\varphi_{1}(x_{1}^{-})=a_{1}\), \(\varphi_{1}(x_{1})=g(x_{1})\). Let us put \[g^{\prime}(x)=\left\{\begin{array}{lcl}h(x)&\mbox{if}&x\in(\leftarrow,x_{1}^{-}],\\ \varphi_{1}(x)&\mbox{if}&x\in[x_{1}^{-},x_{1}].\end{array}\right.\] In the case \(h(x_{1})\leq g(x_{1})\): if \((\leftarrow,g(x_{1}))\in V\), then \(g^{\prime}(x)=g(x)\) for \(x\in(\leftarrow,x_{1}]\). Otherwise, let \(a_{1}<h(x_{1})\), \((a_{1},h(x_{1}))\in V\). We set \(x_{1}^{-}=h^{-1}(a_{1})\). 
By the ultratransitivity of the action, there exists \(\varphi_{1}\in\operatorname{Aut}(X)\) such that \(\varphi_{1}(x_{1}^{-})=a_{1}\), \(\varphi_{1}(x_{1})=g(x_{1})\). Let us put \[g^{\prime}(x)=\left\{\begin{array}{lcl}h(x)&\mbox{if}&x\in(\leftarrow,x_{1}^{-}],\\ \varphi_{1}(x)&\mbox{if}&x\in[x_{1}^{-},x_{1}].\end{array}\right.\] On the interval \([x_{n},\rightarrow)\) the construction is similar. (ii) On the interval \([x_{k},x_{k+1}]\), \(k=1,\ldots,n-1\). The case \(h(x_{k})\geq g(x_{k})\), \(h(x_{k+1})\geq g(x_{k+1})\): if \((g(x_{k}),h(x_{k+1}))\in V\), then \(g^{\prime}(x)=g(x)\) for \(x\in[x_{k},x_{k+1}]\). Otherwise, let \(b_{k}>h(x_{k})\), \((b_{k},g(x_{k}))\in V\), \(a_{k+1}<g(x_{k+1})\), \((a_{k+1},h(x_{k+1}))\in V\) (\(b_{k}<a_{k+1}\)). We set \(x_{k}^{+}=h^{-1}(b_{k})\), \(x_{k+1}^{-}=h^{-1}(a_{k+1})\). By the ultratransitivity of the action, there exists \(\varphi_{k}^{-}\in\operatorname{Aut}(X)\) such that \(\varphi_{k}^{-}(x_{k})=g(x_{k})\), \(\varphi_{k}^{-}(x_{k}^{+})=b_{k}\), and there exists \(\varphi_{k}^{+}\in\operatorname{Aut}(X)\) such that \(\varphi_{k}^{+}(x_{k+1}^{-})=a_{k+1}\), \(\varphi_{k}^{+}(x_{k+1})=g(x_{k+1})\). Let us put \[g^{\prime}(x)=\left\{\begin{array}{lcl}\varphi_{k}^{-}(x)&\mbox{if}&x\in[x_{k},x_{k}^{+}],\\ h(x)&\mbox{if}&x\in[x_{k}^{+},x_{k+1}^{-}],\\ \varphi_{k}^{+}(x)&\mbox{if}&x\in[x_{k+1}^{-},x_{k+1}].\end{array}\right.\] In the case \(h(x_{k})\leq g(x_{k})\), \(h(x_{k+1})\leq g(x_{k+1})\) the construction is similar. In the case \(h(x_{k})\geq g(x_{k})\), \(h(x_{k+1})\leq g(x_{k+1})\): if \((g(x_{k}),g(x_{k+1}))\in V\), then \(g^{\prime}(x)=g(x)\) for \(x\in[x_{k},x_{k+1}]\). Otherwise, let \(b_{k}>h(x_{k})\), \((b_{k},g(x_{k}))\in V\), \(a_{k+1}<h(x_{k+1})\), \((a_{k+1},g(x_{k+1}))\in V\) (\(b_{k}<a_{k+1}\)). We set \(x_{k}^{+}=h^{-1}(b_{k})\), \(x_{k+1}^{-}=h^{-1}(a_{k+1})\). By the ultratransitivity of the action, there exists \(\varphi_{k}^{-}\in\operatorname{Aut}(X)\) such that \(\varphi_{k}^{-}(x_{k})=g(x_{k})\), \(\varphi_{k}^{-}(x_{k}^{+})=b_{k}\), and there exists \(\varphi_{k}^{+}\in\operatorname{Aut}(X)\) such that \(\varphi_{k}^{+}(x_{k+1}^{-})=a_{k+1}\), \(\varphi_{k}^{+}(x_{k+1})=g(x_{k+1})\). Let us put \[g^{\prime}(x)=\left\{\begin{array}{lll}\varphi_{k}^{-}(x)&\text{if}&x\in[x_{k},x_{k}^{+}],\\ h(x)&\text{if}&x\in[x_{k}^{+},x_{k+1}^{-}],\\ \varphi_{k}^{+}(x)&\text{if}&x\in[x_{k+1}^{-},x_{k+1}].\end{array}\right.\] In the case \(h(x_{k})\leq g(x_{k})\), \(h(x_{k+1})\geq g(x_{k+1})\) the construction is similar. It follows from the construction that \((h(x),g^{\prime}(x))\in 2V\subset U\) for any point \(x\in X\). To finish the proof we refer to Lemma 1.12 (1) and [30, Theorem 2, Proposition 4] for the fulfillment of the conditions for the uniformity \(R_{\mathcal{K}}\) analogous to the conditions for \(L\wedge R\) in Theorem 4.1 (3) (the proof is the same as in Theorem 4.1 (3)). **Remark 4.4**.: (1) Since \(L\wedge R\subset R_{\mathcal{K}}\), under the conditions of Corollary 4.3 (2) the Roelcke compactification of \(G=(\operatorname{Aut}(X),\tau_{p})\) is the image of the completion (compactification) of the group \(G=(\operatorname{Aut}(X),\tau_{p})\) with respect to the uniformity \(R_{\mathcal{K}}\) under the mapping of compactifications. (2) In Corollary 4.3 the automorphism group of an ultrahomogeneous simple set can be replaced by any of its subgroups acting in an ultratransitive way. In particular, this is due to the fact that such subgroups are dense subgroups of the automorphism group in the permutation topology. 
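Each step of the case analysis above invokes the ultratransitivity of \(\operatorname{Aut}(X)\): any increasing \(n\)-tuple can be mapped onto any other increasing \(n\)-tuple by an automorphism, which is what allows the pieces \(h\), \(g\) and \(\varphi_{1},\varphi_{k}^{\pm}\) to be glued into \(g^{\prime}\). For the archetypal ultrahomogeneous chain \(\mathbb{R}\) this gluing can be made completely explicit. The following minimal Python sketch (our own illustration, not part of the source; the function name `pl_automorphism` is ours) realizes such an automorphism as a piecewise-linear map through prescribed anchor points.

```python
# Minimal sketch (not from the paper): ultratransitivity of Aut(R, <).
# Given increasing tuples x_1 < ... < x_n and y_1 < ... < y_n, build a
# piecewise-linear order automorphism f of R with f(x_k) = y_k for all k.

def pl_automorphism(xs, ys):
    """Return a strictly increasing bijection f: R -> R with f(xs[k]) = ys[k]."""
    assert len(xs) == len(ys)
    assert all(a < b for a, b in zip(xs, xs[1:]))
    assert all(a < b for a, b in zip(ys, ys[1:]))

    def f(t):
        if t <= xs[0]:                      # translation on the left ray
            return t + (ys[0] - xs[0])
        if t >= xs[-1]:                     # translation on the right ray
            return t + (ys[-1] - xs[-1])
        # linear interpolation on the segment [x_k, x_{k+1}] containing t
        k = max(i for i in range(len(xs) - 1) if xs[i] <= t)
        slope = (ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k])
        return ys[k] + slope * (t - xs[k])

    return f

f = pl_automorphism([0.0, 1.0, 2.0], [-5.0, 0.0, 10.0])
assert [f(x) for x in (0.0, 1.0, 2.0)] == [-5.0, 0.0, 10.0]
assert f(0.5) < f(1.5)                      # order is preserved on a sample
```

The automorphism \(g^{\prime}\) in the proof of Corollary 4.3 (2) is glued in exactly this fashion: it coincides with \(h\) on the middle pieces and with interpolating automorphisms near the marked points \(x_{1},\ldots,x_{n}\).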
## 5 The Roelcke precompactness of automorphism groups of homogeneous chains different from simple ones
Let \(X\) be a homogeneous chain and let \(J\) be a proper regular interval. From [25, Theorem 7] it follows that \(J\) is an open interval in \((X,\tau)\), the complement of which consists of the disjoint open intervals \(J^{-}=\{t\in X|t<x,\forall x\in J\}\) and \(J^{+}=\{t\in X|t>x,\forall x\in J\}\) (not necessarily homogeneous). By Fact 4(3), \(J\) is a homogeneous chain and the group \(\operatorname{Aut}(J)\) (any automorphism of \(J\) extends to an automorphism of \(X\) coinciding with the identity mapping on \(X\setminus J\)) is called a characteristic group of \(J\) [25, p. (3.1)]. Set \(H=\{g\in\operatorname{Aut}(X)|\forall x\in J,g(x)\in J\}\). **Proposition 5.1**.: (1)_\(H\) is an open subgroup of \((\operatorname{Aut}(X),\tau_{p})\)_ (_and hence of \((\operatorname{Aut}(X),\tau_{\partial})\)_)_._ (2)_\((H,\tau_{p})=(\operatorname{Aut}(J^{-}),\tau_{p})\times(\operatorname{Aut}(J),\tau_{p})\times(\operatorname{Aut}(J^{+}),\tau_{p}),\)_ \((H,\tau_{\partial})=(\operatorname{Aut}(J^{-}),\tau_{\partial})\times(\operatorname{Aut}(J),\tau_{\partial})\times(\operatorname{Aut}(J^{+}),\tau_{\partial}),\)_ (3) _for any \(x\in J\)_ \[\operatorname{St}_{x}(\text{under the action $\operatorname{Aut}(X)\curvearrowright X$})=\operatorname{Aut}(J^{-})\times\operatorname{St}_{x}(\text{under the action $\operatorname{Aut}(J)\curvearrowright J$})\times\operatorname{Aut}(J^{+}).\] Proof.: (1) \(H=[x,J]\), where \(x\in J\). So \(H\) is an open subgroup of \((\operatorname{Aut}(X),\tau_{p})\). (2) \(X=J^{-}\lozenge J\lozenge J^{+}\), and an automorphism of \(X\) belongs to \(H\) iff it is a combination of automorphisms of \(J^{-}\), \(J\) and \(J^{+}\). Thus, \(H=\operatorname{Aut}(J^{-})\times\operatorname{Aut}(J)\times\operatorname{Aut}(J^{+})\). Since the topology of pointwise convergence is an admissible group topology on the automorphism group of a LOTS [26, 30] and \(\tau_{p}\leq\tau_{\partial}\), the topological groups in point (2) are correctly defined. The topological isomorphism of the groups in point (2) is easily verified. (3) follows from (2) and the definition of a regular interval. If \(J\) is a regular interval, then the equivalence relation \[x\sim_{J}y\Longleftrightarrow\forall g\in\operatorname{Aut}(X)\ ((g(x)\in J)\Longrightarrow(g(y)\in J))\] defines a homogeneous chain \(X/J=X/\sim_{J}\) [25, §4]. **Theorem 5.2**.: _Let \(J\) be the proper regular interval of a homogeneous chain \(X\). Then_ (1) \(\operatorname{Aut}(X)=(\operatorname{Aut}(J))^{X/J}\times\operatorname{Aut}(X/J)\)_._ (2)_\((\operatorname{Aut}(X),\tau_{p})=(\operatorname{Aut}(J),\tau_{p})^{X/J}\ltimes(\operatorname{Aut}(X/J),\tau_{\partial})\)._ (3)_\((\operatorname{Aut}(X),\tau_{\partial})=(\operatorname{Aut}(J),\tau_{\partial})^{X/J}\ltimes(\operatorname{Aut}(X/J),\tau_{\partial})\)._ (4)_\((\operatorname{Aut}(X),\tau_{p})/(\operatorname{Aut}(J),\tau_{p})^{X/J}=(\operatorname{Aut}(X),\tau_{\partial})/(\operatorname{Aut}(J),\tau_{\partial})^{X/J}=(\operatorname{Aut}(X/J),\tau_{\partial})\)._ Proof.: (1) and (2) are a restatement of Theorem 8 from [30]. For a point \(x\in X\) we denote by \(J_{x}\in X/J\) its equivalence class with respect to \(\sim_{J}\); \(J_{x}\) and \(J\) are isomorphic chains, and \(p_{x}\colon(\operatorname{Aut}(J))^{X/J}\to\operatorname{Aut}(J_{x})=\operatorname{Aut}(J)\) is the projection of the product onto the factor. 
(3) follows from the coincidence of the subbase neighbourhoods: for \(x\in X\), the neighbourhood \(\operatorname{St}_{x}\) from the subbase (under the action \(\operatorname{Aut}(X)\curvearrowright X\)) coincides with the set \(p_{x}^{-1}\big(\operatorname{St}_{x}\) (under the action \(\operatorname{Aut}(J)\curvearrowright J\))\(\big)\times\operatorname{St}_{J_{x}}\), where \(x\in J_{x}\), from the topology subbase of \((\operatorname{Aut}(J),\tau_{\partial})^{X/J}\ltimes(\operatorname{Aut}(X/J),\tau_{\partial})\), and vice versa. (4) is a corollary of (2) and (3). **Theorem 5.3**.: _Let \(X\) be a homogeneous chain that is not simple. The following conditions are equivalent:_ (1) _the topological group \((\operatorname{Aut}(X),\tau_{p})\)\(\bigl{(}\)respectively \((\operatorname{Aut}(X),\tau_{\partial})\bigr{)}\) is Roelcke precompact,_ (2) _for any proper regular interval \(J\) the topological groups \((\operatorname{Aut}(J),\tau_{p})\)\(\bigl{(}\)respectively \((\operatorname{Aut}(J),\tau_{\partial})\bigr{)}\) and \((\operatorname{Aut}(X/J),\tau_{\partial})\) are Roelcke precompact,_ (3) _there exists a proper regular interval \(J\) such that the topological groups \((\operatorname{Aut}(J),\tau_{p})\)\(\bigl{(}\)respectively \((\operatorname{Aut}(J),\tau_{\partial})\bigr{)}\) and \((\operatorname{Aut}(X/J),\tau_{\partial})\) are Roelcke precompact._ Proof.: (1)\(\Longrightarrow\)(2). Point (4) of Theorem 5.2 and point (3) of Fact 1 imply the Roelcke precompactness of \((\operatorname{Aut}(X/J),\tau_{\partial})\). Points (1) and (2) of Proposition 5.1 and points (2) and (6) of Fact 1 imply the Roelcke precompactness of \((\operatorname{Aut}(J),\tau_{p})\)\(\bigl{(}\)respectively \((\operatorname{Aut}(J),\tau_{\partial})\bigr{)}\). The implication (2)\(\Longrightarrow\)(3) is obvious. (3)\(\Longrightarrow\)(1) follows from point (4) of Fact 1. 
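To see the semidirect-product decomposition of Theorem 5.2 in a concrete case, consider the toy chain \(X=\mathbb{Z}\otimes_{\ell}\mathbb{Z}\) with \(J=\{0\}\otimes_{\ell}\mathbb{Z}\); the classes of \(\sim_{J}\) are the fibers \(\{a\}\otimes_{\ell}\mathbb{Z}\), so \(X/J\cong\mathbb{Z}\) and \(\operatorname{Aut}(\mathbb{Z})\cong\mathbb{Z}\) consists of translations. Here every automorphism of \(X\) is a base translation \(k\) together with a family of fiberwise translations \(s\colon\mathbb{Z}\to\mathbb{Z}\), acting by \(\varphi(a,b)=(a+k,\,b+s(a))\). The sketch below is our own illustration of the resulting composition law, not a construction from the source.

```python
# Toy illustration (ours, not from the paper): Aut(Z (x)_lex Z) as the
# semidirect product Z^Z x| Z.  An automorphism is a pair (k, s): a base
# translation by k with a fiberwise translation s(a) on the fiber {a} x Z.
import random

def act(phi, point):
    k, s = phi
    a, b = point
    return (a + k, b + s(a))

def compose(phi2, phi1):
    """Return phi2 o phi1; this is the multiplication of the semidirect product."""
    k1, s1 = phi1
    k2, s2 = phi2
    return (k1 + k2, lambda a: s1(a) + s2(a + k1))

phi1 = (3, lambda a: a * a)       # shift base by 3; fiber a shifts by a^2
phi2 = (-1, lambda a: 2 * a + 1)  # shift base by -1; fiber a shifts by 2a+1

for _ in range(100):
    p = (random.randint(-50, 50), random.randint(-50, 50))
    assert act(compose(phi2, phi1), p) == act(phi2, act(phi1, p))

# Order preservation in the lexicographic order is immediate: the base map
# a -> a + k is increasing, and each fiber map b -> b + s(a) is increasing.
```

The rule \((k_{2},s_{2})\cdot(k_{1},s_{1})=(k_{1}+k_{2},\,a\mapsto s_{1}(a)+s_{2}(a+k_{1}))\) visible in `compose` is precisely the multiplication of \(\operatorname{Aut}(J)^{X/J}\ltimes\operatorname{Aut}(X/J)\) in this toy case.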
From Corollary 3.6 and Theorem 5.3 we have **Corollary 5.4**.: _For a homogeneous chain \(X\) having a simple infinite interval, the conditions of the Roelcke precompactness of the groups \((\operatorname{Aut}(X),\tau_{p})\) and \((\operatorname{Aut}(X),\tau_{\partial})\) are equivalent._ **Example 5.5**.: (1) _The automorphism group of a lexicographically ordered product of two sets with \(2\)-homogeneous factors is Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\)._ _In particular, automorphism groups of sets of the form \(X_{1}\otimes_{\ell}X_{2}\), where \(X_{1},X_{2}\in\{\mathbb{Q},\mathbb{P},\mathbb{R},L,\tilde{L}\}\), are Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\)._ _The automorphism group of the lexicographically ordered square \(\mathbf{K}\) is Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\), since \(\operatorname{Aut}(\mathbf{K})\) is isomorphic to the group \(\operatorname{Aut}(\mathbb{R})\times\big(\operatorname{Aut}(\mathbb{R})^{\mathbb{R}}\ltimes\operatorname{Aut}(\mathbb{R})\big)\times\operatorname{Aut}(\mathbb{R})\) and \(\operatorname{Aut}(\mathbb{R})^{\mathbb{R}}\ltimes\operatorname{Aut}(\mathbb{R})\) is the automorphism group of \(\mathbb{R}\otimes_{\ell}\mathbb{R}\) [30, Corollary 5]._ (2) _Automorphism groups of lexicographically ordered products of two sets, where the second factor is a rigid set, are not Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\)._ _In particular, the automorphism groups of the set \(X\otimes_{\ell}\mathbb{Z}\) are not Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\)._ _The automorphism group of a homogeneous discrete set [24, Definition 4] is not Roelcke precompact in the topologies \(\tau_{p}\) and \(\tau_{\partial}\)._ **Theorem 5.6**.: _Let there be no simple proper regular intervals in a homogeneous chain \(X\) that is not simple. Then \((\operatorname{Aut}(X),\tau_{p})\) is Roelcke precompact iff the automorphism groups \((\operatorname{Aut}(X/J),\tau_{\partial})\), where \(J\) ranges over the proper regular intervals, are Roelcke precompact._ Proof.: Necessity follows from Theorem 5.3. Sufficiency. For an arbitrary point \(x\in X\), the family \(S\) of all proper regular intervals containing \(x\) is linearly ordered by inclusion [25, p. 4.6]. Their intersection is also a regular interval [25, p. 4.7]. If it were a proper regular interval, then it would be simple (a contradiction with the condition of the theorem). Therefore, the intersection is one-point, and the open-closed proper regular intervals form the base of the topology of linear order at the point \(x\). For any \(J\in S\), the following are defined: the quotient map \(f_{J}:X\to X/J\) and the homomorphism \(\varphi_{J}:(\operatorname{Aut}(X),\tau_{p})\to(\operatorname{Aut}(X/J),\tau_{\partial})\) by point (2) of Theorem 5.2. In this case, the pair of mappings \[(\varphi_{J},f_{J}):((\operatorname{Aut}(X),\tau_{p}),X)\to((\operatorname{Aut}(X/J),\tau_{\partial}),X/J)\] is an equivariant pair of mappings (natural actions are omitted in the notation, see [17, p. 2.1]). Indeed, \(f_{J}(t)=J_{t}\) is a regular interval containing the point \(t\), and \(g(J_{t})=J_{g(t)}\) is a regular interval containing the point \(g(t)\). Hence \[\varphi_{J}(g)(f_{J}(t))=J_{g(t)}=f_{J}(g(t)).\] Note that for \(J^{\prime}>J\), \(f_{J^{\prime}}(J)\) is a regular interval in \(X/J^{\prime}\). 
Indeed, let \(J^{\prime}_{x},J^{\prime}_{y}\in f_{J^{\prime}}(J)\), \(g^{\prime}\in\operatorname{Aut}(X/J^{\prime})\), \(g^{\prime}(J^{\prime}_{x})\in f_{J^{\prime}}(J)\). Let \(g\in\operatorname{Aut}(X)\) be such that \(\varphi_{J^{\prime}}(g)=g^{\prime}\), \(x,y\in J\) and \(g(x)\in J\). Then \(g(y)\in J\) and \(g^{\prime}(J^{\prime}_{y})=f_{J^{\prime}}(g(y))\in f_{J^{\prime}}(J)\). For \(J^{\prime}>J\), an equivariant pair of mappings is defined \[(\varphi_{J^{\prime}J},f_{J^{\prime}J}):((\operatorname{Aut}(X/J^{\prime}),\tau_{\partial}),X/J^{\prime})\to((\operatorname{Aut}(X/J),\tau_{\partial}),X/J),\] such that \(\varphi_{J^{\prime}J}\circ\varphi_{J^{\prime}}=\varphi_{J}\) (since \((\operatorname{Aut}(X/J^{\prime}),\tau_{\partial})=(\operatorname{Aut}(f_{J^{\prime}}(J)),\tau_{\partial})^{(X/J^{\prime})/f_{J^{\prime}}(J)}\ltimes(\operatorname{Aut}((X/J^{\prime})/f_{J^{\prime}}(J)),\tau_{\partial})=(\operatorname{Aut}(f_{J^{\prime}}(J)),\tau_{\partial})^{X/J}\ltimes(\operatorname{Aut}(X/J),\tau_{\partial})\)) and \(f_{J^{\prime}J}\circ f_{J^{\prime}}=f_{J}\) (\(X/J=(X/J^{\prime})/f_{J^{\prime}}(J)\)). This defines the equivariant inverse spectrum \[\big{\{}((\operatorname{Aut}(X/J),\tau_{\partial}),X/J),(\varphi_{J^{\prime}J},f_{J^{\prime}J}),S\big{\}}\] and the group spectrum \(\big{\{}(\mathrm{Aut}(X/J),\tau_{\partial}),\varphi_{J^{\prime}J},S\big{\}}\) [17, p. 2.5], in the inverse limit of which the group \((\mathrm{Aut}(X),\tau_{p})\) is dense, provided the family of homomorphisms \(\varphi_{J},J\in S\), separates points and closed sets. Any neighbourhood of the unit of the group \((\mathrm{Aut}(X),\tau_{p})\) has the form \[O=[x_{1},O_{1}]\cap\ldots\cap[x_{n},O_{n}].\] Let the regular interval \(J\in S\) be such that \(J_{x_{k}}\subset O_{k}\), \(k=1,\ldots,n\). Then \(\varphi_{J}^{-1}(\mathrm{St}_{J_{x_{1}}}\cap\ldots\cap\mathrm{St}_{J_{x_{n}}})\subset O\), and \(\mathrm{St}_{J_{x_{1}}}\cap\ldots\cap\mathrm{St}_{J_{x_{n}}}\) is a neighbourhood of the unit of the group \((\mathrm{Aut}(X/J),\tau_{\partial})\). Thus \((\mathrm{Aut}(X),\tau_{p})\) is a dense subgroup of a Roelcke precompact group by point (5) of Fact 1 and is Roelcke precompact by point (1) of Fact 1. **Question.** Let \(X\) be a homogeneous chain. Are the conditions of the Roelcke precompactness of the group \((\mathrm{Aut}(X),\tau_{\partial})\) and the Roelcke precompactness of the group \((\mathrm{Aut}(X),\tau_{p})\) equivalent?
2308.02762
Fixation times on directed graphs
Computing the rate of evolution in spatially structured populations is difficult. A key quantity is the fixation time of a single mutant with relative reproduction rate $r$ which invades a population of residents. We say that the fixation time is "fast" if it is at most a polynomial function in terms of the population size $N$. Here we study fixation times of advantageous mutants ($r>1$) and neutral mutants ($r=1$) on directed graphs, which are those graphs that have at least some one-way connections. We obtain three main results. First, we prove that for any directed graph the fixation time is fast, provided that $r$ is sufficiently large. Second, we construct an efficient algorithm that gives an upper bound for the fixation time for any graph and any $r\ge 1$. Third, we identify a broad class of directed graphs with fast fixation times for any $r\ge 1$. This class includes previously studied amplifiers of selection, such as Superstars and Metafunnels. We also show that on some graphs the fixation time is not a monotonically declining function of $r$; in particular, neutral fixation can occur faster than fixation for small selective advantages.
David A. Brewster, Martin A. Nowak, Josef Tkadlec
2023-08-05T01:44:03Z
http://arxiv.org/abs/2308.02762v2
# Fixation times on directed graphs ###### Abstract Computing the rate of evolution in spatially structured populations can be difficult. A key quantity that describes evolutionary dynamics is the fixation time of a single mutant with relative reproduction rate \(r\geqslant 1\) who invades a population of residents. We say that the fixation time is "fast" if it is at most polynomial in terms of the population size \(N\). In this work, we study fixation times of advantageous mutants (\(r>1\)) and neutral mutants (\(r=1\)) on _directed_ graphs, which are defined as those graphs that have at least some one-way connections. We obtain three main results. First, we prove that for any directed graph the fixation time is fast, provided that \(r\) is sufficiently large. Second, we devise an efficient algorithm that gives an upper bound for the fixation time for any graph and any \(r\geqslant 1\). Third, we identify a broad class of directed graphs with fast fixation times for any \(r\geqslant 1\). This class includes previously studied amplifiers of selection, such as Superstars and Metafunnels. We also show that on some graphs fixation time is not a monotonically declining function of \(r\); in particular, neutral fixation can occur faster than fixation for small selective advantages. Our results have important algorithmic consequences and enable efficient computational exploration of various properties of directed graphs. ## 1 Introduction Evolution is a stochastic process that acts on populations of reproducing individuals. Two main driving forces of evolutionary dynamics are mutation and selection [1, 2, 3]. Mutation generates new variants and selection prunes them. When new mutations are sufficiently rare, the evolutionary dynamics are characterized by the fate of a single new mutant. The mutant can either take over the whole population or become extinct. Even when the mutation grants its bearer a relative fitness advantage \(r\geqslant 1\), it might still go extinct due to random fluctuations [4]. Two key parameters that quantify the fate of the newly occurring mutation are the fixation probability and the fixation time [5, 6]. Spatial structure has profound effects on both the fixation probability and the fixation time. Those effects are studied within the framework of Evolutionary Graph Theory [7, 8, 9]. There, individuals are represented as nodes of a graph (network). The edges (connections) of the graph represent the migration patterns of offspring. The edges can be one-way or two-way. Graphs can represent the well-mixed population, spatial lattices, island sub-populations, or arbitrary complex spatial structures. Previous research investigated population structures with various effects on fixation probability and time [10, 11, 12, 13, 14]. For example, isothermal graphs have the same fixation probability as the well-mixed population [7], suppressors of selection reduce the fixation probability of advantageous mutants [15], and amplifiers of selection enhance the fixation probability of advantageous mutants [16]. Amplifiers are population structures that could potentially accelerate the evolutionary search [17]. Known classes of amplifiers include families such as Stars [18, 19, 20], Comets [21], Superstars [22, 23], or Megastars [24]. Interestingly, the amplification typically comes at a cost of increasing the fixation time [25], sometimes substantially [26]. 
This is problematic, since when fixation times are extremely long, fixation is not a relevant event anymore, and thus the fixation probability alone is not the most representative quantity [27, 28]. It is therefore paramount to understand how the population structure affects the fixation time and, in particular, which population structures keep the fixation time "reasonably fast." Borrowing standard concepts from computer science [29], in this work we say that fixation time is _fast_ if the fixation time is (at most) polynomial in terms of the population size \(N\). Otherwise we say that the fixation time is _long_, and the corresponding population structure is _slow_. Two important known results are: (i) for all _undirected_ graphs the fixation time is fast [30, 31]; and (ii) if some edges are one-way (if the graph is directed), then the fixation time can be exponentially long [32]. The latter result has an important negative consequence: when the fixation time is exponentially long, we know of no tool to efficiently simulate the process. Therefore, computing or approximating any relevant quantities for realistic population sizes is in practice infeasible. In this work, we present three positive results that concern fixation times on directed graphs (where some or all edges are one-way). First, we prove that for any directed graph the fixation time is fast, provided that the mutant fitness advantage \(r\) is sufficiently large. Second, we devise an efficient algorithm that gives an upper bound on the fixation time, for any graph and any \(r\geqslant 1\). The bound can be used to estimate how long one needs to run the simulations until they terminate. Third, we identify a broad class of directed graphs for which the fixation times are fast for any \(r\geqslant 1\). This class includes many previously studied amplifiers of selection, such as Superstars and Metafunnels. To conclude, we discuss important algorithmic consequences that enable efficient computational exploration of various properties of directed graphs. ## 2 Model In this section we define the notions we use later, such as the population structure (represented by a graph), the evolutionary dynamics (Moran Birth-death process), and the key quantities (fixation probability and fixation time). ### Population structure The spatial population structure is represented by a graph (network) \(G=(V,E)\), where \(V\) is the set of \(N\) nodes (vertices) labeled \(v_{1},\ldots,v_{N}\) and \(E\) is the set of directed one-way edges (links) connecting pairs of different nodes. A two-way connection between nodes \(u\) and \(v\) is represented by two one-way edges \(u\to v\) and \(v\to u\). We assume that the graph is connected. For any node \(v\), the number of edges incoming to \(v\) is called the indegree (denoted \(\deg^{-}(v)\)), and the number of outgoing edges is called the outdegree (denoted \(\deg^{+}(v)\)). When the two numbers coincide, we call them the degree (denoted \(\deg(v)\)). ### Graph classes We say that a graph is _undirected_ if for every edge \(u\to v\) there is also an edge \(v\to u\) in the opposite direction. Otherwise we say that a graph is _directed_. We say that a graph is _regular_ if all nodes have the same degree, that is, there exists a number \(d\) such that \(\deg^{-}(v)=\deg^{+}(v)=d\) for all nodes \(v\in V\). We say that a graph \(G\) is _Eulerian_ (also known as a circulation) if \(\deg^{-}(v)=\deg^{+}(v)\) for each node \(v\). 
Finally, in this work we say that a graph is _balanced_ if an equality \[\frac{1}{\deg^{+}(v)}\cdot\sum_{w\in V\colon v\to w\in E}\frac{1}{\deg^{-}(w)}=\frac{1}{\deg^{-}(v)}\cdot\sum_{u\in V\colon u\to v\in E}\frac{1}{\deg^{+}(u)}\] holds for all nodes \(v\). It is straightforward to check that the class of balanced graphs includes the regular graphs and the undirected graphs, as well as other graph classes such as Superstars or Metafunnels [7], see Appendices. Below we will prove that the fixation times on all balanced graphs are fast for any \(r\geqslant 1\). ### Moran Bd process To model the evolutionary dynamics we consider the standard Moran Birth-death process. Each node of the graph is occupied by a single individual. Initially, all individuals are wild-type residents with normalized fitness equal to 1, except for a single individual who is a mutant with relative fitness advantage \(r\geqslant 1\). We assume that the initial mutant inhabits a node selected uniformly at random. Given a graph \(G\) and a relative fitness advantage \(r\geqslant 1\), the Moran Birth-death process is a discrete-time stochastic process, where in each step: 1. First (Birth), we select an individual with probability proportional to its fitness. Suppose we selected node \(u\). 2. Second (death), we select an out-neighbor of \(u\) uniformly at random. Suppose we selected node \(v\). 3. Finally (update), we replace the individual at node \(v\) by a copy of the individual at node \(u\). At each time-step, the current _configuration_ is the subset of nodes occupied by mutants. Since the graph is connected, with probability \(1\) we eventually obtain a configuration where either all nodes are mutants (we say that mutants _fixed_), or all nodes are residents (we say that mutants _went extinct_). ### Fixation probability and fixation time The key quantities that we consider in this work are fixation probability and fixation time. Given a graph \(G\), a mutant fitness advantage \(r\geqslant 1\), and a current configuration \(X\) of nodes occupied by mutants, the _fixation probability_ \(\rho^{r}(G,X)\) is the probability that starting from \(X\), the mutants eventually fix (as opposed to going extinct). Moreover, we define an auxiliary quantity \(\rho_{\min}\) that turns out to be useful later in our results. Formally, given a graph \(G\) and \(r=1\), for \(i=1,\ldots,N\) denote by \(\rho_{i}(G)=\rho^{r=1}(G,\{v_{i}\})\) the fixation probability of a single neutral mutant who initially appears at node \(v_{i}\). We define \(\rho_{\min}(G)=\min_{i}\rho_{i}(G)\) to be the smallest of those \(N\) fixation probabilities. To measure the duration of the process until fixation (or extinction) occurs, different notions are used. The _absorption time_ \(\mathrm{AT}^{r}(G,X)\) is the expected number of steps of the Moran Birth-death process until the process terminates, regardless of what is the outcome (mutant fixation or extinction). In contrast, the _fixation time_ \(\mathrm{T}^{r}(G,X)\) is the expected number of steps averaged over only those evolutionary trajectories that terminate with mutant fixation. Similarly, one can define the _extinction time_ \(\mathrm{ET}^{r}(G,X)\) averaging over only those trajectories that terminate with the mutant going extinct. By linearity of expectation, the three quantities are related as \(\mathrm{AT}^{r}(G,X)=\rho^{r}(G,X)\cdot\mathrm{T}^{r}(G,X)+(1-\rho^{r}(G,X))\cdot\mathrm{ET}^{r}(G,X)\). Our objective in this work is to provide upper bounds on the absorption time and on the fixation time. 
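For readers who wish to experiment with the process just defined, the following is a minimal simulation sketch (our own code, not the authors' implementation; their code is linked at the end of the paper). It runs Moran Birth-death trajectories on a directed graph given by out-adjacency lists and estimates the fixation probability together with the conditional fixation time.

```python
import random

def moran_bd(out_nbrs, r, start):
    """Run one Moran Bd trajectory; return (mutants fixed?, number of steps).

    out_nbrs[v] lists the out-neighbors of node v, r is the mutant fitness
    advantage, and start is the node of the initial mutant."""
    n = len(out_nbrs)
    mutant = [False] * n
    mutant[start] = True
    m, steps = 1, 0                      # number of mutants, elapsed steps
    while 0 < m < n:
        total = r * m + (n - m)          # total fitness in the population
        # Birth: pick a reproducing node with probability proportional to fitness.
        if random.random() < r * m / total:
            u = random.choice([v for v in range(n) if mutant[v]])
        else:
            u = random.choice([v for v in range(n) if not mutant[v]])
        v = random.choice(out_nbrs[u])   # death: uniform random out-neighbor
        m += mutant[u] - mutant[v]       # update: v becomes a copy of u
        mutant[v] = mutant[u]
        steps += 1
    return m == n, steps

# Directed cycle on 4 nodes, uniformly random initial mutant, r = 2.
out_nbrs = [[1], [2], [3], [0]]
runs = [moran_bd(out_nbrs, r=2.0, start=random.randrange(4))
        for _ in range(20000)]
fix_steps = [steps for fixed, steps in runs if fixed]
print("fixation probability ~", len(fix_steps) / len(runs))
print("fixation time T ~", sum(fix_steps) / len(fix_steps))
```

Averaging the step counts over only the fixating trajectories estimates \(\mathrm{T}^{r}(G,X)\); averaging over all trajectories estimates the absorption time \(\mathrm{AT}^{r}(G,X)\).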
To that end, given a graph \(G\) and a mutant fitness advantage \(r\geqslant 1\), let \(\mathrm{T}^{r}(G)=\max_{X}\mathrm{T}^{r}(G,X)\) be the largest fixation time among all possible starting configurations \(X\). In the limit of strong selection \(r\to\infty\) we also define \(\mathrm{T}^{\infty}(G)=\lim_{r\to\infty}T^{r}(G)\). This regime is called the ecological scenario [33] and corresponds to a new invasive species populating an existing ecosystem. ### Asymptotic notation Recall that a function \(f(N)\) is (at most) _polynomial_ if there exists a positive constant \(c\) such that \(f(N)\leqslant N^{c}\) for all large enough \(N\). Examples of polynomial functions are \(f(N)=\frac{1}{2}N(N+1)\) and \(f_{2}(N)=10\cdot N\log N\), whereas functions such as \(g(N)=1.1^{N}\) and \(g_{2}(N)=2^{\sqrt{N}}\) are not polynomial, since they grow too quickly. In computer science, problems that can be solved using polynomially many elementary computations are considered tractable. In alignment with that, given a population structure \(G\) with \(N\) nodes, we say that fixation time is _fast_ if it is at most polynomial in terms of the population size \(N\). ## 3 Results We present three main types of results. ### Fixation time is fast when selection advantage is strong enough As our first main result, we prove that the fixation time on any directed graph is fast, provided that the mutant fitness advantage \(r\) is large enough. As an illustration, for every \(N=2k\) consider a two-layer graph \(\mathrm{TL}_{N}\) depicted in Fig. 1(a). It consists of two layers of \(k-1\) nodes each, a source node \(v_{1}\) and a final node \(v_{2k}\). The grey segments represent two-way edges. Starting from node \(v_{1}\), mutants eventually propagate through the first layer, through the second layer, and to the final node. But while spreading through the first layer, they are under an increased pressure due to the resident nodes in the second layer. As a consequence, the fixation time crucially depends on \(r\). When \(r=1.1\), Fig. 1(b) shows that the fixation time scales exponentially in \(N\) (that is, it is long). In contrast, in the limit of large \(r\) the fixation time scales polynomially in \(N\), that is, it is fast. In general, we can prove the following result about an arbitrary population structure. **Theorem 1**.: _Let \(G_{N}\) be an arbitrary graph on \(N\) nodes. Suppose that \(r\geqslant N^{2}\). Then \(\mathrm{T}^{r}(G_{N})\leqslant N^{4}\)._ Theorem 1 implies that while the fixation time on certain graphs can be long for some values of \(r\), this effect is inevitably transient, and the fixation time becomes fast once \(r\) exceeds a certain threshold. The intuition behind the proof is that if \(r\) is large enough, the size of the mutant subpopulation is always more likely to increase than to decrease, regardless of which nodes are currently occupied by mutants. Thus, the evolutionary process can be mapped to a random walk with a constant forward bias. It is known that such biased random walks absorb polynomially quickly. See Appendices for details. An attractive feature of Theorem 1 is that it applies to all directed graphs. An obvious limitation is that the condition \(r\geqslant N^{2}\) is rather stringent. For graphs with certain structural features, we prove that the constraint on \(r\) can be considerably relaxed. Recall that a directed graph is said to be Eulerian (also called a _circulation_) if each node has the same indegree as outdegree. 
In that case, we refer to the number \(\deg^{-}(v)=\deg^{+}(v)\) simply as a _degree_ of node \(v\). **Theorem 2**.: _Let \(G_{N}\) be an Eulerian graph on \(N\) nodes with smallest degree \(\delta\) and largest degree \(\Delta\). Suppose \(r\geqslant\Delta/\delta\). Then \(\mathrm{T}^{r}(G_{N})\leqslant 4\Delta^{2}N^{8}\)._ We point out two special cases of Theorem 2 (for its proof see Appendices). First, consider any regular graph \(G_{N}\), that is, a graph where all nodes have the same indegree and outdegree equal to \(d\). Then, the graph is Eulerian and we have \(d=\Delta=\delta\), and thus Theorem 2 implies that \(\mathrm{T}^{r}(G_{N})\) is at most a polynomial in \(N\) and \(d\) for any \(r\geqslant 1\). In other words, fixation time on any regular graph is fast for any \(r\geqslant 1\) (we note that this result is known [32]). Second, consider an Eulerian graph that is "close to being regular", in the sense that each node has degree either \(4\) or \(5\). An example of such a graph is a square lattice with several additional long-range connections (and all edges two-way). Then, Theorem 2 implies that the fixation time is fast for every \(r\geqslant 5/4=1.25\). Figure 1: **Fast and long fixation times on a two-layer graph.****a,** For \(N=2k\), a two-layer graph \(\mathrm{TL}_{N}\) consists of a single source node \(v_{1}\), two layers of \(k-1\) nodes each, and a final node \(v_{N}\). The edges within the layers are two-way (grey) and the edges between corresponding nodes of the two layers are one-way (black). Here \(N=12\). As mutants (red nodes) spread from the source node \(v_{1}\) through the bottom layer rightward, they can propagate along only one edge, whereas residents (blue nodes) fight back along multiple edges. **b,** Since the mutant at \(v_{1}\) never goes extinct, eventual mutant fixation is guaranteed, but the timescale crucially depends on the mutant fitness advantage \(r\). When \(r=1.1\), the fixation time \(\mathrm{T}^{r}(\mathrm{TL}_{N},v_{1})\) is exponential in \(N\), whereas when \(r=100\) it is polynomial. (Each data point is an average over \(10^{3}\) simulations.) ### Fixation time for small selective advantage Our transience result shows that for any fixed graph \(G\), the fixation time is fast when \(r\) is sufficiently large. It is natural to hope that perhaps for any fixed graph \(G\) the fixation time is a monotonically decreasing function of \(r\) for \(r\geqslant 1\). However, this is not the case, as shown in Fig. 2. Briefly speaking, the effect responsible for the increase in fixation time when \(r=1+\varepsilon\) is that by increasing the mutant fitness advantage, certain evolutionary trajectories that used to lead to mutant extinction instead lead to mutant fixation. Since those "newly fixating" trajectories might generally take relatively long to fix, the average length of the fixating trajectories can go up. Similarly, the absorption time can also go up as we increase \(r\). Despite the lack of monotonicity, we can show that the fixation time cannot go up too much as we increase \(r\). Recall that \(\rho_{\min}(G_{N})=\min\{\rho^{r=1}(G_{N},\{v\})\mid v\in G_{N}\}\) denotes the fixation probability under neutral drift (\(r=1\)), when the initial mutant appears at a node \(v\) with the smallest fixation probability. Note that for any graph with \(N\) nodes we have \(\rho_{\min}(G_{N})\leqslant 1/N\), but \(\rho_{\min}(G_{N})\) could in general be substantially smaller than \(1/N\). 
Finally, recall that the quantity \(\rho_{\min}(G_{N})\) can be computed efficiently by solving a linear system of \(N\) equations [9, 12, 34]. We can now state our second main result. **Theorem 3**.: _Fix a graph \(G_{N}\) and \(r\geqslant 1\). Then \(\mathrm{T}^{r}(G_{N})\leqslant\left(\frac{N}{\rho_{\min}(G_{N})}\right)^{4}.\)_ We note that Theorem 3 yields an efficiently computable upper bound on \(\mathrm{T}^{r}(G_{N})\). In the next section we elaborate on the computational consequences of this result. In the rest of this section, we give a brief intuition behind the proof of Theorem 3. The proof relies on two ingredients. First, instead of considering the process with mutant fitness advantage \(r\), we consider the neutral process that corresponds to \(r=1\). There, using a martingale argument we show that the fixation time \(\mathrm{T}^{r=1}(G_{N})\) can be bounded from above in terms of the quantity \(\rho_{\min}\). The intuition is that as long as all fixation probabilities are non-negligible, all steps of the stochastic process have substantial magnitude either towards fixation or towards extinction. As a consequence, we are able to argue that either fixation or extinction will occur after not too many steps. All in all, this yields an upper bound on the fixation time \(\mathrm{T}^{r=1}(G_{N})\) of the neutral process in terms of the quantity \(\rho_{\min}\). See Appendices for details. Figure 2: **Fixation time is not monotone in \(r\).****a,** In an (undirected) star graph \(S_{4}\) on 4 nodes, one node (center) is connected to three leaf nodes by two-way edges. When the initial mutant appears at a leaf \(v\), the fixation time \(\mathrm{T}^{r}(S_{4},v)\) increases as \(r\) increases from \(r=1\) to roughly \(r=1.023\). Then it starts to decrease. **c,** Normalized fixation time \(\mathrm{T}^{r}(G,v)/\,\mathrm{T}^{r=1}(G,v)\) as a function of \(r\in[1,1.3]\), for all 83 connected graphs \(G\) with 4 nodes, and all four possible mutant starting nodes \(v\). As \(r\) increases, the fixation time goes up for 182 of the possible \(4\cdot 83=332\) initial conditions. The increase is most pronounced for the so-called _lollipop_ graph \(L_{4}\) and a starting node \(u\). In contrast, for the same lollipop graph and a different starting node \(w\), the fixation time decreases the fastest. As our second ingredient, we translate the bound on \(\mathrm{T}^{r=1}(G_{N})\) into a bound on \(\mathrm{T}^{r}(G_{N})\) for any \(r\geqslant 1\). We note that, as indicated in Fig. 2, for a fixed graph \(G_{N}\) the fixation time is in general not a monotonically decreasing function of \(r\). Nevertheless, the two processes can be coupled in a certain specific way, which allows us to argue that while \(\mathrm{T}^{r}(G_{N})\) can be somewhat larger than \(\mathrm{T}^{r=1}(G_{N})\), it cannot be substantially larger. In this step, we again use the quantity \(\rho_{\min}\). See Appendices for details. ### Fixation time is fast when the graph is balanced As noted above, Theorem 3 provides an upper bound on the fixation time for any graph \(G_{N}\) and any mutant fitness advantage \(r\geqslant 1\), in terms of the quantity \(\rho_{\min}(G_{N})\). By definition, we have \(0\leqslant\rho_{\min}(G_{N})\leqslant 1/N\). When the quantity \(\rho_{\min}(G_{N})\) is exponentially small, the upper bound from Theorem 3 becomes exponentially large, and thus not particularly interesting. 
However, for many graphs the quantity \(\rho_{\min}(G_{N})\) turns out to be much larger, namely inversely proportional to a polynomial in \(N\). In those cases, Theorem 3 implies that the fixation time \(\mathrm{T}^{r}(G_{N})\) is fast for any \(r\geqslant 1\). In particular, as our third main result we prove that this occurs for a broad class of graphs which we call balanced graphs. Formally, we say that a graph \(G_{N}\) is _balanced_ if an equality \[\frac{1}{\deg^{+}(v)}\cdot\sum_{w\colon v\to w\in E}\frac{1}{\deg^{-}(w)}=\frac{1}{\deg^{-}(v)}\cdot\sum_{u\colon u\to v\in E}\frac{1}{\deg^{+}(u)}\] holds for all nodes \(v\). We note that the family of balanced graphs includes many classes of graphs studied in the context of the Moran process in the existing literature, such as the undirected graphs [30], regular (possibly directed) graphs [32], as well as Superstars and Metafunnels [7]. It also includes other classes of directed graphs of general interest, such as cyclic complete multipartite graphs, book graphs, or directed fans [35], see Fig. 3. We have the following theorem. **Theorem 4**.: _Let \(G_{N}\) be a balanced graph. Then:_ 1. \(\rho^{r=1}(G_{N},u)=\frac{1/\deg^{-}(u)}{\sum_{v\in V}1/\deg^{-}(v)}\geqslant 1/N^{2}\) _for any node_ \(u\)_._ 2. \(\mathrm{T}^{r}(G_{N})\leqslant N^{12}\) _for any_ \(r\geqslant 1\)_._ Theorem 4 implies that the fixation time on all balanced graphs is fast for all \(r\geqslant 1\). Similarly, we can prove that it is fast for Megastars [24] and all \(r\geqslant 1\) (see Appendices). The proof of the first part of Theorem 4 relies on the fact that in the neutral case \(r=1\) the fixation probability is additive. This allows us to reduce the size of the linear system that describes the underlying Markov chain from \(2^{N}\) equations to \(N\) equations. For balanced graphs, this system takes a special form that admits an explicit solution. The second part then follows directly from Theorem 3. See Appendices for details. Figure 3: **Types of balanced graphs. The class of balanced graphs includes a, Superstars; b, Cyclic complete multipartite graphs; c, Book graphs; and d, Directed fans. Theorem 4 implies that the fixation time on all those graphs is fast for all \(r\geqslant 1\).** We note that the second part of Theorem 4 has an important computational consequence. Since the fixation time on any balanced graph is polynomially bounded from above for any \(r\geqslant 1\), individual-based simulations of the evolutionary process are guaranteed to terminate quickly with high probability [30]. Any relevant quantities of interest, such as the fixation probability of the mutant with \(r\geqslant 1\), can thus be efficiently approximated to arbitrary precision. In particular, Theorem 3 yields a fully-polynomial randomized approximation scheme (FPRAS) for the fixation probability on balanced graphs with any \(r\geqslant 1\). **Theorem 5**.: _There is a FPRAS for fixation probability on balanced graphs for any \(r\geqslant 1\)._ We note that Theorem 5 applies also to any (not necessarily balanced) graph \(G_{N}\), provided that the quantity \(\rho_{\min}(G_{N})\) is inversely proportional to a polynomial. This is the case e.g. for Megastars. Moreover, when \(\rho_{\min}(G_{N})\) is smaller than that, Theorem 3 still gives an explicit, efficiently computable upper bound on the fixation time that can be used to bound the running time of any individual-based simulations. 
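As a concrete companion to Theorems 3 and 4, the sketch below computes the neutral fixation probabilities \(\rho_{i}(G)\) by solving a linear system of \(N\) equations, checks the balanced condition, and evaluates the bound of Theorem 3. It is our own implementation: the specific harmonicity equations are our reconstruction based on the additivity of neutral fixation probabilities and the cited references [9, 12, 34], not code from the paper.

```python
import numpy as np

def neutral_fixation_probs(out_nbrs):
    """Fixation probabilities rho_i of a single neutral mutant (r = 1).

    Assumes a strongly connected graph with no self-loops.  Uses the
    harmonicity of neutral fixation probabilities: for every node v,
    (sum over u->v of 1/deg+(u)) * rho_v = (1/deg+(v)) * (sum over v->w of rho_w),
    together with the normalization sum_v rho_v = 1."""
    n = len(out_nbrs)
    A = np.zeros((n, n))
    for v in range(n):
        for w in out_nbrs[v]:
            A[w, w] += 1.0 / len(out_nbrs[v])    # replacement pressure on w
            A[v, w] -= 1.0 / len(out_nbrs[v])    # spread from v to w
    A[-1, :] = 1.0                               # replace one (redundant)
    b = np.zeros(n)                              # equation by the
    b[-1] = 1.0                                  # normalization
    return np.linalg.solve(A, b)

def is_balanced(out_nbrs):
    """Check the balanced condition at every node."""
    n = len(out_nbrs)
    in_nbrs = [[] for _ in range(n)]
    for v in range(n):
        for w in out_nbrs[v]:
            in_nbrs[w].append(v)
    for v in range(n):
        lhs = sum(1 / len(in_nbrs[w]) for w in out_nbrs[v]) / len(out_nbrs[v])
        rhs = sum(1 / len(out_nbrs[u]) for u in in_nbrs[v]) / len(in_nbrs[v])
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

# Undirected star S_4 (node 0 is the center); undirected graphs are balanced.
star = [[1, 2, 3], [0], [0], [0]]
assert is_balanced(star)
rho = neutral_fixation_probs(star)
print(rho)  # ~ [0.1, 0.3, 0.3, 0.3]
rho_min, N = rho.min(), len(star)
print("Theorem 3 bound on T^r:", (N / rho_min) ** 4)
```

On the star, Theorem 4(1) gives \(\rho_{u}=(1/\deg^{-}(u))/\sum_{v}1/\deg^{-}(v)\), i.e. \(0.1\) for the center and \(0.3\) for each leaf, which the solver reproduces; the resulting Theorem 3 bound is \((4/0.1)^{4}\).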
### Computational experiments Finally, to further illustrate the scope of our results we run several computational experiments on graphs with small population size \(N\). We use nauty [36] to enumerate the graphs. Since already for \(N=6\) there are more than one million non-isomorphic strongly connected directed graphs, we consider \(N=5\). For each of the \(5048\) graphs with \(N=5\) we compute the fixation time and the fixation probability under uniform initialization by solving the underlying Markov chain using numerical methods (Fig. 4). The slowest graph is the (undirected) Star graph. Note that when \(N\) is large the fixation time on a Star graph is known to be proportional to roughly \(N^{2}\) [25]. Among the graphs that do not contain any two-way edges, the slowest graphs are variants of either a fan graph \(F_{N}\), or a vortex graph \(V_{N}\). Since both the fan graphs and the vortex graphs belong to the class of balanced graphs, the fixation time on those graphs is fast for any population size \(N\) and any mutant fitness advantage \(r\geqslant 1\) due to Theorem 4. The fixation time appears to be proportional to roughly \(N^{2}\) (see Fig. 5). Together, those results suggest that even though directed graphs with exponentially long fixation times do exist, in practice most small directed graphs reach fixation reasonably quickly. Figure 4: Fixation time and fixation probability under uniform initialization for all \(5048\) graphs with \(N=5\) nodes, for **a, \(r=1.1\)** and **b, \(r=2\)**. Each graph is represented as a colored dot. The undirected graphs (with all edges two-way) are labeled in blue. The oriented graphs (with no edges two-way) are labeled in green. All other directed graphs are labeled in orange. The slowest graph is the (undirected) Star graph \(S_{5}\). Among the oriented graphs, the slowest graph is the Fan graph \(F_{5}\). ## 4 Discussion Studying the evolutionary dynamics in spatially structured populations is notoriously hard. An important aspect is to understand the fixation time of a newly occurring mutant. When the fixation time is exponentially long, the process is expensive to simulate, and moreover various commonly studied quantities such as fixation probability are largely irrelevant. It is thus paramount to delineate settings in which the fixation time is "relatively fast", as opposed to being "exceedingly long." In this work, we consider spatial structures in which some (or all) edges are one-way. It is known that on such structures the fixation time can be exceedingly long [32]. Nevertheless, here we present three results which indicate that fixation times on spatial structures with one-way edges are often fast. First, we prove that on any population structure the fixation time is fast, provided that the mutant fitness advantage \(r\) exceeds a certain threshold value \(r^{\star}\) (see Theorem 1). In the special case when the population structure is represented by a regular graph, the threshold value simplifies to \(r^{\star}=1\), and we recover a known result that the fixation time on regular graphs is short for all \(r\geqslant 1\) [32]. As another corollary, for any Eulerian graph whose degrees are sandwiched between \(\delta\) and \(\Delta\) we can set \(r^{\star}=\Delta/\delta\) (see Theorem 2). Second, somewhat counter-intuitively we show that fixation time sometimes goes up as we increase \(r\). That is, on certain spatial structures fixation of a neutral mutant occurs faster than fixation of a mutant with a small selective advantage. 
Nevertheless, we show that the magnitude of this effect can be bounded. In particular, in the spirit of parametrized complexity, given a graph structure \(G_{N}\) we define a certain efficiently computable quantity \(\rho_{\min}(G_{N})\), and we bound the fixation time for any \(r\geqslant 1\) from above using \(\rho_{\min}(G_{N})\) and \(N\) (see Theorem 3). This has important consequences for performing individual-based simulations that typically run the process several times and report an empirical average. The limitation of naive individual-based simulations is that, a priori, it is not clear how much time will be needed until the simulations converge, and deciding to stop the simulations mid-way may bias the empirical average, e.g. by over-representing the evolutionary trajectories that quickly go extinct. Using Theorem 3, this limitation can be circumvented by first efficiently computing an upper bound on the expected fixation time without having to simulate the process even once. Third, we identify a class of population structures whose fixation times are fast for any \(r\geqslant 1\). This class is surprisingly broad. To start with, it includes several families of graphs that had been studied in the context of Evolutionary Graph Theory earlier, such as Superstars and Metafunnels [7], or directed Fans [35]. Furthermore, the class also includes several other graph families of general interest, such as book graphs or cyclic complete multipartite graphs. Similarly, we prove that the fixation times on Megastars are fast for all \(r\geqslant 1\) too. While the focus of this work is to identify regimes and population structures that lead to fast fixation times, population structures with long fixation times may also be desirable, e.g. in conservation ecology to maintain high levels of ecological diversity [37, 38, 39, 40, 41]. Figure 5: **Fixation time on slow oriented graphs.****a,** The Fan graph with \(k\) blades has \(N=2k+1\) nodes and \(3k\) one-way edges (here \(k=5\) which yields \(N=11\)). The Vortex graph with batch size \(k\) has \(N=2k+2\) nodes and \(4k\) edges (here \(k=3\) which yields \(N=8\)). **b-c,** For both the Fan graphs and the Vortex graphs the fixation time scales roughly as \(N^{2}\), both for \(r=1.1\) and \(r=100\). (Each data point is an average over 100 simulations.) We note that throughout this work we considered the standard model of the Moran process with Birth-death updating. A natural direction for future research is to consider related models, such as those with location-dependent fitness [41, 42, 43] or those with death-Birth updating. It is known that in terms of fixation probabilities the Birth-death and the death-Birth processes behave quite differently [44, 45, 18, 10, 46]. However, in terms of fixation time they are qualitatively similar [47]. A decade ago, a foundational work by Díaz et al. showed that fixation time on any undirected population structure is fast [30]. This result enabled extensive computational exploration of undirected graphs that later led to several inspiring research outputs [48, 49, 50, 27, 10]. It is our hope that by enabling computational exploration of population structures with some (or all) one-way connections, this work will serve the same purpose. ## 5 Data and code availability Code for the figures and the computational experiments is available from the Figshare repository: [https://doi.org/10.6084/m9.figshare.23802531](https://doi.org/10.6084/m9.figshare.23802531). 
## 6 Acknowledgements We thank Brendan McKay for helpful instructions on using nauty and Salil Vadhan for insightful discussions about random walks on directed graphs. J.T. was supported by Center for Foundations of Modern Computer Science (Charles Univ. project UNCE/SCI/004).
2310.06996
A study of the MAD accretion state across black hole spins for radiatively inefficient accretion flows
The study of Magnetically Arrested Disks (MAD) has attracted strong interest in recent years, as these disk configurations were found to generate strong jets as observed in many accreting systems. Here, we present the results of 14 general relativistic magnetohydrodynamic (GRMHD) simulations of advection dominated accretion flow in the MAD state across black hole spins, carried out with cuHARM. Our main findings are as follows. (i) The jets transport a significant amount of angular momentum to infinity in the form of Maxwell stresses. For positive, high spin, the rate of angular momentum transport is about 5 times larger than for negative spin. This contribution is nearly absent for a non-rotating black hole. (ii) The mass accretion rate and the MAD parameter, both calculated at the horizon, are not correlated. However, their time derivatives are anti-correlated for every spin. (iii) For zero spin, the contribution of the toroidal component of the magnetic field to the magnetic pressure is negligible, while for fast-spinning black holes it is of the same order as the contribution of the radial magnetic component. For high positive spin, the toroidal component even dominates. (iv) For negative spins, the jets are narrower than their positive-spin counterparts, while their fluctuations are larger. The weak jet from the non-rotating black hole is the widest with the smallest fluctuations. Our results highlight the complex, non-linear connection between the black hole spin and the resulting disk and jet properties in the MAD regime.
G. -Q. Zhang, D. Bégué, A. Pe'er, B. -B. Zhang
2023-10-10T20:27:09Z
http://arxiv.org/abs/2310.06996v1
A study of the MAD accretion state across black hole spins for radiatively inefficient accretion flows. ###### Abstract The study of Magnetically Arrested Disks (MAD) has attracted strong interest in recent years, as these disk configurations were found to generate strong jets as observed in many accreting systems. Here, we present the results of 14 general relativistic magnetohydrodynamic (GRMHD) simulations of advection dominated accretion flow in the MAD state across black hole spins, carried out with cuHARM. Our main findings are as follows. (i) The jets transport a significant amount of angular momentum to infinity in the form of Maxwell stresses. For positive, high spin, the rate of angular momentum transport is about 5 times larger than for negative spin. This contribution is nearly absent for a non-rotating black hole. (ii) The mass accretion rate and the MAD parameter, both calculated at the horizon, are not correlated. However, their time derivatives are anti-correlated for every spin. (iii) For zero spin, the contribution of the toroidal component of the magnetic field to the magnetic pressure is negligible, while for fast-spinning black holes it is of the same order as the contribution of the radial magnetic component. For high positive spin, the toroidal component even dominates. (iv) For negative spins, the jets are narrower than their positive-spin counterparts, while their fluctuations are larger. The weak jet from the non-rotating black hole is the widest with the smallest fluctuations. Our results highlight the complex, non-linear connection between the black hole spin and the resulting disk and jet properties in the MAD regime. Accretion - Magnetohydrodynamics - Black hole physics - Computational methods ## 1 Introduction Accretion disks are ubiquitous in many astronomical objects, such as active galactic nuclei (AGNs) and X-ray binaries. The structure of the accretion disk mainly depends on the accretion rate. At high accretion rates, close to the Eddington limit, the disks are typically geometrically thin and optically thick, and the models of Novikov & Thorne (1973) and Shakura & Sunyaev (1973) are thought to accurately describe their physics. When the accretion rate is much lower than the Eddington accretion rate, the cooling time becomes longer than the accretion time, leading to a radiatively inefficient accretion flow (RIAF), and the disk becomes geometrically thick and optically thin. In this regime, there are several theoretical disk models, such as the Advection-dominated accretion flow (ADAF, Narayan & Yi 1994, 1995; Abramowicz et al. 1995; Yuan & Narayan 2014). The low estimated luminosity of Sagittarius A\({}^{\star}\), as well as that of the black hole at the center of the M87 galaxy, compared to the Eddington luminosity suggests that these black holes accrete in the form of ADAFs (Yuan et al. 2002). It is widely believed that the structure of an accretion flow consists of a turbulent accretion disk, a bipolar jet and a magnetized wind (see e.g. McKinney & Gammie 2004; De Villiers et al. 2003). The details of this structure strongly depend on the configuration and the strength of the magnetic field inside and outside the disk. The magnetic fields in the disk, either advected from large distances or created in situ by the dynamo effect, are amplified by the magnetorotational instability (MRI, Balbus & Hawley 1991a, 1998a). They ultimately drive angular momentum transport, regulate accretion and produce a bipolar, strongly magnetized jet. 
There are two distinct modes of accretion, depending on the magnetic fields surrounding the black hole, which ultimately lead to two different disk configurations. In the Standard And Normal Evolution (hereinafter SANE, Narayan et al. 2012; Sadowski et al. 2013), the magnetic field pressure is not strong and the accretion process is smooth. The accretion disk, although turbulent, extends nearly evenly up to the horizon. In this accretion mode, angular momentum is transported mostly radially inside the disk by MRI (Chatterjee and Narayan, 2022). The second type of disk is termed Magnetically Arrested Disk (MAD, Bisnovatyi-Kogan and Ruzmaikin, 1974; Narayan et al., 2003; Igumenshchev, 2008). In this model, the magnetic flux accumulates near the horizon until it saturates. In fact, the accumulated magnetic field becomes so strong close to the black hole that it can change the dynamics of the in-falling matter, thereby regulating the accretion. It was found in 2D simulations that the accretion can be nearly fully stopped by the magnetic pressure, and then resumes following the reconnection of the magnetic field lines at the equator (see, e.g. Chashkina et al., 2021). This picture, however, only partially holds in 3D simulations: accretion continuously proceeds, via the development of non-axisymmetric instabilities (Spruit et al., 1995; Begelman et al., 2022), with the in-falling gas being shaped into filaments by the strong magnetic field (see e.g. Figure 1 of Xie and Zdziarski (2019), and Wong et al. (2021)). The MAD state has attracted increasing attention in the past few years, following the observation of the closest region to the super-massive black hole in M87 and Sagittarius A\({}^{*}\) by the Event Horizon Telescope (EHT) collaboration (Event Horizon Telescope Collaboration et al., 2019, 2022). By comparing the images taken in the radio band to post-processed GRMHD simulations, it was determined that the accretion should operate in the MAD state for those two black holes (Event Horizon Telescope Collaboration et al., 2021, 2022). Yuan et al. (2022) independently arrived at the same conclusion for M87 by studying rotation measures. Moreover, several studies, including Dexter et al. (2020); Porth et al. (2021); Ripperda et al. (2022); Scepi et al. (2022), proposed a model in which the flares observed in Sagittarius A\({}^{*}\) by the GRAVITY experiment (GRAVITY Collaboration et al., 2018, 2020) have their origin in the magnetic flux eruptions characteristic of the MAD state (Igumenshchev, 2008). It is clear that the MAD state is ubiquitous at low accretion rates, and a better understanding of its properties will shed light on key observations of black holes and their physics. An important characteristic of the MAD state is the saturated magnetic flux at the horizon. Since numerical studies of disk evolution depend on the assumed initial conditions, studying these systems numerically requires an appropriate initial magnetic field configuration. Tchekhovskoy et al. (2011) found such a configuration, which results in the transport of a large amount of magnetic flux to the horizon. Using a normalized black hole spin parameter \(a=0.99\), they found that the disk would be in the MAD state when the MAD parameter reaches \(\Phi_{B}\sim 50\). Here and below, the MAD parameter is defined as the ratio of the magnetic flux to the square root of the mass accretion rate at the horizon. 
Later, Tchekhovskoy and McKinney (2012) performed two simulations with \(a=0.9\) and \(a=-0.9\) and found that the MAD parameter of the retrograde disk is about 30, smaller than that of the prograde disk. Narayan et al. (2022) studied the spin dependence of the MAD parameter, confirmed the results of Tchekhovskoy and McKinney (2012) and also found that the maximal value of the MAD parameter (for a given initial magnetic configuration) is, in fact, reached for a black hole spin \(a\simeq 0.5\). The MAD parameter \(\Phi_{B}\) is an important quantity in quantifying both the disk structure and the emerging jet. It was found that disks in the MAD state around rotating black holes launch strong and powerful jets via the Blandford and Znajek (1977) mechanism (see e.g. Tchekhovskoy et al., 2011). The emerging power of the magnetized jet is proportional to the square of the MAD parameter multiplied by the mass accretion rate (Tchekhovskoy et al., 2011). It is therefore important to constrain the accretion parameters and mechanisms which determine and regulate the value and the duty cycle of the MAD parameter. The strong magnetic field during the MAD state pushes out the gas and stops, or at least regulates, the dynamics of the in-falling matter. Therefore, one may naively expect that an increase of the magnetic flux should result in a decrease of the mass accretion rate, namely, that they are anti-correlated. An anti-correlation is also expected if accretion proceeds by interchange instability, as matter replaces a highly magnetized region closer to the black hole, resulting in a drop in the MAD parameter (Porth et al., 2021). However, Porth et al. (2021) did not find a correlation or an anti-correlation between the mass accretion rate \(\dot{M}\) and \(\Phi_{B}\). Here, we extend their results to all black hole spins: as we show below, none of our simulations show a correlation or an anti-correlation between \(\dot{M}\) and \(\Phi_{B}\). On the other hand, we do find an anti-correlation between their time derivatives. A complete understanding of the interplay between accretion and the saturated magnetic field close to the horizon, and how it impacts the structure and power of the disk, jet and wind, as well as their evolution, is still missing. Narayan et al. (2022) found that prograde disks have wider jets and that the shape itself depends on the spin of the black hole. The structure of the jet and the disk mainly depends on the magnetic field pressure and the gas pressure inside the disks and jets. Begelman et al. (2022), using the simulation in the MAD regime around a rotating BH with \(a=0.9375\) from Dexter et al. (2020a,b), proposed that the disk properties in this regime are due to a dynamically important toroidal field in the close vicinity of the black hole. However, Chatterjee & Narayan (2022), studying accretion in the case of a non-rotating black hole, showed that the radial magnetic field \(b^{r}\) is actually stronger than the toroidal magnetic field at small radii. This apparent inconsistency needs to be resolved, even though its origin, namely the black hole spin, is trivially identified. Here, we find that, close to the BH horizon, the toroidal component of the magnetic field, \(b^{\phi}\), increases with the absolute value of the BH spin \(|a|\): it is dynamically unimportant for \(a=0\), but dominant for \(|a|\to 1\). 
It is generally thought that the strong magnetic field close to the black hole suppresses the development of MRI, as the disk height is smaller than the wavelength of the most unstable vertical mode (McKinney et al., 2012; Marshall et al., 2018; White et al., 2019). This, in turn, affects the rate of angular momentum transport inside the disk, whose details are still poorly understood in the MAD state. Recently, Begelman et al. (2022) argued, by considering non-axisymmetric modes (Das et al., 2018), that MRI is actually not suppressed closest to the black hole. On the other hand, Chatterjee & Narayan (2022) demonstrated that for a non-rotating black hole, angular momentum is transported predominantly by magnetic flux eruptions, characteristic of a disk in the MAD state. It remains to be understood whether this process still dominates the transport of angular momentum over MRI in the case of a rotating black hole. In addition, angular momentum from the disk is transported by the emerging winds above and below it. An additional source of angular momentum from an accretion disk system is the jet, in the form of Maxwell stresses. This contribution to the angular momentum originates mainly from the black hole, rather than the disk, and as such acts to spin down the black hole (Chatterjee & Narayan, 2022). As we show here, this transport can significantly contribute to the amount of angular momentum deposited in the external medium. Since the power of the jet depends on the BH spin, this also constrains the cosmic period over which the system is active. The net rate of angular momentum transport depends strongly on the spin and is the largest for prograde disks. The shape of the jet itself also depends on the BH spin. It is determined by the balance between the internal and external stresses. As explained above, the main source of stresses are the magnetic field components, which, in turn, depend on the BH spin. Furthermore, the BH spin not only affects the time-averaged jet shape, but also the fluctuations around it. As we show here, retrograde disks produce the narrowest jets with the largest fluctuations, while non-spinning BHs produce the widest jets with the smallest fluctuations. Given the complexity of accreting systems, many previous works used general relativistic (possibly radiative and two-temperature) magnetohydrodynamic (GRMHD) simulations to investigate the properties and evolutionary process of MAD disks (Narayan et al., 2012; Sadowski et al., 2013; Porth et al., 2017; White et al., 2020; Porth et al., 2021; Begelman et al., 2022; Narayan et al., 2022; Chatterjee & Narayan, 2022). In the past two decades, with the rapid increase in compute capability, these simulations have become increasingly popular and practical (Gammie et al., 2003; Anninos et al., 2005; Stone et al., 2008; Noble et al., 2006; Porth et al., 2017; Tchekhovskoy, 2019; Liska et al., 2022; Begue et al., 2023). In addition, general-purpose graphics processing units (GPUs) have started to be used in recent years to accelerate fluid simulations, which are particularly well suited to run on GPUs. As a result, several GRMHD codes can now use GPU accelerators, see e.g. Chandra et al. (2017); Liska et al. (2020); Begue et al. (2023); Shankar et al. (2022). _grim_ uses the library ArrayFire to achieve GPU compatibility (Chandra et al., 2017). Liska et al. (2020) developed H-AMR with OpenMP, MPI and CUDA. 
Building on HARMPI, our group developed a new GPU-accelerated GRMHD code, cuHARM, which uses OpenMP and CUDA (Begue et al., 2023). This code is thoroughly optimized to harness the power available in NVIDIA GPUs, reaching more than 50% computational efficiency on NVIDIA A100 cards. For the results presented here, the simulations were run on a single multi-GPU workstation. In this paper, using our GRMHD code cuHARM, we study the role played by the magnetic field in the MAD state for different black hole spins, and its effect on the structure of the disk and the jet. For this, we present several simulations with different initial magnetic field strengths and black hole spins. This paper is organized as follows. In Section 2, we present the setup of our simulations and introduce the numerical diagnostics used in our analysis. We discuss the dynamics of the accretion disk system in our simulations in Section 3. In particular, after specifying the inflow and outflow equilibrium radius and the time evolution of \(\dot{M}\) and \(\phi_{B}\) in sections 3.1 and 3.2, respectively, we study (i) the absence of correlation between the mass accretion rate and the MAD parameter, and introduce the anti-correlation between their time derivatives, in section 3.3; (ii) the shape of the jet as a function of spin in section 3.7, finding that the retrograde disks are narrower than their corresponding prograde disks; and (iii) the component-wise contributions of the magnetic pressure, to underline the differences between a spinning and a non-spinning black hole, in section 3.8, where the toroidal component is found to be sub-dominant for \(a=0\) but similar to the radial component for large \(|a|\). In section 4, we discuss the transport of angular momentum for our simulations with spin \(a=-0.94\), \(a=0\) and \(a=0.94\), highlighting the differences between the black hole spins. The summary and conclusions of the paper are given in Section 5. ## 2 Simulations We perform several simulations with cuHARM (Begue et al., 2023), which uses the finite volume method to numerically solve the conservative GRMHD equations (for reviews, see _e.g._ Marti and Muller, 2003; Font, 2008; Rezzolla and Zanotti, 2013). The code is written in CUDA-C and OpenMP, and all calculations of cuHARM are accelerated on GPUs (only the data transfers and exports are handled by the CPU). To perform the simulations whose results are presented in this article, we use an NVIDIA DGX-V100 server with 8 NVIDIA V100 GPUs. ### Initial setup In this paper, we study the accretion flows in the MAD state around spinning black holes. Our simulations begin with the stationary axisymmetric torus described by Fishbone and Moncrief (1976). We set the gas adiabatic index to \(\Gamma=14/9\), and consider an initially large disk with \(r_{\mathrm{in}}=20r_{g}\) and \(r_{\mathrm{max}}=41r_{g}\), where \(r_{\mathrm{in}}\) is the inner boundary of the disk, \(r_{\mathrm{max}}\) is the radius at which the pressure reaches its maximum, and \(r_{g}\) is the gravitational radius. The matter and internal energy densities are normalized such that the maximum matter density \(\rho\) in the initial disk is \(\rho_{\mathrm{max}}=1\). The internal energy density is scaled accordingly. Since the initial torus is in equilibrium, it does not spontaneously evolve. We therefore add small random perturbations (set to 4%) to the internal energy density \(u\) as the seed of instabilities, which will promote accretion. 
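As an illustration of this initialisation step, the following minimal sketch (ours, not cuHARM code) normalises a torus and seeds the perturbation; the array names are assumptions, and the linear rescaling of \(u\) is only indicative, the actual scaling being fixed by the equation of state:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_and_perturb(rho, u, amplitude=0.04):
    """Normalise a Fishbone-Moncrief torus so that max(rho) = 1 and seed
    instabilities with small random perturbations of the internal energy
    density u. rho and u are hypothetical grid arrays (code units)."""
    scale = rho.max()
    rho = rho / scale
    u = u / scale  # assumed rescaling; in practice set by the EOS
    # 4% uniform random perturbation of the internal energy density
    u = u * (1.0 + amplitude * (2.0 * rng.random(u.shape) - 1.0))
    return rho, u
```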
This initial torus is in full hydrodynamic equilibrium and, as such, it does not contain any magnetic field. We introduce a purely poloidal subdominant magnetic field defined by the vector potential \(\mathbf{A}\), such that \(A_{r}=A_{\theta}=0\) and \[A_{\phi}=\max\left[0,\left(\frac{\rho}{\rho_{\mathrm{max}}}\right)\left(\frac{r}{r_{\mathrm{in}}}\sin\theta\right)^{3}\exp\left(-\frac{r}{400}\right)-0.2\right], \tag{1}\] which has been previously employed in, _e.g._, Wong et al. (2021); Narayan et al. (2022). Here \(r\), \(\theta\) and \(\phi\) are the horizon-penetrating spherical Kerr-Schild coordinates. The corresponding magnetic field is initially a single loop confined to the disk. The magnitude of the magnetic field is further normalised by the parameter \(\beta_{0}=p_{\mathrm{gas,max}}/p_{\mathrm{b,max}}\gg 1\), where \(p_{\mathrm{gas,max}}\) is the maximum gas pressure, \(p_{\mathrm{b,max}}=b^{2}/2\) is the maximum of the magnetic field pressure, and \(b^{2}=b^{\mu}b_{\mu}\) is the squared norm of the 4-vector magnetic field, see Section 2.3 below. This expression of the magnetic field is designed to ensure that enough magnetic flux can be transported to the black hole throughout the course of the simulation and "saturates" its magnetosphere; see further discussion in Tchekhovskoy et al. (2011). We conduct a series of simulations with different black hole spins, \(a\in\{-0.985,-0.94,-0.85,-0.5,0,0.5,0.85,0.94,0.985\}\), and an initial magnetization \(\beta_{0}=100\). In the case of the retrograde disk with \(a=-0.94\), we also vary the initial magnetic field strength, with \(\beta_{0}\in\{100,200,400,800\}\). We evolve most of the simulations until \(t=2\times 10^{4}t_{g}\), where \(t_{g}=r_{g}/c\), except for aM94b800, which is evolved to \(t=2.5\times 10^{4}t_{g}\) due to the weak initial magnetic field and the longer time required to reach the MAD state for this setup. Additionally, simulation aM94b100h is evolved until \(t=5\times 10^{4}t_{g}\) in order to study the long-time behavior of our accretion disk system. We use the spin and the initial \(\beta_{0}\) to name the simulations: "a" stands for spin, "M" indicates a negative value, and "b" represents the initial \(\beta_{0}\), such that, for instance, aM94b100 stands for a simulation with the negative ("M") spin \(a=-0.94\) and an initial \(\beta_{0}=100\). A summary of all the simulations used in this work is given in Table 1. ### Numerical aspects Since we are studying accretion around rotating black holes, we use the Kerr metric for our simulations. The Kerr-Schild (KS) coordinate system \((t,r,\theta,\phi)\) is used as the physical coordinates, while to both enhance the robustness of the calculation and focus the computation in the region of interest, namely close to the black hole and at the equator, the modified Kerr-Schild (MKS, see _e.g._ McKinney and Gammie, 2004) coordinates \((t,q^{1},q^{2},\phi)\) are used in the numerical calculation. The relation between these coordinates, as implemented in cuHARM, can be found in section 4.1 of Begue et al. (2023). We use the inflow and outflow boundary conditions in the radial direction at small and large radii, respectively. In the \(\theta\) direction we use the reflective boundary condition, and the periodic boundary condition is used in the \(\phi\) direction. To address the potential numerical errors in empty or strongly magnetized regions, we adopt the same flooring model as in Begue et al. (2023), which is used in many other papers, e.g. Porth et al. (2019). 
The density \(\rho\) and the internal energy \(u\) are limited using \[\rho=\max\left(\rho,10^{-20},10^{-5}r^{-\frac{3}{2}}\right), \tag{2}\] \[u=\max\left(u,10^{-20},\frac{10^{-5}}{3}r^{-\frac{5}{2}}\right). \tag{3}\] Matter and energy are added when needed to preserve the conditions \(b^{2}/\rho<50\) and \(b^{2}/u<2.5\times 10^{3}\). The reference resolution of most simulations is (\(N_{r}\times N_{\theta}\times N_{\phi}\)) = (192, 96, 96), except for aM94b100h and a0b100h, which have a slightly higher resolution (\(N_{r}\times N_{\theta}\times N_{\phi}\)) = (256, 128, 128). Here, the "h" appended to the name stands for "high resolution". The resolution of the simulations presented in this paper is somewhat lower than that used in some recent works. For example, Narayan et al. (2022) performed fairly similar simulations with a resolution of \(288\times 192\times 144\). White et al. (2020) examined the impact of different resolutions, and argued that the accretion rate and the general disk structure agree across simulations with different resolutions. We use our higher-resolution simulations, aM94b100h and a0b100h, to check the robustness of our results against a change in resolution. We did not find any significant difference between the low- and high-resolution simulations. ### Diagnostics Following Komissarov (1999), let \(b^{\mu}\equiv(\star F)^{\mu\nu}u_{\nu}\) represent the 4-vector magnetic field and \(u^{\mu}\) be the 4-velocity, which is orthogonal to \(b^{\mu}\). In the ideal MHD limit, the dual to the Faraday tensor is given by \[(\star F)^{\mu\nu}=b^{\mu}u^{\nu}-b^{\nu}u^{\mu}. \tag{4}\] In this limit of an ideal magnetized fluid, the stress energy tensor \(T^{\mu\nu}\) is given by \[T^{\mu\nu}=(h+b^{2})u^{\mu}u^{\nu}+\left(p_{g}+\frac{b^{2}}{2}\right)g^{\mu\nu}-b^{\mu}b^{\nu}. \tag{5}\] Here, \(h=\rho+u+p_{g}\) is the enthalpy, \(p_{g}\) is the gas pressure, \(b^{2}=b^{\mu}b_{\mu}\), and \(g^{\mu\nu}\) is the metric tensor, whose determinant is denoted by \(g\) (so that \(\sqrt{-g}\) appears in the volume element below). 
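To make these definitions concrete, here is a minimal sketch (ours, not part of cuHARM) that evaluates the stress-energy tensor of Equation (5) at a single cell with numpy; all input names are illustrative assumptions:

```python
import numpy as np

def stress_energy(rho, u_int, p_g, u_up, b_up, g_up):
    """Ideal-MHD stress-energy tensor of Equation (5) at one cell.
    u_up, b_up: contravariant 4-velocity u^mu and 4-field b^mu (length 4);
    g_up: contravariant metric g^{mu nu} (4x4 array). Code units assumed."""
    g_dn = np.linalg.inv(g_up)          # covariant metric g_{mu nu}
    b_dn = g_dn @ b_up                  # lower the index: b_mu
    b2 = b_up @ b_dn                    # b^2 = b^mu b_mu
    h = rho + u_int + p_g               # enthalpy, as defined in the text
    T = ((h + b2) * np.outer(u_up, u_up)
         + (p_g + 0.5 * b2) * g_up
         - np.outer(b_up, b_up))
    return T                            # contravariant T^{mu nu}
```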
Using cuHARM, we solve the general relativistic magneto-hydrodynamic equations, namely \[\nabla_{\mu}\left(\rho u^{\mu}\right)=0 \tag{6}\] \[\nabla_{\mu}\left(T^{\mu\nu}\right)=0 \tag{7}\] \[\nabla_{\mu}\left(\star F^{\mu\nu}\right)=0 \tag{8}\] which respectively are the equation of mass conservation, the equations of energy and momentum conservation, and the homogeneous Maxwell's equations. \begin{table} \begin{tabular}{|c||c|c|c||c|c|c|} \hline Name & \(\beta_{0}\) & spin & Resolution \(N_{r}\times N_{\theta}\times N_{\phi}\) & MAD parameter & Accretion Rate & Jet Efficiency \\ \hline \hline aM985b100 & 100 & -0.985 & 192 \(\times\) 96 \(\times\) 96 & \(12.26^{+2.58}_{-2.02}\) & \(23.54^{+9.54}_{-6.52}\) & \(0.28^{+0.15}_{-0.11}\) \\ \hline aM94b100 & 100 & -0.94 & 192 \(\times\) 96 \(\times\) 96 & \(14.46^{+3.99}_{-2.22}\) & \(24.30^{+10.19}_{-8.69}\) & \(0.26^{+0.15}_{-0.09}\) \\ \hline aM85b100 & 100 & -0.85 & 192 \(\times\) 96 \(\times\) 96 & \(17.59^{+3.52}_{-2.97}\) & \(22.62^{+10.76}_{-6.55}\) & \(0.26^{+0.12}_{-0.10}\) \\ \hline aM5b100 & 100 & -0.5 & 192 \(\times\) 96 \(\times\) 96 & \(26.24^{+2.28}_{-3.34}\) & \(31.55^{+9.58}_{-9.15}\) & \(0.11^{+0.02}_{-0.02}\) \\ \hline a0b100 & 100 & 0 & 192 \(\times\) 96 \(\times\) 96 & \(30.94^{+2.25}_{-5.08}\) & \(15.12^{+2.44}_{-2.46}\) & \(0.06^{+0.01}_{-0.01}\) \\ \hline a5b100 & 100 & 0.5 & 192 \(\times\) 96 \(\times\) 96 & \(32.73^{+7.02}_{-6.56}\) & \(45.16^{+15.28}_{-12.90}\) & \(0.22^{+0.07}_{-0.06}\) \\ \hline a85b100 & 100 & 0.85 & 192 \(\times\) 96 \(\times\) 96 & \(29.94^{+5.10}_{-4.79}\) & \(34.23^{+14.34}_{-11.36}\) & \(0.90^{+0.29}_{-0.25}\) \\ \hline a94b100 & 100 & 0.94 & 192 \(\times\) 96 \(\times\) 96 & \(25.37^{+4.27}_{-4.20}\) & \(32.15^{+13.39}_{-10.26}\) & \(1.05^{+0.36}_{-0.38}\) \\ \hline a985b100 & 100 & 0.985 & 192 \(\times\) 96 \(\times\) 96 & \(22.69^{+3.93}_{-3.69}\) & \(32.63^{+15.74}_{-10.80}\) & \(1.22^{+0.47}_{-0.32}\) \\ \hline \hline aM94b200 & 200 & -0.94 & 192 \(\times\) 96 \(\times\) 96 & \(15.09^{+2.66}_{-2.89}\) & \(20.75^{+6.46}_{-6.04}\) & \(0.28^{+0.12}_{-0.11}\) \\ \hline aM94b400 & 400 & -0.94 & 192 \(\times\) 96 \(\times\) 96 & \(16.37^{+2.40}_{-2.21}\) & \(20.41^{+5.85}_{-4.92}\) & \(0.34^{+0.10}_{-0.08}\) \\ \hline aM94b800 & 800 & -0.94 & 192 \(\times\) 96 \(\times\) 96 & \(16.22^{+1.80}_{-1.67}\) & \(15.36^{+5.78}_{-4.60}\) & \(0.33^{+0.07}_{-0.06}\) \\ \hline \hline a0b100h & 100 & 0 & 256 \(\times\) 128 \(\times\) 128 & \(32.84^{+4.23}_{-4.10}\) & \(43.12^{+17.25}_{-11.14}\) & \(0.07^{+0.01}_{-0.01}\) \\ \hline aM94b100h & 100 & -0.94 & 256 \(\times\) 128 \(\times\) 128 & \(15.13^{+1.81}_{-3.18}\) & \(23.94^{+10.65}_{-8.57}\) & \(0.27^{+0.07}_{-0.12}\) \\ \hline \end{tabular} \end{table} Table 1: List of the simulations presented in this paper with their initial magnetization \(\beta_{0}\), spin \(a\) and resolution. The three last columns give the time-averaged value of the MAD parameter \(\Phi_{B}\), of the accretion rate \(\dot{M}\) and of the jet efficiency \(\eta\) for \(10^{4}t_{g}<t<2\times 10^{4}t_{g}\) (\(1.5\times 10^{4}t_{g}<t<2.5\times 10^{4}t_{g}\) for aM94b800, \(10^{4}t_{g}<t<5\times 10^{4}t_{g}\) for aM94b100h). The MAD state mainly depends on the magnetic flux through the horizon. Therefore, we define the following radial diagnostics, which can, in particular, be evaluated at the horizon: 1. The mass accretion rate: \[\dot{M}(r)=\int_{\theta=0}^{\theta=\pi}\int_{\phi=0}^{\phi=2\pi}\sqrt{-g}\rho u^{r}d\theta d\phi.\] (9) 2. 
The magnetic flux crossing the horizon (through one hemisphere): \[\phi_{B}(r=r_{H})=\frac{1}{2}\int_{\theta=0}^{\theta=\pi}\int_{\phi=0}^{\phi=2\pi}\sqrt{-g}|\star F^{rt}|d\theta d\phi.\] (10) 3. The energy flux through the horizon towards the black hole: \[\dot{E}(r)=-\int_{\theta=0}^{\theta=\pi}\int_{\phi=0}^{\phi=2\pi}\sqrt{-g}T_{t}^{r}d\theta d\phi.\] (11) 4. The angular momentum flux in the radial direction: \[\dot{J}_{r}(r)=\int_{\theta=0}^{\theta=\pi}\int_{\phi=0}^{\phi=2\pi}\sqrt{-g}T_{\phi}^{r}d\theta d\phi.\] (12) 5. The MAD parameter: \[\Phi_{B}=\frac{\phi_{B}}{\sqrt{\dot{M}(r=r_{H})}}\] (13) 6. The jet efficiency at the horizon: \[\eta(r=r_{H})=1+\frac{\dot{E}(r=r_{H})}{\dot{M}(r=r_{H})}.\] (14) Note that our definitions of the angular momentum flux \(\dot{J}\) and of the energy flux \(\dot{E}\) are opposite to those employed by Narayan et al. (2022), but are in agreement with the definitions of _e.g._ Porth et al. (2019). We are also interested in the structure of the disk and of the jet. Therefore, we define the following additional diagnostics. 1. The disk height, denoted by \((h/r)\): \[(h/r)\ (t,r)=\frac{\int_{0}^{2\pi}\int_{0}^{\pi}|\frac{\pi}{2}-\theta|\rho\sqrt{-g}d\theta d\phi}{\int_{0}^{2\pi}\int_{0}^{\pi}\rho\sqrt{-g}d\theta d\phi}.\] (15) 2. The \(\phi\)-average of a quantity \(q\): \[\langle q\rangle_{\phi}(t,r,\theta)=\frac{\int_{0}^{2\pi}q\sqrt{-g}d\phi}{\int_{0}^{2\pi}\sqrt{-g}d\phi}.\] (16) 3. The disk-average of a quantity \(q\): \[\langle q\rangle_{\theta,\phi}(t,r)=\frac{\int_{0}^{2\pi}\int_{0}^{\pi}q\rho\sqrt{-g}d\theta d\phi}{\int_{0}^{2\pi}\int_{0}^{\pi}\rho\sqrt{-g}d\theta d\phi}.\] (17) 4. The disk-average of a quantity \(q\) but within a narrow \(\theta\) range (used below in calculating the pressure): \[\langle q\rangle_{\theta,\phi}(t,r)=\frac{\int_{0}^{2\pi}\int_{\theta=\pi/8}^{\theta=7\pi/8}q\rho\sqrt{-g}d\theta d\phi}{\int_{0}^{2\pi}\int_{\theta=\pi/8}^{\theta=7\pi/8}\rho\sqrt{-g}d\theta d\phi}.\] (18) We will also present time-averaged quantities and time- and \(\phi\)-averaged maps, which are computed for \(10^{4}t_{g}<t<2\times 10^{4}t_{g}\), unless specified otherwise, and over the full \(2\pi\) range in \(\phi\). ## 3 General Accretion Dynamics ### Inflow equilibrium radius and radial limit of our analysis Most simulations are evolved until \(t=2\times 10^{4}t_{g}\), which is sufficient for the disk to be in the MAD state. We assume this state to be established when the MAD parameter reaches its average value at late times. We find that in all cases, the averaged MAD parameter is comparable to or greater than 15, the limiting value proposed by Tchekhovskoy et al. (2011) to define the MAD state. Only aM94b800 barely reaches the MAD state at \(t=2\times 10^{4}t_{g}\), because of the small initial magnetic field normalisation. This simulation is therefore evolved further, until \(t=2.5\times 10^{4}t_{g}\), at which time the MAD state for this initial setup is well established. We first study the radial profile of the mass accretion rate \(\dot{M}\) and of the angular momentum flux \(\dot{J}\), as given by Equations (9) and (12), respectively. The results are displayed in the left and middle panels of Figure 1 for simulations aM94b100 and a94b100, serving as examples. The results are similar for all other simulations. 
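Before examining these profiles in detail, we note that the diagnostics above are straightforward to evaluate on a discrete grid. The following minimal sketch (ours; the array names, shapes and uniform cell spacings are assumptions) computes the radial profiles of Equations (9) and (12) and the MAD parameter of Equation (13):

```python
import numpy as np

def radial_profiles(rho, ur, T_r_phi, sqrtg, dtheta, dphi):
    """Radial profiles of Eqs. (9) and (12) on a (Nr, Ntheta, Nphi) grid.
    rho: rest-mass density; ur: radial 4-velocity u^r;
    T_r_phi: mixed stress-energy component T^r_phi; sqrtg: sqrt(-g)."""
    Mdot = np.sum(sqrtg * rho * ur, axis=(1, 2)) * dtheta * dphi   # Eq. (9)
    Jdot = np.sum(sqrtg * T_r_phi, axis=(1, 2)) * dtheta * dphi    # Eq. (12)
    return Mdot, Jdot

def mad_parameter(Mdot_H, dual_Frt_H, sqrtg_H, dtheta, dphi):
    """MAD parameter of Eq. (13) from horizon-shell arrays of shape
    (Ntheta, Nphi); dual_Frt_H is the *F^{rt} component at the horizon."""
    phi_B = 0.5 * np.sum(sqrtg_H * np.abs(dual_Frt_H)) * dtheta * dphi  # Eq. (10)
    return phi_B / np.sqrt(np.abs(Mdot_H))
```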
For aM94b100 and a94b100, the radial profiles of \(\dot{M}\) and \(\dot{J}\) are averaged over 3 different time intervals, namely from \(5\times 10^{3}\) to \(10^{4}t_{g}\), from \(10^{4}\) to \(1.5\times 10^{4}t_{g}\) and from \(1.5\times 10^{4}\) to \(2\times 10^{4}t_{g}\). In the last time interval, the mass accretion rates of these two runs are independent of the radius \(r\) for \(r<30r_{g}\). The angular momentum fluxes exhibit a similar pattern, remaining constant for \(r<30r_{g}\). This means that the inner region with \(r<30r_{g}\) is in an inflow-equilibrium state. There are two key differences between the prograde and retrograde cases. First, the sign of the angular momentum flux is different. The prograde black hole has a positive angular momentum flux, namely, the black hole is losing angular momentum, as was previously reported in Narayan et al. (2022). Conversely, the angular momentum flux of the retrograde black hole is negative, meaning that it accumulates positive angular momentum and spins down. The second difference is the existence of a radius at which the angular momentum flux changes sign for a retrograde black hole. This indicates that there is a net flux of angular momentum away from the black hole at large distances. This radius is also observed in the simulation presented in Narayan et al. (2022); see their figure 3, with the sharp drop of angular momentum flux at \(r\sim 10^{2}r_{g}\). We discuss in more detail the differences in angular momentum transport between prograde and retrograde disks in section 4. The duration of most of our simulations is shorter than that of recent long-time simulations. For example, Narayan et al. (2022) evolved the accretion system until \(t=10^{5}t_{g}\), such that they obtained an inflow equilibrium radius at \(\sim 100r_{g}\), larger than ours by a factor 3 to 4. To verify the reliability of our results over a longer time span, we extend the simulation aM94b100h to \(t=5\times 10^{4}t_{g}\). In the right panel of Figure 1, we present the radial profile of the mass accretion rate \(\dot{M}\) and the angular momentum flux \(\dot{J}\) for this extended period. The time-averaged quantities for \(2\times 10^{4}t_{g}<t<3\times 10^{4}t_{g}\), \(3\times 10^{4}t_{g}<t<4\times 10^{4}t_{g}\) and \(4\times 10^{4}t_{g}<t<5\times 10^{4}t_{g}\) are displayed. These profiles show the same behavior as aM94b100, but the inflow equilibrium radius now extends to \(r>50r_{g}\). Therefore, the shorter simulation time does not affect the establishment of the equilibrium state but only limits the inflow equilibrium to smaller radii. Figure 1: The radial profile of the mass accretion rate \(\dot{M}\) (solid line) and of the angular momentum flux \(\dot{J}\) (dashed line) for aM94b100 (left panel), a94b100 (middle panel) and for the long-term evolution aM94b100h (right panel) at different time periods (see legend). As the time increases, the steady region extends to larger radius. This suggests that aM94b100 and a94b100 have established inflow-outflow equilibrium at \(r<30r_{g}\) for \(1.5\times 10^{4}t_{g}<t<2\times 10^{4}t_{g}\), while it is about \(70r_{g}\) for \(4\times 10^{4}t_{g}<t<5\times 10^{4}t_{g}\) for aM94b100h. For the simulations with the negative spin on the left and right panels, the angular momentum fluxes change sign at \(r=30-50r_{g}\) and \(r=50-100r_{g}\), respectively. 
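As an aside, the inflow-equilibrium radius quoted above can be estimated from the time-averaged \(\dot{M}(r)\) profile with a simple tolerance criterion; the following sketch is our heuristic illustration (the 10% tolerance is an assumption, not the criterion used in the paper):

```python
import numpy as np

def inflow_equilibrium_radius(r, Mdot, tol=0.1):
    """Largest radius inside which the time-averaged accretion rate
    Mdot(r) stays within a fractional tolerance of its horizon value.
    r, Mdot: 1D arrays ordered from the horizon outwards."""
    ref = Mdot[0]                             # value at the horizon
    within = np.abs(Mdot / ref - 1.0) < tol   # tolerance band
    if within.all():
        return r[-1]
    return r[np.argmin(within)]               # first cell leaving the band
```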
In the following, we will focus on the inner region characterized by \(r\leq 30r_{g}\) and demonstrate, where relevant, that we obtain similar results to those presented in Narayan et al. (2022). Considering that we are using (i) a different numerical scheme, (ii) a different numerical resolution, and (iii) analysing the results earlier in the simulation, this shows the robustness of these results. ### Mass accretion rate and magnetic flux at the horizon Figure 2: Time evolution of the mass accretion rate \(\dot{M}\). The left column pertains to retrograde disks, while the right column corresponds to prograde disks. For convenience, we set the y-axes to identical scales to facilitate direct comparison. The horizontal lines are the time-averaged mass accretion rate \(\dot{M}\), where the average is taken from \(10^{4}t_{g}\) to \(2\times 10^{4}t_{g}\). The time evolution of the mass accretion rate \(\dot{M}\) at the horizon, derived from Equation 9, is shown in Figure 2 for all simulations with different spins. The same general behaviour is observed in all cases. After an initial quiet period, the mass accretion rate steadily increases to reach its late-time average around \(t\sim 5\times 10^{3}t_{g}\). The left column of Figure 2 shows the time evolution of \(\dot{M}\) for retrograde disks, while the corresponding prograde disks are displayed in the right column. The y-axis scale is identical to facilitate comparison. There is a clear dependence of the mass accretion rate on the black hole spin. Figure 3: The time evolution of the MAD parameter \(\Phi_{B}\). The left column represents retrograde disks, while the right column is for prograde disks. We use the horizontal lines to denote the time-averaged \(\Phi_{B}\) from \(10^{4}t_{g}\) to \(2\times 10^{4}t_{g}\), which is referred to as the typical MAD parameter during the MAD state. First, as displayed by the dashed line, representing the time average of the mass accretion rate for \(10^{4}t_{g}<t<2\times 10^{4}t_{g}\), prograde disks have a higher mass accretion rate than retrograde ones. The time-averaged accretion rate from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\) is listed in Table 1 alongside the \(1\sigma\) temporal variation. Second, the variation of the mass accretion rate is more pronounced in prograde disks than in retrograde disks. The time evolution of the MAD parameter can be described as follows. As the accretion proceeds, the magnetic flux at the horizon accumulates until it saturates. At this point, the MAD parameter, as well as the magnetic flux threading the horizon and the mass accretion rate, are regulated by eruptions of magnetic flux. These eruptions expel the magnetic field to far distances from the black hole. In the MAD state, the pressure of the saturated magnetic field is balanced by the gas pressure (Bisnovatyi-Kogan & Ruzmaikin, 1974; Narayan et al., 2003b). Although the flux eruptions cause fluctuations in both the magnetic flux (Begelman et al., 2022) and the mass accretion rate, the MAD parameter generally remains stable around its time-averaged value. Figure 4: Spin-dependence of several disk and jet properties. Top left panel: mass accretion rate \(\dot{M}\) at the horizon for different spins. The blue points are the results of low-resolution simulations, while the red points are the high-resolution simulations. Top middle panel: the normalized MAD parameters for different spins. Plotted on top (green dashed line) are the fit results of Narayan et al. 
(2022); their formula is divided by \(\sqrt{4\pi}/2\) to be consistent with our definition of the MAD parameter, see Section 3.2 for more details. Top right panel: the jet efficiency as a function of spin. The green points are the predicted efficiencies and the blue (red) points are the efficiencies derived from our simulations (red: higher resolution). The error bars represent the \(1\sigma\) temporal variation of each quantity. \(\eta>1\) represents more than 100% efficiency. Bottom left: the dependence of the slope \(k\equiv(d\Phi_{B}/dt)/(d\dot{M}/dt)\) on the spin \(a\). The slope \(k\) decreases as the spin \(a\) increases until \(a=0\), at which point the slope \(k\) begins to increase as \(a\) continues to increase. However, \(a=0.85\) slightly deviates from this trend. Bottom right panel: the time-averaged disk height \(h/r\) given by Equation (15) as a function of radius. To explore the differences in the MAD state for different spins, we present the time evolution of the MAD parameters in Figure 3, where the left column is for negative-spin simulations and the right column is for positive-spin simulations, similar to Figure 2. The horizontal dashed lines represent the time-averaged MAD parameters from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\), an interval during which the MAD state is well established. The time-averaged MAD parameters during this period are also listed in Table 1. We find that the MAD parameters of prograde disks are higher than those of the corresponding retrograde disks. Additionally, similarly to the mass accretion rate, we find that the temporal fluctuations of the MAD parameters are greater for prograde disks. This is in agreement with the findings of Porth et al. (2021), who also found that their co-rotating simulation has weaker flux expulsions than the counter-rotating case. We also observe many more small flux eruptions for the co-rotating case, but they are accompanied by several strong eruptions, such as the one at \(t\sim 9\times 10^{3}t_{g}\) for a94b100, as seen in Figure 3. Both the mass accretion rate and the MAD parameter are strongly dependent on the black hole spin, \(a\). This relation is displayed in the top left panel of Figure 4. We find that the mass accretion rate is highest for low, positive spin, and drops for higher values of the spin, both for prograde and retrograde disks, with retrograde disks showing lower values than prograde ones. For the \(a=0\) spin we show two results, one with the standard resolution and one with a higher resolution, which we believe is more accurate (see discussion below). We also show in Figure 4 the \(1\sigma\) temporal fluctuation of the mass accretion rate as error bars. Clearly, prograde disks have larger fluctuations (represented by larger error bars in Figure 4) than retrograde disks. We present the relationship between the MAD parameter during the MAD state and the spin of the black hole in the top middle panel of Figure 4. The MAD parameter increases as the spin increases from \(a=-0.985\) to \(a=0.5\), and then decreases as the spin increases from \(a=0.5\) to \(a=0.985\). Narayan et al. (2022) reported a similar trend and used a third-order polynomial to fit the relationship between \(a\) and the MAD parameter \(\Phi_{B}\). We include their fit result (displayed by the dashed line) in the top middle panel of Figure 4 to compare with our results. Their definition of the MAD parameter slightly differs from ours, so we renormalised their fit formula by a factor of \(\sqrt{4\pi}/2\) to account for this discrepancy. 
Our results are in good agreement with theirs, which enhances the credibility of the results, considering that we use a different code, a different resolution, and a shorter simulation duration. Our simulations have a somewhat lower resolution (by a factor of \(\approx 4\)) compared to Narayan et al. (2022); Chatterjee & Narayan (2022). To address this limitation, we conducted two simulations, aM94b100h and a0b100h, which have initial conditions identical to aM94b100 and a0b100, respectively, but differ by having a higher resolution. We show the mass accretion rates and the MAD parameters of these two simulations in Figures 2 and 3, alongside their counterparts with lower resolution. In the case of aM94b100 and aM94b100h, there is no significant difference in terms of mass accretion rates and MAD parameters. However, for a0b100 and a0b100h, the mass accretion rates differ by a factor of nearly three. According to White et al. (2019), this indicates an insufficient resolution for our simulation with spin \(a=0\) (but not for our simulations with rotating black holes, as is suggested by aM94b100h). They demonstrate that resolving the mass accretion rate demands a better resolution than resolving the MAD parameter, as is also shown here. Indeed, the saturation values of the MAD parameters are nearly identical. This indicates that the properties of the MAD state should be similar. Most of our simulations end at \(t=2\times 10^{4}t_{g}\). As discussed in subsection 3.1, we extend aM94b100h to \(t=5\times 10^{4}t_{g}\). The time evolution of the mass accretion rate \(\dot{M}\) and of the MAD parameter \(\Phi_{B}\) is shown in Figure 5. The MAD parameter remains relatively stable, oscillating around its average value, once the MAD state is established. However, the mass accretion rate \(\dot{M}\) gradually decreases until \(t=5\times 10^{4}t_{g}\), which is caused by the spreading of the disk as the simulation advances. ### Characteristic dynamical timescale In the MAD state, magnetic flux eruptions play a major role in the dynamics of the accretion: the strong magnetic pressure pushes the gas out, which leads to fluctuations of the accretion rate and of the magnetic flux. Moreover, these magnetic flux eruptions have been proposed as the origin of the Sgr A\({}^{*}\) flares observed in infrared by the GRAVITY collaboration (GRAVITY Collaboration et al., 2018, 2020). In particular, Dexter et al. (2020) numerically estimated the recurrence time of the flares to be between \(10^{3}\) and \(10^{4}\)\(r_{g}/c\), corresponding to 5 to 50 hours for Sgr A\({}^{*}\). The maximum of the intensity is correlated with the sharp drop of the MAD parameter at the onset of each eruption (Dexter et al., 2020), lasting around a hundred \(r_{g}/c\) (Ripperda et al., 2022). The dynamics of the flux tubes created in each eruption were studied in detail by Porth et al. (2021). They found that the motion of the low-density/high-magnetization region is at first strongly radial because of the magnetic tension, and then tends to circularize when the field is nearly vertical. The typical lifetime of these magnetic flux tubes was found numerically to be around 2 orbits, depending on their size and magnetic energy. We use the discrete Fourier transform to study the duty cycle of the MAD parameter. The analysis is performed considering data from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\), for which, as a preparation step, the time series are de-trended and their average removed, so that the fundamental Fourier coefficient is null. 
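A minimal sketch (ours) of this preparation step and of the resulting one-sided power spectrum, assuming a uniformly sampled time series of the MAD parameter:

```python
import numpy as np

def mad_power_spectrum(t, Phi_B):
    """De-trend Phi_B(t), remove its average so that the fundamental
    Fourier coefficient is null, and return the one-sided power spectrum.
    t must be uniformly sampled (illustrative assumption)."""
    x = Phi_B - np.polyval(np.polyfit(t, Phi_B, 1), t)  # de-trend
    x = x - x.mean()                                    # null fundamental
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    return freqs[1:], power[1:]   # drop the (null) zero-frequency bin
```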
We show the power spectrum of the MAD parameter \(\Phi_{B}\) for our simulations in Figure 6. We focus on the power spectrum from \(f\simeq 4\times 10^{-3}t_{g}^{-1}\) to \(f\simeq 2\times 10^{-4}t_{g}^{-1}\), which corresponds to periods from \(T=250t_{g}\) to \(T=5000t_{g}\). All our simulations show the same trend: after a few dominant peaks, the power spectrum decays proportionally to \(f^{-1}\), characteristic of pink noise; this decay is shown as red dashed lines in Figure 6. Janiuk and James (2022) performed a similar study, not on the MAD parameter, but rather on the mass accretion rate. Furthermore, their disks are smaller than ours, with \(r_{\rm in}=6\) or \(12\,r_{g}\) and \(r_{\rm max}=12\) or \(25\,r_{g}\). They found that the power spectrum of the mass accretion rate is well approximated by a power law of index \(\sim 1.5\). They further report a dependence of the index on the black hole spin. Our analysis, on the other hand, does not show such a spin dependence of the index for the MAD parameter. For some simulations, such as aM94b100 and a5b100, the power spectrum shows clear peaks in the frequency window preceding the onset of the noise. The corresponding periods are \(T=1111t_{g}\) for aM94b100 and \(T=1666t_{g}\) for a5b100. For the other simulations, no dominant time scale can be unambiguously identified. Previous identifications of a cyclic behavior of the mass accretion rate and of the magnetic flux were published by, e.g., Chashkina et al. (2021). By analysing the data of their 2D GRMHD simulations, they found a period around \(t\simeq 500r_{g}/c\) (see also, e.g., Igumenshchev, 2008; Dihingia et al., 2023). However, as noted by Chashkina et al. (2021), this pattern is an artefact of the 2D nature of the simulations, which prevents the development of the non-axisymmetric instabilities and phenomena that are inherently responsible for the continuous accretion in 3D. ### Relation between the MAD parameter and the mass accretion rate The strong magnetic field during the MAD state pushes out the gas and stops, or at least regulates, the dynamics of the in-falling matter. Therefore, we naively expect that an increase of the magnetic flux should result in a decrease of the mass accretion rate, in other words, that they are anti-correlated. An anti-correlation is also expected if accretion proceeds by interchange instability, as matter replaces a highly magnetized region closer to the black hole, resulting in a drop in the MAD parameter (Porth et al., 2021). However, Porth et al. (2021) did not find a correlation or an anti-correlation between \(\dot{M}\) and \(\Phi_{B}\). We extend their results to all black hole spins: none of our simulations show a correlation or an anti-correlation between \(\dot{M}\) and \(\Phi_{B}\). We further study the relation between the time derivatives of the mass accretion rate \(\dot{M}\) and of the MAD parameter \(\Phi_{B}\). We first remove their average and de-trend the time series; then we use a Fourier transform to filter out the high-frequency (and weak) components (a minimal sketch of this procedure is given at the end of this subsection). We checked that this procedure does not change the results presented below; it only allows for a better visualisation of the result. The processed time derivatives are displayed in the left column of Figure 7 for aM94b100 and a94b100. The blue line is the normalized \(d\dot{M}/dt\), and the red dashed line represents the normalized \(d\Phi_{B}/dt\). Here, by "normalised" we mean that the two time derivatives are rescaled so that the maximum of their absolute value is 1. 
We show the resulting time series for aM94b100 and a94b100 as examples, but all other simulations exhibit a similar behavior. Figure 5: The long-time evolution of the mass accretion rate and of the MAD parameter for simulation aM94b100h. The MAD parameter remains steady with time, although the mass accretion rate drops. The (positive) peaks of \(d\dot{M}/dt\) coincide with the (negative) troughs of \(d\Phi_{B}/dt\), and vice versa. The time derivatives of \(\dot{M}\) and \(\Phi_{B}\) are clearly anti-correlated. In the right column of Figure 7, we show this anti-correlation. This means that when the mass accretion rate increases, the flux threading the horizon decreases. This can be understood as follows. Focusing on the region close to the equator, the horizon surface is separated into two regions: the accretion funnel, from which the matter falls inward, and the highly magnetized, low-density regions of the magnetic flux eruption. As the MAD parameter increases, the magnetic pressure outside of the accretion funnel increases, and the funnel gets compressed, resulting in a lower accretion rate. Figure 6: Power spectrum of the MAD parameters in the time interval \(10^{4}t_{g}<t<2\times 10^{4}t_{g}\) (\(1.5\times 10^{4}t_{g}\) to \(2.5\times 10^{4}t_{g}\) for aM94b800). In all simulations, the power is concentrated in the low-frequency region, which corresponds to periods of about \(500-2000t_{g}\). In the high-frequency region, the power spectrum decreases following \(\sim f^{-1}\). We use a red dashed line to show this evolution. Similarly, as the magnetic flux eruption develops, the magnetic flux at the horizon drops, thereby reducing the magnetic pressure and allowing for a larger accretion funnel and accretion rate. The latter is also enhanced by the interaction between the low-density magnetic flux tube and the inner region of the turbulent disk at \(r\sim 10-15~{}r_{g}\). Indeed, the magnetic flux tube velocity is sub-Keplerian, reducing the velocity of the matter in the disk, which then falls "radially" onto the black hole. We further perform a linear fit to this anti-correlation, and show the dependence of the slope \(k\equiv(d\Phi_{B}/dt)/(d\dot{M}/dt)\) on the spin \(a\) in the bottom left panel of Figure 4. We find that the non-spinning black hole has the steepest slope \(k\), which means that smaller variations in the mass accretion rate correspond to larger variations in the MAD parameter, compared to other black hole spins. A smaller slope \(k\) is in general characteristic of a large absolute value of the spin \(a\), except for \(a=0.5\) and \(a=0.85\), which would deserve further investigation. Finally, we point out that the filtering performed in the preparation of the data changes the value of \(k\) by a factor of up to about two, but the trend remains the same if one accounts for this re-scaling. We further note that there seems to be a strong dependence of \(k\) on the resolution, as is shown in the bottom left panel of Figure 4 by the two red dots, which are the slopes obtained from simulations aM94b100 and a0b100. Figure 7: Left: time evolution of the time derivatives of the mass accretion rate and of the MAD parameter. We show the results for aM94b100 and a94b100, but other simulations also exhibit similar behaviors. These two quantities, \(d\dot{M}/dt\) and \(d\Phi_{B}/dt\), have opposite signs throughout the simulations. There is a clear anti-correlation between these two derivatives, shown in the right panels. 
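The sketch announced above (ours; the cutoff fraction is an assumed illustrative value, not the one used for Figure 7) summarizes the de-trending, Fourier low-pass filtering and normalisation of the two derivatives, together with the linear fit giving the slope \(k\):

```python
import numpy as np

def derivative_anticorrelation(t, Mdot, Phi_B, keep_frac=0.05):
    """De-trend the two series, zero their high-frequency Fourier modes,
    form the normalised time derivatives and fit the slope
    k = d(Phi_B)/dt over d(Mdot)/dt. t must be uniformly sampled."""
    def prep(x):
        x = x - np.polyval(np.polyfit(t, x, 1), t)   # de-trend
        X = np.fft.rfft(x - x.mean())
        X[max(1, int(keep_frac * len(X))):] = 0.0    # low-pass filter
        return np.fft.irfft(X, n=len(x))
    dM = np.gradient(prep(Mdot), t)
    dP = np.gradient(prep(Phi_B), t)
    dM_n = dM / np.abs(dM).max()                     # stretch to max |.| = 1
    dP_n = dP / np.abs(dP).max()
    k = np.polyfit(dM, dP, 1)[0]                     # slope of the linear fit
    return dM_n, dP_n, k
```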
### Jet efficiency The jet efficiency, given by Equation (14), depends on the magnetic flux through the horizon normalised by the accretion rate \(\dot{M}\). In agreement with many previous publications (see, e.g., Tchekhovskoy et al., 2011; Tchekhovskoy & McKinney, 2012; McKinney et al., 2012; Narayan et al., 2022), we observe that the jet efficiency of positive-spin black holes with \(a>0.5\) is on average larger than 100%, meaning that the black hole is actually losing energy: the jet is powered by the Blandford-Znajek mechanism (Blandford & Znajek, 1977). We use the fitting formula proposed by Tchekhovskoy et al. (2010) and later used by Narayan et al. (2022) to interpret the efficiency, namely \[\eta=\frac{\kappa}{4\pi}\Omega_{H}^{2}\Phi_{B}^{2}\left(1+1.38\Omega_{H}^{2}-9.2\Omega_{H}^{4}\right). \tag{19}\] Here \(\kappa=0.08\sqrt{\pi}\) is a numerical constant and \(\Omega_{H}=ac/r_{H}\) is the angular velocity at the horizon. This value of \(\kappa\) is chosen such that \(\sqrt{\pi}\) accounts for the difference in the definition of \(\Phi_{B}\) used here and in Tchekhovskoy et al. (2010), and the numerical factor \(0.08\) is chosen to match the normalisation of our numerical data. The predicted efficiency, _i.e._ using the measured value of the MAD parameter \(\Phi_{B}\) together with Equation (19), and the simulated efficiency, namely the one directly measured from our simulations as defined by Equation (14), are shown in the top right panel of Figure 4 as a function of spin. It is clear that they are consistent with each other. Narayan et al. (2022) reported the same behavior but used a different value, \(\kappa=0.05\), differing from our normalisation by a factor smaller than 2. This discrepancy could be due to the time at which the measurements are performed. Indeed, when averaging the results of simulation aM94b100h at late times, \(10^{4}t_{g}<t<5\times 10^{4}t_{g}\), we find a lower numerical factor of \(\sim 0.06\). ### Effects of the initial magnetic field strength Since the magnetic field plays a critical role in the accretion process and its regulation, we wish to assess the robustness of our results with respect to the initial magnetic field strength. To this end, we also performed simulations with different values of \(\beta_{0}\) for \(a=-0.94\). In Table 1, we list three additional simulations, such that the initial magnetic field strength spans \(\beta_{0}\in\{100,\ 200,\ 400,\ 800\}\). The simulation aM94b100 has the strongest initial magnetic field, while aM94b800 has the weakest. In Figure 8, we present the mass accretion rates and MAD parameters for these additional simulations with different initial magnetic field \(\beta_{0}\). The left panel shows that at the beginning of the accretion process, the mass accretion rate \(\dot{M}\) increases with the initial magnetic field strength, which can be attributed to the fact that the transfer of angular momentum is initially driven by MRI, whose development depends on the strength of the magnetic field. Therefore, the larger the initial magnetic field, the higher the mass accretion rate. However, at late times, \(t>1.5\times 10^{4}t_{g}\), we see that the mass accretion rates achieve similar values across all simulations. Figure 8: Mass accretion rates and MAD parameters of aM94b100, aM94b200, aM94b400, and aM94b800, which are simulations with the same initial conditions but with different initial magnetic field strengths. In the left panel, we show the mass accretion rates. 
The larger the initial magnetization (the smaller \(\beta_{0}\)), the faster the mass accretion rate increases. After \(t=1.5\times 10^{4}t_{g}\), the mass accretion rates agree across all simulations. In the right panel, the MAD parameters of these simulations are shown to be consistent with each other after \(t=1.5\times 10^{4}t_{g}\). However, the time required to reach the MAD state is clearly different. In the right panel of Figure 8, it is seen that the MAD parameters also achieve similar values across all simulations during the MAD state, which is reached after \(t=1.5\times 10^{4}t_{g}\) for aM94b800 specifically. This suggests that the MAD state is independent of the initial magnetic field strength. However, the time required to reach the MAD state varies. To investigate the dependence of the required time on the initial \(\beta_{0}\), we use the following criteria to define \(t_{\rm MAD}\), the time required to reach the MAD state: 1. The mean MAD parameter averaged from \(t=t_{\rm MAD}\) to \(t=t_{\rm MAD}+1000\)\(t_{g}\) should be larger than the \(1\sigma\) lower limit of the final MAD parameter. 2. The MAD parameter in the MAD state should be highly variable. Therefore, we require the derivative of the MAD parameter to change sign several times between \(t=t_{\rm MAD}\) and \(t=t_{\rm MAD}+1000\)\(t_{g}\). According to these two criteria, the time it takes to reach the MAD state, \(t_{\rm MAD}\), for aM94b100, aM94b100h, aM94b200, aM94b400 and aM94b800 is 3650 \(t_{g}\), 3630 \(t_{g}\), 5220 \(t_{g}\), 8665 \(t_{g}\) and 15505 \(t_{g}\), respectively. In the left panel of Figure 9, we display the time required to reach the MAD state as a function of the initial magnetic field strength \(\beta_{0}\). We find that \(t_{\rm MAD}\) increases linearly with \(\beta_{0}\), namely it increases as the initial magnetic field strength weakens. The best-fit result is the linear function \(t_{\rm MAD}=17\beta_{0}+1.9\times 10^{3}\). This result suggests that the accumulation of the magnetic flux linearly depends on the initial magnetic field strength. We also study the dependence of \(t_{\rm MAD}\) on the spin for the other simulations with the same initial \(\beta_{0}\). The right panel of Figure 9 displays the dependence of \(t_{\rm MAD}\) on the spin \(a\). As the spin increases from \(a=-1\) to \(a=0\), \(t_{\rm MAD}\) increases. It then decreases as \(a\) increases from \(0\) to \(1\). This dependence is fitted with a third-order polynomial, which results in \(t_{\rm MAD}=1.9\times 10^{2}a^{3}-1.2\times 10^{3}a^{2}-4.6\times 10^{2}a+4.4\times 10^{3}\). ### Evolved disk and jet structures The strong magnetic field at the center of the accretion system shapes the disk and launches the bipolar jet at low radii. Therefore, the disk structure is an important characteristic of the MAD state. In the bottom right panel of Figure 4, we show the time-averaged disk height \(h/r\), averaged from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\). We find that retrograde disks are thicker than prograde disks. In the inner region (\(5r_{g}<r<15r_{g}\)), the disk height increases as the absolute value of the spin increases, which is consistent with the results of Narayan et al. (2022). Figure 9: Left Panel: The relation between the time \(t_{\rm MAD}\) required to reach the MAD state and the initial magnetic field. We show the results for aM94b100, aM94b200, aM94b100h, aM94b400, and aM94b800 in blue circles. The time \(t_{\rm MAD}\) linearly depends on the initial magnetic field. We use a linear function to fit it, and the best-fit result is shown by the red dashed line. 
In the left panel of Figure 9, we display the time required to reach the MAD state as a function of the initial magnetic field strength \(\beta_{0}\). We find that \(t_{\rm MAD}\) increases linearly with \(\beta_{0}\), namely it increases as the initial magnetic field strength weakens. The best fit is a linear function, \(t_{\rm MAD}=17\beta_{0}+1.9\times 10^{3}\). This result suggests that the accumulation of the magnetic flux depends linearly on the initial magnetic field strength. We also study the dependence of \(t_{\rm MAD}\) on the spin for the other simulations with the same initial \(\beta_{0}\). The right panel of Figure 9 displays the dependence of \(t_{\rm MAD}\) on the spin \(a\). As the spin increases from \(a=-1\) to \(a=0\), \(t_{\rm MAD}\) increases. It then decreases as \(a\) increases from \(0\) to \(1\). This dependence is fitted with a third order polynomial, which results in \(t_{\rm MAD}=1.9\times 10^{2}a^{3}-1.2\times 10^{3}a^{2}-4.6\times 10^{2}a+4.4\times 10^{3}\). Figure 9: Left Panel: The relation between the coasting time \(t_{\rm MAD}\) to reach the MAD state and the initial magnetic field. We show the results for aM94b100, aM94b200, aM94b100h, aM94b400, and aM94b800 in blue circles. The coasting time depends linearly on the initial magnetic field. We use a linear function to fit it, and the best fit is shown by the red dashed line. Right Panel: \(t_{\rm MAD}\) as a function of the spin \(a\). We use a third order polynomial function to fit this relation, and the best fit is shown by the dashed red line. ### Evolved disk and jet structures The strong magnetic field at the center of the accretion system shapes the disk and launches the bipolar jet at low radii. Therefore, the disk structure is an important characteristic of the MAD state. In the bottom right panel of Figure 4, we show the time-averaged disk height \(h/r\), averaged from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\). We find that retrograde disks are thicker than prograde disks. In the inner region (\(5r_{g}<r<15r_{g}\)), the disk height increases as the absolute value of the spin increases, which is consistent with the results of Narayan et al. (2022). However, contrary to the findings in this aforementioned paper, we find that the non-spinning black hole has the thinnest disk of all, while they argued that the disk of the non-spinning black hole is thicker than that of prograde disks. A possible explanation for this discrepancy is the fact that our analysis is carried out at earlier times. We show the time and azimuthal averages of the density \(\rho\), of the plasma parameter \(\beta\) and of the magnetization \(\sigma\) for the simulations aM94b100, a94b100 and a0b100 in Figure 10. These values are obtained using Equation (16), and are then averaged from \(t=10^{4}t_{g}\) to \(t=2\times 10^{4}t_{g}\). The difference between these three spins is clear. In the density \(\rho\) plot, the non-spinning black hole has the thinnest disk, while the retrograde disk is the thickest, which is consistent with the bottom right panel of Figure 4. The \(\beta\) plots show a similar behavior as the density. There are clear vacuum regions inside the jet, especially for a0b100, which are attributed to the numerical flooring rather than physical effects. Figure 10: Time and azimuthal averaged density \(\rho\), ratio of gas to magnetic pressure \(\beta\) and magnetization \(\sigma\) for aM94b100 (top), a0b100 (middle) and a94b100 (bottom). The black lines in the middle and right panels represent the \(\beta=1\) and \(\sigma=1\) conditions, respectively. We show the polar angle of the jet boundary, which we associate with \(\sigma=1\), as a function of radius for aM94b100, a94b100 and a0b100 in the left panel of Figure 11. The vertical lines correspond to \(1\sigma\) variations and the points represent the median values of the polar angle \(\theta\) at a given radius \(r\). These values are obtained for \(10^{4}t_{g}<t<2\times 10^{4}t_{g}\). As shown in Figure 3, all three runs remain in the MAD state during this time period, and maintain inflow-outflow equilibrium. It is seen that the prograde disk has a wider jet than the retrograde disk at \(r<20r_{g}\), which is consistent with the findings of Narayan et al. (2022). It is also clearly seen that the non-spinning black hole has the widest jet. This result is consistent with Figure 10. In order to quantify the degree of variation compared to the mean, we use the relative variation \(\sigma_{\theta}/\bar{\theta}\), where \(\sigma_{\theta}\) is the \(1\sigma\) variation of \(\theta\) and \(\bar{\theta}\) is the median. The temporal and radial (between \(2r_{g}\) and \(30r_{g}\)) averages of \(\sigma_{\theta}/\bar{\theta}\) for aM94b100, a94b100 and a0b100 are 0.11, 0.08 and 0.07, respectively. We further show the kernel density estimation of \(\theta_{\sigma=1}\) at \(r=10r_{g}\) in the right panel of Figure 11. The boundary of the retrograde disk has the largest variations, as expected from the large shear stresses at the boundary between the jet and the disk, which have opposite toroidal velocities. Our results are, in this sense, consistent with those of Wong et al. (2021), who argued both analytically and numerically that retrograde disks have stronger shear across the jet-disk boundary than prograde ones.
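For concreteness, the jet-disk boundary used in Figure 11 can be extracted along the following lines. This Python sketch is our illustration: it assumes \(\sigma(\theta)\) is available on a polar grid at a fixed radius and time, with \(\sigma>1\) in the jet and \(\sigma<1\) in the disk; the linear interpolation between bracketing grid points is our choice.

```python
import numpy as np

def jet_boundary_angle(theta, sigma):
    """Polar angle of the sigma = 1 jet-disk boundary in the northern
    hemisphere, from sigma(theta) at one radius and time."""
    north = theta < np.pi / 2.0
    th, s = theta[north], sigma[north]
    # sigma decreases from the polar jet towards the disk; find the
    # first crossing of sigma = 1 and interpolate linearly.
    below = np.where(s < 1.0)[0]
    if len(below) == 0 or below[0] == 0:
        return None  # no resolved crossing
    i = below[0]
    f = (s[i - 1] - 1.0) / (s[i - 1] - s[i])
    return th[i - 1] + f * (th[i] - th[i - 1])

# The median and 1-sigma spread of this angle over many snapshots at
# r = 10 r_g give the points, error bars and KDE of Figure 11.
```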
### Pressure In Section 3.7, we demonstrated that prograde disks are thinner and have wider jets than retrograde ones. We attribute these differences to the pressure distribution inside the disk and the jets. In Figure 12, we show the different components of the magnetic pressure and the gas pressure for aM94b100, a0b100 and a94b100. These components are calculated using Equation (18) and are further time-averaged. Since these averages are weighted by the density, they are biased towards large density regions. We find that the overall total pressure \(p+b^{2}/2\) is the lowest for the non-rotating black hole and the largest for the prograde disk. At small radii, the magnetic pressure \(b^{2}\), represented by the blue line, dominates over the gas pressure. As the radius increases, the gas pressure gradually becomes dominant. The transition radii for \(a=0.94,\,0,\,-0.94\) are \(r_{\rm eq}=2.71r_{g},\,3.13r_{g},\,2.35r_{g}\), respectively. Begelman et al. (2022) analyzed a prograde disk, which is somewhat smaller than the ones we analyze here. They reported that the gas pressure evolves \(\propto r^{-2}\), while the magnetic pressure was found to evolve faster closest to the black hole. Here we find that the gas pressure evolution is somewhat steeper than \(r^{-2}\). This could be due to the different disk structure, the different resolution we are using, or the shorter time interval over which the average is performed. We note that a similar radial evolution of the gas pressure, namely \(p_{g}\propto r^{-2}\), was reported by Tchekhovskoy et al. (2011) and McKinney et al. (2012), who found \(p_{g}\propto r^{-1.9}\) for their "thinner" disk models. We further show in Figure 12 the radial dependence of all individual magnetic pressure components. The total magnetic pressure has a similar radial evolution in our work and in the work of Begelman et al. (2022), namely its radial evolution is steeper than \(r^{-2}\) close to the black hole. First, as is seen from Figure 12, the polar component \(b_{\theta}^{2}\) never dominates for any of the spins and is much lower than the radial and toroidal components. Second, at small radii inside the ISCO, the main difference between rotating and non-rotating black holes is the contribution of the toroidal field \(b_{\phi}^{2}\): it is negligible compared to the radial component for a non-rotating black hole, while both components are of the same order for rapidly rotating black holes. The toroidal field even dominates for the prograde disk very close to the black hole. This explains the difference between the conclusions of Begelman et al. (2022), who studied disks around rapidly rotating black holes, and Chatterjee & Narayan (2022), who studied non-spinning black hole systems, regarding the importance of the toroidal component in the MAD accretion process. Similar radial evolutions of the magnetic field components were reported by Tchekhovskoy et al. (2011) and McKinney et al. (2012), who found \(b_{r}\propto r^{-1.5}\) and \(b_{\phi}\propto r^{-1}\) for their "thinner" disk models. Clearly, the radial dependence of \(b^{r}\) does not seem to depend on the black hole spin, while that of the toroidal component does.
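The equipartition radius \(r_{\rm eq}\) quoted above can be read off the radial profiles directly. The sketch below is our illustration; `p_gas` and `p_mag` are placeholder names for the angle- and time-averaged profiles of \(p\) and \(b^{2}/2\), and the log-linear interpolation is our choice.

```python
import numpy as np

def equipartition_radius(r, p_gas, p_mag):
    """Radius where the averaged gas pressure first overtakes the
    magnetic pressure b^2/2 (cf. r_eq in the text)."""
    cross = np.where(p_gas > p_mag)[0]
    if len(cross) == 0:
        return None
    i = cross[0]
    if i == 0:
        return r[0]
    # Root of the pressure difference, interpolated linearly in log r.
    x0, x1 = np.log(r[i - 1]), np.log(r[i])
    d0, d1 = p_gas[i - 1] - p_mag[i - 1], p_gas[i] - p_mag[i]
    return np.exp(x0 - d0 * (x1 - x0) / (d1 - d0))
```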
We show the \(\phi\)- and time-averaged maps of the pressure components of aM94b100, a0b100 and a94b100 in Figure 13. All sub-figures use the same color scaling, allowing an easy comparison between them. At small radii close to the black hole, the radial component \(b_{r}^{2}\) is large for all three simulations and decreases quickly as the radius increases. For \(a=\pm 0.94\), large values of \(b_{r}^{2}\) are also found at larger radii, \(r\sim 10r_{g}\), in the jet region. The polar component, \(b_{\theta}^{2}\), is negligible in all three cases. Note that the blank regions in the color maps of \(b_{\theta}^{2}\) are caused by numerical truncation errors: this component is too small relative to the other two components. The distribution of the toroidal component \(b_{\phi}^{2}\) also shows great variance. The non-spinning black hole has the weakest \(b_{\phi}^{2}\) component, while it is larger for the spinning black holes. We note the drop of \(|b^{\phi}|^{2}\) at the equator: this is because \(b^{\phi}\) changes sign at the equator due to the presence of a current sheet separating the two hemispheres. This drop is more pronounced and visible for the spinning black holes. The gas pressure \(p_{g}\) of the non-spinning black hole contains clear weak channels around \(\theta\simeq\frac{\pi}{3}\) and \(\theta\simeq\frac{2\pi}{3}\). These channels are also visible for the spinning black holes, but are less pronounced. As we previously discussed in Section 3.7, these channels are caused by the numerical floors applied in the regions closest to the pole, which are required to maintain numerical stability in our simulations. Figure 11: Left Panel: The \(1\sigma\) region of the jet boundary for \(a=-0.94,\ 0\) and \(0.94\). We use the magnetization parameter \(\sigma=1\) condition as the jet-disk boundary and derive the \(1\sigma\) variations and median of the polar angle \(\theta\) for \(\sigma=1\) at a given radius \(r\). The vertical lines are \(1\sigma\) variations and the dots are the median values. Right Panel: The kernel density estimation (KDE) of the polar angle \(\theta\) such that \(\sigma=1\) at radius \(r=10r_{g}\). The non-spinning black hole has the widest jet, corresponding to the largest \(\theta\), while the prograde disk with \(a=0.94\) has a wider jet than the retrograde disk with \(a=-0.94\). In addition, the non-spinning black hole has the narrowest peak, which suggests that the jet boundary of a non-spinning black hole is the most stable. Figure 12: The angular- and time-averaged gas pressure \(p\) (yellow) and magnetic pressure \(b^{2}\) (blue) for aM94b100, a0b100 and a94b100. We further show each component of the magnetic pressure, \(b_{r}^{2}\) (red), \(b_{\theta}^{2}\) (green) and \(b_{\phi}^{2}\) (purple). These values are calculated by Equation (18) and then time-averaged. The dashed (light blue) curve represents an \(r^{-2}\) dependence. Figure 13: The time and azimuthal averages of the magnetic pressure components, namely \(b_{r}^{2}\) (first column), \(b_{\theta}^{2}\) (second column), and \(b_{\phi}^{2}\) (third column), and of the gas pressure (last column) for \(a=-0.94\) (top), \(a=0\) (middle) and \(a=0.94\) (bottom). The color scaling is the same for all spins and all components to better display the differences. The most striking difference is the small contribution of the azimuthal component \(b^{\phi}\) to the total magnetic pressure for the non-spinning black hole \(a=0\). ## 4 Angular Momentum Flux It has long been argued that magnetic fields drive the transfer of angular momentum during accretion via the development of the magneto-rotational instability (Balbus & Hawley, 1991, 1998), but also by helping to launch a disk wind and to produce a strong jet, both components capable of significantly transporting angular momentum. In this section, we investigate the dependence of the angular momentum flux on the spin of the black hole.
We follow the definitions of Chatterjee & Narayan (2022) for the total angular momentum flux \(\dot{J}_{\rm total}\), the advected angular momentum flux \(\dot{J}_{\rm adv}\) and the angular momentum flux due to stresses \(\dot{J}_{\rm stress}\), \[\dot{J}_{\rm total}^{i}(r,\theta)=\left\langle T^{i}_{\,\phi}\right\rangle_{\phi,t} \tag{20}\] \[\dot{J}_{\rm adv}^{i}(r,\theta)=\left\langle\left(\rho+u_{g}+\frac{b^{2}}{2}\right)u^{i}\right\rangle_{\phi,t}\left\langle u_{\phi}\right\rangle_{\phi,t} \tag{21}\] \[\dot{J}_{\rm stress}^{i}(r,\theta)=\dot{J}_{\rm total}^{i}(r,\theta)-\dot{J}_{\rm adv}^{i}(r,\theta) \tag{22}\] where \(\langle X\rangle_{j}\) represents the average of \(X\) with respect to the variable \(j\). Note that for this section, there is no weighting by the density \(\rho\). We further decompose the stress-induced angular momentum flux into its Maxwell \(\dot{J}_{\rm stress,M}\) and its Reynolds \(\dot{J}_{\rm stress,R}\) components: \[\dot{J}_{\rm stress,M}^{i}(r,\theta)=\left\langle\frac{b^{2}}{2}u^{i}u_{\phi}-b^{i}b_{\phi}\right\rangle_{\phi,t} \tag{23}\] \[\dot{J}_{\rm stress,R}^{i}(r,\theta)=\left\langle\left(\rho+u_{g}+\frac{b^{2}}{2}\right)u^{i}u_{\phi}\right\rangle_{\phi,t}-\dot{J}_{\rm adv}^{i} \tag{24}\]
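The decomposition of Equations (20)-(24) amounts to a handful of averages over \(\phi\) and \(t\). The following sketch is our illustration of how it might be evaluated on simulation output; all array names are placeholders, and the index bookkeeping (raised \(u^{i},b^{i}\) versus lowered \(u_{\phi},b_{\phi}\)) is assumed to be handled upstream.

```python
import numpy as np

def angular_momentum_fluxes(rho, u_g, b2, u_i, u_phi, b_i, b_phi, T_i_phi):
    """Angular momentum fluxes of Equations (20)-(24) for one index i.

    Inputs are placeholder arrays with axes (time, phi, theta) at a
    fixed radius, so <.>_{phi,t} is a mean over axes (0, 1)."""
    avg = lambda x: x.mean(axis=(0, 1))
    w = rho + u_g + 0.5 * b2
    j_total = avg(T_i_phi)                                   # Eq. (20)
    j_adv = avg(w * u_i) * avg(u_phi)                        # Eq. (21)
    j_stress = j_total - j_adv                               # Eq. (22)
    j_maxwell = avg(0.5 * b2 * u_i * u_phi - b_i * b_phi)    # Eq. (23)
    j_reynolds = avg(w * u_i * u_phi) - j_adv                # Eq. (24)
    return j_total, j_adv, j_stress, j_maxwell, j_reynolds
```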
Following these definitions, the time- and \(\phi\)-averaged radial angular momentum fluxes of a94b100, a0b100 and aM94b100 are shown in Figure 14 at three different radii, namely \(10r_{g}\), \(15r_{g}\) and \(20r_{g}\). For the non-spinning black hole, depicted in the middle row of Figure 14, the results we obtain are similar to those reported in Chatterjee & Narayan (2022). The radial component of the total angular momentum flux \(\dot{J}_{\rm total}^{r}\) is negative in the disk region at all sampled radii, \(r=10r_{g}\), \(15r_{g}\), and \(20r_{g}\). This indicates that as the matter accretes, it brings net angular momentum to the black hole. However, at \(\theta\simeq 0.35\pi\) and \(0.65\pi\), the total angular momentum flux changes sign and becomes positive. This shows that angular momentum is being transported away from the accretion disk by the wind. The radial angular momentum flux converges to 0 as it approaches the poles, underlining the weakness of the jet for \(a=0\). In the disk, the contributions of \(\dot{J}_{\rm adv}^{r}\) and \(\dot{J}_{\rm stress,R}^{r}\) dominate and are always negative, suggesting that they are responsible for the inward transport of angular momentum in the disk, as was previously demonstrated by Chatterjee & Narayan (2022). We note that the relative importance of the Reynolds component is large at small radii but decreases towards large radii, in agreement with the results of Chatterjee & Narayan (2022) at \(r=20r_{g}\). Furthermore, the Maxwell stress component \(\dot{J}_{\rm stress,M}^{r}\) is always positive, with two maxima reached at \(\theta\simeq 0.4\pi\) and \(\theta\simeq 0.6\pi\). At these high latitudes, the radial component of the total angular momentum flux is dominated by \(\dot{J}_{\rm stress,M}^{r}\), underlining the importance of magnetic fields, via the production and dynamics of the wind above the disk, in the transport of angular momentum. The results of the black hole with positive spin \(a=0.94\) are displayed in the top row of Figure 14. We find this case to be qualitatively similar to the non-spinning black hole scenario: the radial component of the total angular momentum flux is negative in the disk and positive at high latitudes, with a change of sign at \(\theta\simeq 0.4\pi\) and \(\theta\simeq 0.6\pi\). Outside the disk, we find that the Maxwell component strongly dominates and reaches its maxima symmetrically with respect to the equator, at \(\theta\simeq 0.25\pi\) and \(\theta\simeq 0.75\pi\). The radial components of the angular momentum flux remain large up to the poles. This indicates the existence of powerful magnetized winds and jets, through which the disk and the black hole lose a substantial fraction of their angular momentum. We further see that the angular momentum flux contributed by advection is negative in the disk region. However, it is positive at high latitudes, which is different from the case of the non-spinning black hole. This may be due to the large contribution of the magnetic field energy density \(b^{2}/2\) in its expression. Finally, we find that the Reynolds stress component is much smaller than any of the other components and can be safely ignored. For the negative spin simulation \(a=-0.94\), displayed in the bottom row of Figure 14, we find that the angular distribution of the radial angular momentum flux is significantly distinct from that of the non-rotating and positive spin black holes. This is mostly due to the fact that the azimuthal velocity changes sign: the plasma corotates with the black hole close to the poles, while it corotates with the disk at the equator. First, we find that the radial component of the total angular momentum flux is negative at all angles. As a result, the black hole gains angular momentum and the magnitude of its spin decreases towards \(0\). This is consistent with the result presented in Figure 1. The contribution of the advection to the flux of angular momentum is similar to the cases of positive and null spins, namely its contribution is maximal and negative at the equator. The Maxwell component contribution is positive at the equator, showing that magnetic fields are responsible for carrying away radial angular momentum in this case as well. However, this contribution is negative around \(\theta\simeq 0.2\pi\) and \(0.8\pi\) and dominates the radial total angular momentum flux at those angles. The signs of the angular momentum flux are mainly determined by the signs of the azimuthal velocity \(u_{\phi}\) and of \(b^{r}b_{\phi}\). In Figure 15, we show the \(\phi\)- and time-averaged evolution of \(u_{\phi}\) and of \(b^{r}b_{\phi}\) with the polar angle \(\theta\) at the three radii selected for the angular momentum flux in Figure 14. For the \(a=0\) and \(a=0.94\) cases, the toroidal velocity \(u_{\phi}\) is positive for all angles \(\theta\), as expected; therefore, the sign of the advection contribution to the angular momentum flux \(\dot{J}_{\rm adv}\) depends on the sign of \(u^{r}\). On average, \(u^{r}\) is negative in the disk region since the matter is accreted towards the black hole. It is however positive at high latitudes, as matter is carried away in the disk wind and the jet. This change of sign explains the pattern of \(\dot{J}_{\rm adv}\) seen in Figure 14 for \(a=0.94\). For the non-spinning black hole, the azimuthal velocity quickly goes to \(0\) at high latitudes, which explains the null contribution of the advection to the angular momentum flux in this region.
On the other hand, the term \(b^{r}b_{\phi}\) is always negative at all angles \(\theta\) for the black hole with spin \(a=0.94\). In this case, the maxima (one in each hemisphere) of \(|b^{r}b_{\phi}|\) are reached closer to the poles than for the non-rotating black hole, further underlining the different disk structure: rotating black holes have thicker disks than non-rotating ones, as was already demonstrated in Section 3.7 and in Figures 10 and 11. The polar angles \(\theta\) at which the maxima of \(b^{r}b_{\phi}\) are reached correspond to the angles at which the contribution of the Maxwell stress is maximal. The actual sign of the component \(b^{r}b_{\phi}\) depends on our arbitrary choice of initial disk magnetization, and specifically on the orientation of the initial magnetic field loop. A different orientation would lead to a different sign of the \(b^{r}b_{\phi}\) component. Indeed, the sign of \(b_{\phi}\) is set by frame-dragging, and it is positive in the south hemisphere and negative in the north hemisphere for prograde disks. However, the sign of \(b^{r}\) would be opposite if the initial magnetic field loop had had a different orientation, as demonstrated in McKinney et al. (2012). We note here another difference between the \(a=0\) and \(a=0.94\) black holes: \(b^{r}b_{\phi}\) is negative close to the poles for the rotating black hole, while it is very small for the non-rotating black hole. The situation is different for the retrograde disk, with a different evolution of the azimuthal velocity \(u_{\phi}\) and of the term \(b^{r}b_{\phi}\) with polar angle \(\theta\), both shown in the third row of Figure 15 for \(a=-0.94\). The toroidal velocity \(u_{\phi}\) is also positive in the disk region. However, it becomes negative close to the poles, around \(\theta\simeq 0.2\pi\) and \(\theta\simeq 0.8\pi\). The term \(b^{r}b_{\phi}\) is also different from that of the prograde disk. First, it has a different sign close to the pole: \(b^{r}b_{\phi}\) is positive for a retrograde disk since (i) \(b_{\phi}\) is negative in the south hemisphere and positive in the north hemisphere because of frame dragging, and (ii) \(b^{r}\) is negative in the south hemisphere and positive in the north hemisphere. Here, the sign of \(b^{r}\) is set by the orientation of the initial magnetic field loop. At the equator, however, \(b^{r}b_{\phi}\) has the same sign for both prograde and retrograde disks. This explains the sign of the Maxwell stress contribution to the radial angular momentum flux. In order to understand how angular momentum is transported throughout the disk, we show in Figure 16 colormaps of the angular momentum flux modulus \[\dot{J}=\sqrt{(\dot{J}^{r})^{2}+(\dot{J}^{\theta})^{2}}, \tag{25}\] for the total, advection and Maxwell stress components, together with the associated streamlines. The top row shows the total angular momentum flux, while the middle and bottom rows show the contributions of the advection and Maxwell stress, respectively. The left, middle and right columns are for spins \(a=-0.94,~{}0,~{}0.94\), respectively. We note that the scale of the color coding is the same in each row to ease the comparison between the different black hole spins. Further, we point out that the equilibrium radius for these simulations is around \(r=30r_{g}\), prompting caution at larger radii.
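A Figure-16-style map combines the modulus of Equation (25) with the flux direction. The snippet below is a self-contained illustration in which toy fields stand in for the averaged \(\dot{J}^{r}\) and \(\dot{J}^{\theta}\); only Equation (25) is taken from the text, and all other choices (grid, fields, plot style) are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(1.5, 40.0, 128)
theta = np.linspace(0.05, np.pi - 0.05, 96)
R, TH = np.meshgrid(r, theta)
j_r = -np.sin(TH) / R**2          # toy fields, illustration only
j_th = 0.1 * np.cos(TH) / R**2

x, z = R * np.sin(TH), R * np.cos(TH)   # poloidal plane
modulus = np.hypot(j_r, j_th)           # Equation (25)
plt.pcolormesh(x, z, np.log10(modulus), shading="auto")
# Project the (r, theta) components onto Cartesian axes for the arrows;
# matplotlib streamlines would additionally require regridding.
jx = j_r * np.sin(TH) + j_th * np.cos(TH)
jz = j_r * np.cos(TH) - j_th * np.sin(TH)
step = (slice(None, None, 8), slice(None, None, 12))
plt.quiver(x[step], z[step], jx[step], jz[step])
plt.xlabel(r"$x/r_g$"); plt.ylabel(r"$z/r_g$")
plt.show()
```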
First, it is clear that each of these figures is antisymmetric with respect to the equator, as expected. We note that these figures are obtained directly from the data of our simulations, without imposing the symmetry as done by Chatterjee & Narayan (2022). From the top row, we further see that for all three spins, angular momentum is lost by the disk through the disk wind. There is one more difference between those figures: the angular momentum flux at the pole is negative for \(a=-0.94\) and \(a=0\), while it is positive for \(a=0.94\). Also, the contribution to the total angular momentum flux in this region appears to be small compared to the flux at the equator, as seen in Figure 14. In addition, Figure 16 shows the presence of a transition layer in the simulations with a negative and a null spin. In this transition layer, the angular momentum flux changes direction, being oriented outwards in the wind and inwards in the jet. The position of this transition layer is at a larger angle from the pole for \(a=-0.94\) than for \(a=0\). This transition layer also appears in the Maxwell stress contribution. The streamlines of the Maxwell stress contribution to the total momentum flux are shown in the last row of Figure 16. We find two interesting features. First, both the prograde and retrograde disks display a large Maxwell stress contribution to the angular momentum flux in the jet region, contrary to the non-spinning black hole. This underlines that the jets of a non-rotating black hole in the MAD regime are weak and do not transport a substantial amount of angular momentum (and in fact energy) to their surrounding environment. The largest outward contribution for the non-rotating black hole actually comes from the magnetized wind. It is also clear that the jet of the prograde disk is stronger than that of the retrograde black hole and deposits angular momentum faster into its environment. We further see a substantial Maxwell stress contribution in the wind, which shares the same characteristics as the total angular momentum flux, meaning it is weaker for retrograde disks. Finally, we find that for the retrograde disk, a region of nearly zero Maxwell stress contribution separates the wind region from the disk region. This transition is associated with the change of sign of the toroidal velocity \(u^{\phi}\), shown by the red line in Figure 16 for \(a=-0.94\). It is clear that the red line follows the transition region. ## 5 Conclusion In this work, we performed several GRMHD simulations of thick accretion disks in the MAD regime around black holes characterized by different spins \(a\) with cuHARM. Our key results can be summarized as follows:
1. We studied the angular momentum flux for our simulations with \(a=-0.94,\ 0,\ 0.94\) and underlined the differences. We found that a substantial amount of angular momentum is transported away in the magnetized wind, and that the Maxwell stresses are larger for rotating black holes than for non-rotating ones, as expected because of frame-dragging. In fact, the amount of angular momentum transported by the "jet" of the non-rotating black hole is small and negligible. These results were provided in Section 4 and are clearly displayed in Figure 16.
2. We did not find any correlation between the mass accretion rate and the MAD parameter. However, we did find an anti-correlation between their time derivatives, displayed for our simulations aM94b100 and a94b100 in Figure 7. We provided a heuristic explanation for this result in Section 3.4.
3. We underlined the difference in the magnetic field component strengths for spinning and non-spinning black holes in Section 3.8. From Figures 12 and 13, it is seen that the \(\theta\) component is always subdominant, while the relative importance of the \(\phi\) component depends on the spin of the black hole. In the non-rotating case, the toroidal component is negligible compared to the radial component close to the horizon, while these components are comparable for rotating black holes. The toroidal component even dominates very close to the horizon for \(a=0.94\). We therefore recover both results from Begelman et al. (2022) and Chatterjee & Narayan (2022) on the importance of the toroidal component in regulating the accretion, and confirm that the difference has its origin in the black hole spin.
4. We underlined the differences in the structure of the disks and the jets in the MAD state as a function of spin in Section 3.7. In particular, we found that retrograde disks are thicker than the corresponding prograde disks, in agreement with the findings of Narayan et al. (2022). Correspondingly, the jets of retrograde disks are narrower than the jets of prograde disks. These results are displayed in Figure 11.
5. We studied the MAD parameters and investigated their dependence on the spin and the initial magnetic field strength in Sections 3.2 and 3.6. We found that our numerical results are in very good agreement with those of Narayan et al. (2022), namely the MAD factor increases with spin until \(a=0.5\), after which it decreases. The fitting formula from Narayan et al. (2022), which we renormalized to account for the difference in definitions, accurately describes the distribution of \(\Phi_{B}\) with spin. Therefore, this result holds across resolution, simulation duration and numerical method. The results of this analysis are summarized in the top row of Figure 4. We also find that the MAD parameter does not have a strong dependence on the initial magnetic field strength parameter \(\beta_{0}\). It only takes longer for the simulation to reach the MAD state, namely for the magnetic field to saturate to its final value.
Once the MAD state is achieved, all simulations with different \(\beta_{0}\) share the same characteristics, as shown in Figure 8.
6. We attempted to identify a characteristic variability time in the MAD regime by studying the temporal variation of \(\Phi_{B}\) via the Fourier transform in Section 3.3. Contrary to the 2D case (see e.g. Chashkina et al., 2021), no clear period could be identified unambiguously. This is because in 3D simulations, accretion proceeds via non-axisymmetric instabilities, such as the interchange instability (Spruit et al., 1995; McKinney et al., 2012; Begelman et al., 2022). Yet, the typical variability time we find is around a few hundred to a thousand \(t_{g}\), comparable to the estimates of Lloyd-Ronning et al. (2016) and James et al. (2022). This time scale is also comparable to the time scale inferred by Wong et al. (2021), who studied the layer between the disk and the jet.
Our results shed light on the differences in the accretion dynamics of disks in the MAD state across spins, calling for more detailed analyses of the transport of angular momentum by jets and winds, as well as of the role of the toroidal magnetic field in (i) shaping the disk and jet and (ii) setting the norm of the MAD parameter, and thereby the efficiency of conversion between accretion luminosity and black hole spin energy deposited into the Poynting jets. Figure 14: The radial angular momentum flux components \(\dot{J}^{r}\) as a function of the polar angle \(\theta\) for a94b100 (top), a0b100 (middle) and aM94b100 (bottom) at 3 different radii, \(r=10,\ 15,\ 20r_{g}\). Figure 15: The toroidal component of the 4-velocity \(u^{\phi}\) and \(b^{r}b_{\phi}\) as a function of \(\theta\) for a94b100 (top), a0b100 (middle) and aM94b100 (bottom), at the radii at which the radial angular momentum fluxes are obtained. Figure 16: Time and \(\phi\)-averaged maps of the angular momentum flux of the simulations with spin \(a=-0.94\) (left), \(a=0\) (middle) and \(a=0.94\) (right). The color map represents the modulus of the angular momentum flux, \(\sqrt{(\dot{J}^{\theta})^{2}+(\dot{J}^{r})^{2}}\). The first row shows the total angular momentum flux, while the second and third rows show the contributions of the advection and Maxwell stress components, respectively. The color coding for each row is the same to ease the comparison. For \(a=-0.94\), we use red lines to denote the transition region where the toroidal velocity component \(u^{\phi}\) changes sign. It is seen that its position corresponds to the region in which the radial component of the Maxwell stress changes sign as well.
2305.02946
What Else Can Voronoi Diagrams Do For Diameter In Planar Graphs?
The Voronoi diagrams technique was introduced by Cabello to compute the diameter of planar graphs in subquadratic time. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations. 1. In the static case, we give $n^{3+o(1)}/D^2$ and $\tilde{O}(n\cdot D^2)$ time algorithms for computing the diameter of a planar graph $G$ with diameter $D$. These are faster than the state of the art $\tilde{O}(n^{5/3})$ when $D<n^{1/3}$ or $D>n^{2/3}$. 2. In the fault-tolerant setting, we give an $n^{7/3+o(1)}$ time algorithm for computing the diameter of $G\setminus \{e\}$ for every edge $e$ in $G$ (the replacement diameter problem). This should be compared with the naive $\tilde{O}(n^{8/3})$ time algorithm that runs the static algorithm for every edge. 3. In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time $n^{7/3+o(1)}$. This should be compared with the naive $\tilde{O}(n^{8/3})$ time algorithm that runs the static algorithm after every update. 4. We give a lower bound (conditioned on the SETH) ruling out an amortized $O(n^{1-\varepsilon})$ update time for maintaining the diameter in *weighted* planar graphs. The lower bound holds even for incremental or decremental updates. Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on boundaries of $r$-division), and a new reduction from incremental diameter to incremental distance oracles that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH.
Amir Abboud, Shay Mozes, Oren Weimann
2023-05-04T15:48:25Z
http://arxiv.org/abs/2305.02946v3
# What Else Can Voronoi Diagrams Do For Diameter In Planar Graphs? ###### Abstract The Voronoi diagrams technique, introduced by Cabello [SODA'17] to compute the diameter of planar graphs in subquadratic time, has revolutionized the field of distance computations in planar graphs. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs, as well as some new limitations. * In the static case, we give \(n^{3+o(1)}/D^{2}\) and \(\tilde{O}(n\cdot D^{2})\) time algorithms for computing the diameter of a planar graph \(G\) with diameter \(D\). These are faster than the state of the art \(\tilde{O}(n^{5/3})\) [SODA'18] when \(D<n^{1/3}\) or \(D>n^{2/3}\). * In the fault-tolerant setting, we give an \(n^{7/3+o(1)}\) time algorithm for computing the diameter of \(G\setminus\{e\}\) for every edge \(e\) in \(G\) (the replacement diameter problem). This should be compared with the naive \(\tilde{O}(n^{8/3})\) time algorithm that runs the static algorithm for every edge. * In the incremental setting, where we wish to maintain the diameter while adding edges, we present an algorithm with total running time \(n^{7/3+o(1)}\). This should be compared with the naive \(\tilde{O}(n^{8/3})\) time algorithm that runs the static algorithm after every update. * We give a lower bound (conditioned on the SETH) ruling out an amortized \(O(n^{1-\varepsilon})\) update time for maintaining the diameter in _weighted_ planar graphs. The lower bound holds even for incremental or decremental updates. Our upper bounds are obtained by novel uses and manipulations of Voronoi diagrams. These include maintaining the Voronoi diagram when edges of the graph are deleted, allowing the sites of the Voronoi diagram to lie on a BFS tree level (rather than on boundaries of \(r\)-division), and a new reduction from incremental diameter to incremental _distance oracles_ that could be of interest beyond planar graphs. Our lower bound is the first lower bound for a dynamic planar graph problem that is conditioned on the SETH. Planar graphs, diameter, dynamic graphs, fault tolerance
## 2012 ACM Subject Classification Theory of computation \(\rightarrow\) Shortest paths; Theory of computation \(\rightarrow\) Dynamic graph algorithms
## 1 Introduction The diameter problem asks to compute the largest distance in the graph. It is one of the most basic and extensively studied problems in the graph algorithms literature, and moreover, it is prominent in Fine-grained Complexity where it has driven the development of innovative hardness reductions [1, 4, 5, 9, 11, 17, 29, 36, 67]. Assuming the strong exponential time hypothesis (SETH), there is also no truly subquadratic algorithm for diameter [67, 5] in undirected, unweighted graphs with treewidth \(\Omega(\log n)\). For graphs of bounded treewidth, the diameter can be computed in near-linear time [5] (see also [41, 50] for algorithms with time bounds that depend on \(D\)). Near-linear time algorithms were developed for many other restricted graph families, see e.g. [14, 13, 31, 32, 33, 34, 40, 43, 49, 66]. One of the outstanding questions that has remained open despite a decade of major developments in algorithms and conditional lower bounds for graph problems is whether diameter can be solved in near-linear time in _planar graphs_. Until 2017, only logarithmic improvements over the natural \(O(n^{2})\) bound (of computing all-pairs shortest paths, APSP) had been known [23, 72]. The consensus was that truly subquadratic time is impossible, and the focus of the community was on proving a hardness result, e.g. under SETH. But then, in a celebrated paper, Cabello [22] gave a subquadratic \(\tilde{O}(n^{11/6})\) time algorithm, which was later improved to the current-best \(\tilde{O}(n^{5/3})\) bound [45]. The breakthrough in Cabello's work [22] is his novel use of _Voronoi Diagrams_ (VDs) in planar graph algorithms. This new machinery has revolutionized the field of distance computation problems in planar graphs and has led to several breakthroughs [26, 28, 35, 47, 63] including a surprising and almost-optimal _distance oracle_ - a problem that had hitherto seen many gradual improvements using different techniques, both in the exact [10, 21, 26, 30, 35, 39, 42, 46, 47, 57, 63, 64, 65, 66, 73] and the approximate [24, 48, 54, 55, 56, 69, 74] settings. Consequently, the main meta question occupying the minds of researchers in planar graph algorithms is: _what else can Voronoi diagrams do for us?_ ### Dynamic Planar Diameter It is natural to expect VDs to produce breakthroughs in the domain of _dynamic_ planar graphs. Dynamic data structures that support updates and queries to a graph have remarkable applications in theory (as a subroutine in static algorithms) and practice (for changing inputs). Many ingenious algorithms for basic problems in dynamic planar graphs have been developed in the last few decades, including connectivity, distances, and cuts [6, 18, 19, 25, 28, 37, 42, 51, 52, 53, 55, 57, 59, 62, 68, 69], but large (polynomial) gaps remain compared to the lower bounds [3]. Only a few of these works [27, 28] use VDs, and only in a limited way (they recompute the VD from scratch after every update).
It is clear that major advancements await if one is able to maintain the VD machinery _dynamically_ in a meaningful way. In this paper, we investigate this possibility by focusing on the diameter problem. The state-of-the-art algorithm recomputes the diameter from scratch after every update in time \(\tilde{O}(n^{5/3})\). This is not surprising since the only useful technique against diameter (in static graphs) is based on VDs, and we do not know how to make VDs dynamic. The first question that comes to mind is: suppose, optimistically, we could make VDs as dynamic as possible; _what time bound would we hope to get?_ Clearly, we cannot get \(O(n^{2/3-\varepsilon})\) time per update until we break the \(\tilde{O}(n^{5/3})\) bound for static graphs. Moreover, a conditional \(n^{2/3-o(1)}\) lower bound (under the APSP or Online Matrix-Vector Conjectures) follows from the reductions of Abboud and Dahlgaard [3]. So perhaps dynamic VDs would lead to a matching \(O(n^{2/3})\) upper bound? Our first result rules out this possibility with an \(n^{1-o(1)}\) lower bound under SETH. **Theorem (Lower Bound on Dynamic Diameter).** If the diameter of a dynamic undirected planar graph on \(n\) nodes can be maintained with \(O(n^{1-\varepsilon})\) amortized time per weight-change, then SETH is false. This holds even if the dynamic algorithm is allowed to preprocess the initial graph in \(poly(n)\) time, and even in the partially-dynamic setting where weights only increase or only decrease. Notably, this is the first lower bound for a dynamic planar graph problem that is based on the SETH (as opposed to other conjectures), and only the second example of such a result if we consider _static_ planar graph problems as well [2, 46]. Towards Dynamic Voronoi Diagrams. A large gap of \(n^{2/3}\) remains despite our lower bound, and it is likely that it can be closed if we can indeed make VDs dynamic.\({}^{1}\) In this paper, we take a small (but arguably the first) step towards this goal: we give an efficient algorithm for updating the VD after the deletion of one edge in the graph, much faster than recomputing it from scratch. (We refer to Section 5 for an overview and all the details.) This small step already has interesting applications. While it applies to general (weighted) planar graphs, the applications we have found only gain an advantage in unweighted planar graphs. Footnote 1: It is tempting to think that Theorem 3.2 implies a dynamic diameter algorithm with update time \(\tilde{O}(n^{1.6})\): use an \(r\)-division and maintain for each piece the DDG and bisectors. Upon an update of an edge in a piece \(P\), recompute the DDG of \(P\) (using MSSP) and the bisectors of \(P\) (using Theorem 3.2). For each vertex in the graph, recompute all additive weights using FR-Dijkstra, and compute the furthest vertex in each piece using Theorem 3.2. The caveat is that this approach does not properly handle the case where both endpoints of the diameter path belong to the same piece (not necessarily \(P\)). The reason is that the VD mechanism only handles paths that visit at least one boundary node. A concrete application is a faster algorithm for the replacement diameter: given a graph \(G\), return the diameter of \(G\setminus\{e\}\), the graph obtained by removing the edge \(e\), for all edges \(e\). The trivial algorithm for this problem makes \(O(n)\) calls to a static diameter algorithm, one for each edge, and achieves \(\tilde{O}(n^{8/3})\) running time.
We improve this upper bound by an \(n^{1/3}\) factor to \(n^{7/3+o(1)}\) by utilizing our efficient updates to VDs, along with other tricks that are also based on VDs (but not in a dynamic way). **Theorem (Replacement Diameter).** Given an unweighted undirected planar graph \(G=(V,E)\), there is an \(n^{7/3+o(1)}\) time algorithm that for every edge \(e\in E\) outputs the diameter of \(G^{e}=(V,E\setminus\{e\})\). An additional new result is a faster algorithm for diameter in the _incremental_ setting, where we start from an empty graph and need to maintain the diameter while \(O(n)\) edges are being added (without violating planarity). The trivial algorithm recomputes the diameter after every update in a total of \(\tilde{O}(n^{8/3})\) time, and we improve it to \(n^{7/3+o(1)}\). **Theorem (Incremental Diameter).** There is an algorithm that maintains the diameter of an unweighted undirected planar graph undergoing edge insertions in a total of \(n^{7/3+o(1)}\) time. This result is based on an elegant reduction from incremental diameter to incremental _distance oracles_ that could be of interest beyond planar graphs. Its analysis relies on recent works on _bipartite independent set_ queries introduced by Beame et al. [13]. ### Static Planar Diameter Back to diameter in static graphs, what else can we hope to get from VDs? Of course, the biggest open question is whether the \(n^{5/3}\) bound can be improved to \(n^{1+o(1)}\), or whether one can prove a super-linear lower bound. Toward this question, we would like to understand the hard/easy cases, and a natural parameter to consider is \(D\) - the diameter itself. One of the main algorithmic contributions of this paper, which is crucial to the aforementioned upper bounds, is an algorithm beating \(n^{5/3}\) when \(D\) is large (in the range \([n^{2/3+\varepsilon},n]\)). Notably, it implies that anyone seeking a tight conditional lower bound cannot use constructions with very large diameter. **Theorem 3.1 (Static Large Diameter).** The diameter can be computed in \(n^{3+o(1)}/D^{2}\) time on an unweighted undirected planar graph with diameter \(D\). Our new algorithm applies VDs in a novel way, where the VD sites lie on a BFS tree level, as opposed to lying on the boundary of pieces in an \(r\)-division. While our result is the first to address the large \(D\) case, the other extreme of small \(D\) has already been studied. Eppstein [41] gave the first near-linear time algorithm for constant \(D\), with an exponential dependence on \(D\). This dependency was later improved as a byproduct of new \((1+\varepsilon)\)-approximation algorithms for diameter [15, 24, 70, 41]. The state of the art is \(\tilde{O}(n\cdot D^{5})\) using the \((1+\varepsilon)\)-approximation \(\tilde{O}(n\cdot(1/\varepsilon)^{5})\)-time algorithm of Chan and Skrepetos [24] with \(\varepsilon=1/D\). The final result of this paper is an improved bound of \(\tilde{O}(nD^{2})\), which increases the range in which the \(n^{5/3}\) bound can be beaten from \(D<n^{2/15-\varepsilon}\) to \(D<n^{1/3-\varepsilon}\). **Theorem (Static Small Diameter).** The diameter can be computed in \(\tilde{O}(n\cdot D^{2})\) time on an unweighted undirected planar graph with diameter \(D\). Our algorithm exploits VDs in a more natural way than that of Chan and Skrepetos [24], if our goal is to solve the small \(D\) case exactly (recall that their focus is on approximations). It remains an interesting open question whether the \(\tilde{O}(n\cdot(1/\varepsilon)^{5})\) time approximation algorithm can be improved.
This is related to another challenge of computing _approximate VDs_ faster than exact ones, which we do not address in this paper. ## 2 Preliminaries A recursive decomposition tree \(\mathcal{T}\) of a planar graph \(G\) is the tree obtained (in linear time) by recursively separating \(G\) with a separator of size \(\sqrt{|G|}\). \(\mathcal{T}\) is a binary tree whose nodes correspond to subgraphs of \(G\) (called _pieces_), with the root being all of \(G\) and the leaves being pieces of constant size. We identify each piece \(P\) with the node representing it in \(\mathcal{T}\) (we can thus abuse notation and write \(P\in\mathcal{T}\)), and with its boundary \(\partial P\) (i.e. vertices that belong to some separator along the recursive decomposition used to obtain \(P\)). An important property for us (see e.g. [47, Lemma 3.1]) is that the sum of \(|P|\cdot|\partial P^{\prime}|\) over all pairs of siblings \(P,P^{\prime}\) in \(\mathcal{T}\) is \(\tilde{O}(n^{1.5})\). An \(r\)-_division_ [44] of a planar graph \(G\) is a decomposition of \(G\) into \(\Theta(n/r)\) pieces, each of them with \(O(r)\) vertices and \(O(\sqrt{r})\) boundary vertices (vertices shared with other pieces). It is possible to compute an \(r\)-division in \(O(n)\) time [58] with the useful property that the boundary vertices of each piece lie on a constant number of faces of the piece (called _holes_). The _dense distance graph_ (DDG) of a piece \(P\) is the complete graph over the boundary vertices of \(P\). The length of edge \(uv\) in the DDG of \(P\) equals the \(u\)-to-\(v\) distance inside \(P\). Note that the DDG of \(P\) is non-planar. The DDG of an \(r\)-division is the union of DDGs of all pieces of the \(r\)-division. Thus, the total number of vertices in the DDG is \(O(n/\sqrt{r})\), and the total number of edges is \(O(n)\). The DDG of an \(r\)-division can be computed in \(\tilde{O}(n)\) time using the MSSP algorithm [57]. Fakcharoenphol and Rao [42] described an \(\tilde{O}(n/\sqrt{r})\) time implementation of Dijkstra's algorithm (nicknamed FR-Dijkstra) on the DDG. The difficult case for computing the diameter is when the furthest pair of vertices lies in different pieces. Consider some source vertex \(s\) outside of some piece \(P\). For every boundary vertex \(u\) of \(P\), let \(d(u)\) denote the \(s\)-to-\(u\) distance in \(G\). The _additively weighted Voronoi diagram_ of \(P\) with respect to \(d(\cdot)\) is a partition of the vertices of \(P\) into pairwise disjoint sets (Voronoi cells), each associated with a unique boundary vertex (site) \(u\). The vertices in the cell \(\operatorname{Vor}(u)\) are all the vertices \(v\) of \(P\) such that \(u\) is the last boundary vertex of \(P\) on the shortest \(s\)-to-\(v\) path. In other words, every site \(u\) of \(P\) has _additive weight_ \(d(u)\), the _additive distance_ from a site \(u\) to a vertex \(v\) of \(P\) is defined as \(d(u)\) plus the length of the shortest \(u\)-to-\(v\) path inside \(P\), and the cell \(\operatorname{Vor}(u)\) contains all vertices \(v\) of \(P\) that are closer (w.r.t. additive distances) to \(u\) than to any other site. The _boundary_ \(\partial\operatorname{Vor}(u)\) of a cell \(\operatorname{Vor}(u)\) consists of all edges of \(P\) that have exactly one endpoint in \(\operatorname{Vor}(u)\). For example, in a Voronoi diagram of just two sites \(u\) and \(v\), the boundary of the cell \(\operatorname{Vor}(u)\) is a \(uv\)-cut and is therefore a cycle in the dual graph. This cycle is called the _uv-bisector_.
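For intuition, the cells of an additively weighted Voronoi diagram can be computed by a single multi-source Dijkstra in which every site \(u\) starts with tentative distance \(d(u)\). The Python sketch below is our illustration of this folklore computation (it is not the efficient representation of the theorem stated next); all names are ours, and ties between sites are broken arbitrarily.

```python
import heapq

def additive_voronoi(adj, sites):
    """Voronoi cells of an additively weighted Voronoi diagram.

    `adj[v]` lists (neighbor, edge length) pairs of the piece P, and
    `sites` maps each site u to its additive weight d(u). Returns, for
    every reachable vertex, its additive distance and its owning site."""
    dist, owner = {}, {}
    heap = [(w, u, u) for u, w in sites.items()]
    heapq.heapify(heap)
    while heap:
        d, site, v = heapq.heappop(heap)
        if v in dist:
            continue
        dist[v], owner[v] = d, site
        for w, length in adj[v]:
            if w not in dist:
                heapq.heappush(heap, (d + length, site, w))
    return dist, owner
```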
The complexity \(|\partial\operatorname{Vor}(u)|\) of a Voronoi cell \(\operatorname{Vor}(u)\) is the number of faces of \(P\) that contain vertices of \(\operatorname{Vor}(u)\) and of at least two more Voronoi cells. For every source \(s\), computing the furthest vertex from \(s\) in \(P\) thus boils down to computing, for each site \(u\), the furthest vertex (w.r.t. additive distance) from \(u\) in \(\operatorname{Vor}(u)\), and then returning the maximum value among all sites \(u\). **Theorem 3.2 ([45]).** Let \(P\) be an edge-weighted planar graph with \(r\) vertices. Let \(S\) be a set of \(b\) sites that lie on the boundaries of \(\tilde{O}(1)\) faces\({}^{2}\) of \(P\). The \(uv\)-bisectors of all pairs \(u,v\in S\) and all possible additive weights \(d(u),d(v)\) can be computed and represented in \(\tilde{O}(rb^{2})\) time and space. Then, given any additive weights \(d(\cdot)\) to \(S\), a representation of the Voronoi diagram w.r.t. these weights can be constructed in \(\tilde{O}(|S|)\) time. With this representation, for any site \(u\in S\) we can query the maximum distance from \(u\) to a vertex in \(\operatorname{Vor}(u)\) in \(\tilde{O}(|\partial\operatorname{Vor}(u)|)\) time. Footnote 2: Theorem 1.1 in [45] is phrased for a constant number of faces (called holes). However, as pointed out in footnote 8 in [45], the dependency of the running time on the number of holes is polynomial, so the theorem applies also to the case of a polylogarithmic number of holes. ## 3 Static Diameter ### An \(n^{3+o(1)}/D^{2}\) Algorithm In this subsection we prove Theorem 3.1, stating that the diameter can be computed in \(n^{3+o(1)}/D^{2}\) time on an unweighted undirected planar graph with diameter \(D\). We first present a randomized \(\tilde{O}(n^{4}/D^{3})\) time algorithm, and then show how to improve it to \(n^{3+o(1)}/D^{2}\). We then show how to derandomize both algorithms. We begin with two simple observations about BFS levels when the diameter is at least \(D\). **Observation 3.1.** Let \(s\) be any node in a graph of diameter \(\geq D\). Then at least one out of the \(D/2\) middle levels of the BFS tree rooted at \(s\) has size \(O(n/D)\). **Observation 3.2.** Let \(s\) be any node in \(G\) and let \(L_{i}\) be the set of nodes at level \(i\) in the BFS tree rooted at \(s\). Let \(G_{i}\) be the subgraph of \(G\) that is induced by \(\bigcup_{j\geq i}L_{j}\). Then for each connected component \(C\) of \(G_{i}\), the nodes in \(L_{i}\cap C\) lie on a single face. Proof. To see that the vertices of \(L_{i}\cap C\) all lie on the same face of \(G_{i}\), consider the embedding of the component \(C\) of \(G_{i}\) inherited from the embedding of \(G\). Viewing \(C\) as a graph obtained from \(G\) by deleting edges and vertices, one can start from any vertex of \(L_{i}\cap C\) and follow a curve in the plane that only goes through deleted edges and vertices until reaching the root \(s\) of the BFS tree. Hence all vertices of \(L_{i}\cap C\) lie on a single face of \(C\), and hence also of \(G_{i}\).
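Observation 3.1 is constructive: a single BFS suffices to find a sparse middle level. The following Python sketch is our illustration; the graph is a plain adjacency list, and tie-breaking among equally small levels is arbitrary.

```python
from collections import deque

def bfs_layers(adj, s):
    """Plain BFS; layers[k] holds the nodes at distance k from s."""
    level = {s: 0}
    layers = [[s]]
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                level[w] = level[v] + 1
                if level[w] == len(layers):
                    layers.append([])
                layers[level[w]].append(w)
                queue.append(w)
    return layers

def small_middle_level(adj, s):
    """Observation 3.1: among the middle levels of the BFS tree of s,
    return the index of a smallest one. Its size is O(n/D') because the
    ~D'/2 middle levels are disjoint and together contain at most n nodes."""
    layers = bfs_layers(adj, s)
    d = len(layers) - 1                      # depth D' of the BFS tree
    lo, hi = d // 4 + 1, (3 * d) // 4
    if hi < lo:                              # degenerate, tiny diameter
        return d // 2
    return min(range(lo, hi + 1), key=lambda i: len(layers[i]))
```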
Let \(S=L_{i}\) be a set of nodes at level \(i\) satisfying both \(D^{\prime}/4<i<3D^{\prime}/4\) and \(|S|=O(n/D^{\prime})=O(n/D)\); by Observation 1, such a set exists. Let \(G_{i}\) be the subgraph of \(G\) induced by \(\bigcup_{j\geq i}L_{j}\).
2. Compute \(d(v,b)\) for all \(v\in G\) and all \(b\in S\).
3. For each connected component \(C\) of \(G_{i}\): (a) compute all bisectors in \(C\) of the sites \(C\cap S\) (which lie on a single face by Observation 2); (b) for each node \(v\) in \(G\setminus G_{i}\), compute the VD of \(C\) w.r.t. the additive weights \(d(v,b)\), and compute the distance from \(v\) to its furthest vertex in every Voronoi cell of the VD.

**Running time.** The first step takes \(O(n)\) time by computing and traversing the BFS tree of \(s\). The second step takes \(O(n^{2}/D)\) time by doing a BFS from each vertex of \(S\) in \(O(n)\) time. The most expensive step is 3a. By Theorem 6, all bisectors of a connected component \(C\) can be computed in \(\tilde{O}(|C|\cdot|C\cap S|^{2})\) time. Over all connected components, this sums up to \(\tilde{O}(n\cdot(n/D)^{2})\) (since the \(C\)'s are disjoint and sum up to \(n\), and the sets \(C\cap S\) are disjoint and sum up to \(O(n/D)\)). Finally, in step 3b, for each vertex \(v\), computing \(v\)'s VD and furthest vertex in every cell takes \(\tilde{O}(|C\cap S|)\) time by Theorem 6. Over all connected components, this sums up to \(\tilde{O}(n/D)\), and thus over all vertices \(v\) to \(\tilde{O}(n^{2}/D)\). The total running time of the entire procedure is thus \(\tilde{O}(n\cdot(n/D)^{2})\), and since we repeat the procedure \(\tilde{O}(n/D)\) times we get \(\tilde{O}(n^{4}/D^{3})\).

**Correctness.** It remains to prove that the distance we return is indeed the diameter with high probability. Let \(x,y\) be the two endpoints of the diameter (i.e., \(D=d(x,y)\)). Then, the probability that a random source \(s\) satisfies \(d(s,x)\leq D^{\prime}/4\) and \(d(s,y)\geq 3D^{\prime}/4\) is at least \(D^{\prime}/(4n)\) (because this happens if \(s\) is one of the first \(D^{\prime}/4\) nodes on the path from \(x\) to \(y\)). Therefore, this happens with high probability for at least one of the sources \(s\) that we choose. For this \(s\), we will have that \(x\in G\setminus G_{i}\) while \(y\in G_{i}\) (it is impossible that \(y\in G\setminus G_{i}\) because then an \(x\)-to-\(y\) path through \(s\) would be shorter than \(D\)), and then the largest distance that we find is guaranteed to be \(d(x,y)\).

**Derandomization.** Observe that to derandomize the algorithm, it suffices to replace the sampling of sources with a (deterministic) selection of a set of sources \(\mathcal{S}\) of size \(O(n/D)\) such that a diameter endpoint \(x\) is at distance \(\leq D^{\prime}/4\) from at least one source \(s\in\mathcal{S}\). To construct \(\mathcal{S}\), pick an arbitrary source \(s\) and compute its BFS tree \(T\) of depth \(D^{\prime}\leq D\). Find a level \(L_{i}\) that has only \(O(n/D^{\prime})=O(n/D)\) nodes and \(0.4D^{\prime}\leq i\leq 0.5D^{\prime}\). Similarly, find a level \(L_{j}\) that has only \(O(n/D)\) nodes and \(0.8D^{\prime}\leq j\leq 0.9D^{\prime}\). The set of sources is then \(\mathcal{S}=\{s\}\cup L_{i}\cup L_{j}\). It is easy to verify that every vertex \(v\) in the graph has an ancestor or a descendant in \(T\) that belongs to \(\mathcal{S}\) and is at distance at most \(D^{\prime}/4\leq D/4\) from \(v\).
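The level selection behind Observation 1 and the derandomized source set are both elementary; the following is a minimal Python sketch (unweighted graph as an adjacency list, BFS-tree depth \(D^{\prime}\geq 10\) assumed so that the level ranges are nonempty):

```python
from collections import deque

def bfs_levels(adj, s):
    """BFS from s; levels[i] is the list of vertices at distance i from s."""
    dist = {s: 0}
    levels = [[s]]
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                if dist[u] == len(levels):
                    levels.append([])
                levels[dist[u]].append(u)
                q.append(u)
    return levels

def small_middle_level(levels):
    """Observation 1: among the levels strictly between D'/4 and 3D'/4,
    return the index of a smallest one.  It has O(n/D') vertices because
    the D'/2 middle levels together hold at most n vertices."""
    Dp = len(levels) - 1          # depth D' of the BFS tree
    return min(range(Dp // 4 + 1, 3 * Dp // 4), key=lambda i: len(levels[i]))

def deterministic_sources(adj, s):
    """Derandomized source set: {s} plus a small level in [0.4D', 0.5D'] and
    a small level in [0.8D', 0.9D']; every vertex then has an ancestor or
    descendant of the BFS tree in the set within distance D'/4."""
    levels = bfs_levels(adj, s)
    Dp = len(levels) - 1
    pick = lambda a, b: min(range(int(a * Dp), int(b * Dp) + 1),
                            key=lambda j: len(levels[j]))
    return [s] + levels[pick(0.4, 0.5)] + levels[pick(0.8, 0.9)]
```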
**A faster algorithm.** Next, we improve the running time to \(n^{3+o(1)}/D^{2}\). Again, we will start with a randomized algorithm and then derandomize. Let \(B_{\rho}(v)\) denote the ball with radius \(\rho\) around vertex \(v\). Recall that our goal is to sample w.h.p. a vertex \(s\) in \(B_{\tilde{D}/4}(x)\) (without knowing \(x\)), where \(x\) is a diameter endpoint. Let \(\rho=\tilde{D}/4\). In order to sample a vertex \(s\) in \(B_{\rho}(x)\) w.h.p., it suffices to randomly sample a set of \(\tilde{O}(n/|B_{\rho}(x)|)\) vertices (rather than sampling \(\tilde{O}(n/\rho)\) vertices as in the approach above). Then, for each sampled vertex \(s\), we can find a level \(L_{i}\) in the BFS tree of \(s\) with \(\rho<i\leq 2\rho\) s.t. \(|L_{i}|<|B_{2\rho}(s)|/\rho\) (rather than \(n/\rho\) as in the approach above). Then, executing the approach above (i.e., executing steps 2-3 of the \(\tilde{O}(n^{4}/D^{3})\) algorithm above) for a specific \(s\) would take time \(\tilde{O}(n(|B_{2\rho}(s)|/\rho)^{2})\) to compute all bisectors, \(\tilde{O}(n|B_{2\rho}(s)|/\rho)\) to compute all additive weights, and \(\tilde{O}(|B_{2\rho}(s)|^{2}/\rho)\) to construct the Voronoi diagrams for all vertices above level \(i\). We see that if \(|B_{\rho}(x)|\) is large then we gain because we have to sample fewer vertices, and if \(|B_{2\rho}(s)|\) is small then we gain because the amount of work for each sampled vertex decreases. For this approach to work, we need to (1) estimate \(|B_{\rho}(x)|\), and (2) relate \(|B_{\rho}(x)|\) and \(|B_{2\rho}(s)|\). To address (1), we simply estimate \(|B_{\rho}(x)|\) by enumerating all powers of two \(2^{k}\) for \(0\leq k\leq\log n\). To address (2), note that \(|B_{\rho}(x)|\leq|B_{2\rho}(s)|\leq|B_{3\rho}(x)|\) whenever \(s\in B_{\rho}(x)\), and that there must exist a \(j\in\{1,2,\ldots,\sqrt{\log_{3}n}\}\) s.t. \(|B_{3\rho_{j}}(x)|/|B_{\rho_{j}}(x)|<3^{\sqrt{\log_{3}n}}\), where \(\rho_{j}=3^{-j}\rho\) (if not, \(|B_{\rho}(x)|>n\), a contradiction).

The algorithm is therefore: For each \(1\leq j\leq\sqrt{\log_{3}n}\), let \(\rho_{j}=3^{-j}\rho\). For each \(0\leq k<\log n\) we sample \((n\log n)/2^{k}\) vertices \(s\) (reflecting our assumption that \(|B_{\rho_{j}}(x)|\leq 2^{k}\)). For each sampled vertex \(s\), if \(|B_{2\rho_{j}}(s)|>2^{k}3^{\sqrt{\log_{3}n}}\), then, since \(|B_{\rho_{j}}(x)|\leq|B_{2\rho_{j}}(s)|\leq|B_{3\rho_{j}}(x)|\) for \(s\in B_{\rho_{j}}(x)\), it must be that \(s\notin B_{\rho_{j}}(x)\) or \(|B_{\rho_{j}}(x)|>2^{k}\) or \(|B_{\rho_{j-1}}(x)|/|B_{\rho_{j}}(x)|>3^{\sqrt{\log_{3}n}}\) (the disjunction is not exclusive). Hence, in this case we discard \(s\) and move on to the next sampled vertex. Otherwise, \(|B_{2\rho_{j}}(s)|\leq 2^{k}3^{\sqrt{\log_{3}n}}\), and we can find a level \(L_{i}\) with \(\rho_{j}<i<2\rho_{j}\) in the BFS tree rooted at \(s\) s.t. \(|L_{i}|<2^{k}3^{\sqrt{\log_{3}n}}/\rho_{j}\), and continue as in steps 2-3 from the previous algorithm. The overall running time is

\[\sum_{j=0}^{\sqrt{\log_{3}n}}\sum_{k=0}^{\log n}\tilde{O}\left(\frac{n}{2^{k}}\left(n(2^{k}3^{\sqrt{\log_{3}n}}/\rho_{j})^{2}+n2^{k}3^{\sqrt{\log_{3}n}}/\rho_{j}+(2^{k}3^{\sqrt{\log_{3}n}})^{2}/\rho_{j}\right)\right)=n^{3+o(1)}/D^{2}.\]
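The schedule of scales \(j\) and ball-size guesses \(2^{k}\) can be summarized in a short Python skeleton. This is only a sketch: steps 2-3 of the slower algorithm are abstracted into a callback `handle`, and the constants are illustrative rather than those needed for the \(n^{3+o(1)}/D^{2}\) bound.

```python
import math, random
from collections import deque

def truncated_bfs_levels(adj, s, radius):
    """BFS from s truncated at the given radius; levels[i] = vertices at distance i."""
    dist = {s: 0}; levels = [[s]]; q = deque([s])
    while q:
        v = q.popleft()
        if dist[v] == radius:
            continue
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                if dist[u] == len(levels):
                    levels.append([])
                levels[dist[u]].append(u); q.append(u)
    return levels

def sampling_schedule(adj, n, rho, handle):
    """Skeleton of the n^{3+o(1)}/D^2 scheme: enumerate scales rho_j = 3^{-j} rho
    and guesses 2^k, sample sources, discard those whose 2*rho_j-ball is too
    large, and hand the survivors (with a sparse level) to `handle`."""
    V = list(adj)
    J = max(1, int(math.sqrt(math.log(n, 3))))
    boost = 3 ** J                                  # 3^{sqrt(log_3 n)} = n^{o(1)}
    for j in range(1, J + 1):
        rho_j = max(1, int(rho * 3 ** (-j)))
        for k in range(int(math.log2(n)) + 1):
            for s in random.choices(V, k=max(1, (n * int(math.log2(n))) // 2 ** k)):
                levels = truncated_bfs_levels(adj, s, 2 * rho_j)
                if sum(map(len, levels)) > 2 ** k * boost:
                    continue                        # discard: |B_{2 rho_j}(s)| too large
                i = min(range(rho_j + 1, min(2 * rho_j, len(levels))),
                        key=lambda t: len(levels[t]), default=None)
                if i is not None:
                    handle(s, i, levels)            # steps 2-3 on the sparse level L_i
```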
To argue correctness, note that for \(j\) such that \(|B_{\rho_{j-1}}(x)|/|B_{\rho_{j}}(x)|\leq 3^{\sqrt{\log_{3}n}}\) and \(k\) such that \(2^{k-1}\leq|B_{\rho_{j}}(x)|\leq 2^{k}\), sampling \((n\log n)/2^{k}\) vertices will yield with high probability a vertex \(s\in B_{\rho_{j}}(x)\), and this \(s\) will not be discarded. This \(s\) satisfies \(d(s,x)\leq\rho_{j}\) and \(d(s,y)\geq 2\rho_{j}\), so the largest distance found for this \(s\) is guaranteed to be \(d(x,y)\) by the same argument as in the correctness of the slower algorithm.

**Derandomization.** We use sparse neighborhood covers of Busch, Lafortune and Tirthapura [20] to derandomize the algorithm. A \(\rho\)-neighborhood cover \(Z\) of a graph \(G\) is a set of connected subgraphs called clusters, such that the union of all clusters is the vertex set of \(G\) and such that for each node \(v\in G\), there is some cluster \(C\in Z\) that contains \(B_{\rho}(v)\). The radius of a cover \(Z\) is the maximum radius of a cluster in \(Z\). The degree of a cover \(Z\) is the maximum number of clusters that a node in \(G\) is a part of. Busch et al. gave a deterministic \(O(n\log n)\)-time algorithm for computing, for any \(\rho>0\), a \(\rho\)-neighborhood cover of any connected planar graph with radius \(16\rho\) and degree \(18\). See also [60] for an \(O(n)\) time algorithm. To adjust the arguments we redefine \(\rho_{j}=\rho\cdot 33^{-j}\) for \(j=1,\ldots,\sqrt{\log_{33}(n)}\), and use the fact that for some \(j\), \(|B_{\rho_{j-1}}(x)|/|B_{\rho_{j}}(x)|<33^{\sqrt{\log_{33}n}}\). To avoid sampling in our algorithm, for each choice of \(j,k\), we compute a \(\rho_{j}\)-neighborhood cover \(Z\). We pick an arbitrary vertex \(s\) from each cluster \(C\) of \(Z\) such that \(|C|>2^{k}\). Since the degree of \(Z\) is \(18\), the number of vertices \(s\) we choose is at most \(18n/2^{k}\). If \(2^{k}<|B_{\rho_{j}}(x)|\leq 2^{k+1}\) then the cluster \(C\) containing \(B_{\rho_{j}}(x)\) will have \(|C|>2^{k}\) vertices, and we will choose a vertex \(s\in C\). Since the radius of \(Z\) is \(16\rho_{j}\), \(d(s,x)\leq 16\rho_{j}\). If \(|B_{17\rho_{j}}(s)|>2^{k+1}33^{\sqrt{\log_{33}n}}\), we discard \(s\). Since \(B_{17\rho_{j}}(s)\) is contained in \(B_{33\rho_{j}}(x)=B_{\rho_{j-1}}(x)\), we are guaranteed that some \(s\) will not be discarded. For such \(s\) we find a level \(L_{i}\) with \(16\rho_{j}<i<17\rho_{j}\) in the BFS tree rooted at \(s\) s.t. \(|L_{i}|<2^{k+1}33^{\sqrt{\log_{33}n}}/\rho_{j}\). The level of \(x\) in the BFS tree is at most \(16\rho_{j}\), and since \(\rho_{j}<\rho\leq D/4\), the vertex \(y\) such that \(d(x,y)=D\) is at level greater than \(i\) in the BFS tree. Hence, executing steps 2-3 of the procedure above will report the distance \(D\). The running time analysis is identical to that of the randomized version since we made sure that the number of vertices we choose in the derandomization is at most some fixed constant times the number of sampled vertices in the randomized algorithm.
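The cover-based source selection itself is a few lines once the neighborhood-cover routine of [20] is treated as a black box; a minimal sketch:

```python
def cover_sources(clusters, k):
    """Deterministic source selection from a rho_j-neighborhood cover.

    clusters: list of vertex lists, assumed to come from a neighborhood-cover
    routine such as Busch et al. [20] (radius 16*rho_j, degree 18), which we
    treat as a black box here.  One representative is picked from every
    cluster larger than 2^k; degree 18 bounds their number by 18*n/2^k.
    """
    return [cluster[0] for cluster in clusters if len(cluster) > 2 ** k]
```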
### An \(\tilde{O}(n\cdot D^{2})\) Algorithm

In this subsection we prove Theorem 5, stating that the diameter can be computed in \(\tilde{O}(n\cdot D^{2})\) time on an unweighted planar graph with diameter \(D\). We begin with some preliminaries on a recursive decomposition using shortest path separators.

**Preliminaries.** A _shortest path separator_ of a planar graph \(G\) is an undirected cycle \(C(G)\) consisting of a shortest \(s\)-to-\(u\) path, a shortest \(s\)-to-\(v\) path, and a single edge \(uv\), such that both the interior and exterior of the cycle contain at most \(2/3\) of the total number of faces of \(G\). Such a separator can be found in \(O(n)\) time [61]. By recursively separating \(G\) with shortest path separators (halting the recursion when we reach subgraphs of size \(\leq D\)), we obtain the _decomposition tree_ \(\mathcal{T}\). The root of \(\mathcal{T}\) corresponds to the entire graph \(G\). A node corresponding to subgraph \(P\) (we interchangeably refer to it as node \(P\)) has two children, whose subgraphs correspond to the interior and exterior of the separator \(C(P)\). Observe that for every node \(P\in\mathcal{T}\) the size of the shortest path separator \(C(P)\) is \(O(D)\). This is because \(C(P)\) consists of two shortest paths, each of length at most \(D\). Moreover, the boundary of \(P\) (vertices of \(P\) that have incident edges to vertices not in \(P\)) is included in the union of all \(C(P^{\prime})\) where \(P^{\prime}\) is an ancestor of \(P\), and is therefore of size \(O(D\log n)\) and lies on \(O(\log n)\) faces of \(P\). We compute the DDGs of every node (subgraph) \(P\in\mathcal{T}\) (i.e., compute a data structure that can report in \(\tilde{O}(1)\) time the distance in the graph \(P\) between any pair of boundary vertices of \(P\)) using \(O(\log n)\) executions of MSSP on \(P\). This takes \(\tilde{O}(n)\) time in total over the entire \(\mathcal{T}\). Now, given any vertex \(v\) in the subgraph \(P\), we can compute the distances in \(G\) from \(v\) to all boundary vertices of \(P\) in \(\tilde{O}(D)\) time using FR-Dijkstra. Namely, we initialize the \(\tilde{O}(D)\) boundary vertices of \(P\) to their distances from \(v\) in the graph \(P\) (via MSSP queries), and we run FR-Dijkstra on the union of the DDG of \(P\) and the DDGs of all \(P^{\prime}\) where \(P^{\prime}\) is a sibling of some ancestor of \(P\).

**The algorithm.** For every non-leaf node \(P\in\mathcal{T}\), we compute the furthest pair of vertices \(u,v\in P\) where \(u\) is internal to \(C(P)\) and \(v\) is external to \(C(P)\). Observe that distances must be taken in the entire graph \(G\) since the shortest \(u\)-to-\(v\) path may venture out of \(P\). To this end, we precompute all bisectors of every graph \(P\in\mathcal{T}\), with the sites being the \(\tilde{O}(D)\) boundary vertices of \(P\). Using Theorem 6, this takes \(\tilde{O}(|P|\cdot D^{2})\) time (where \(|P|\) denotes the size of the subgraph \(P\)), so over all of \(\mathcal{T}\) this takes \(\tilde{O}(n\cdot D^{2})\) time. (Observe that here we have used Theorem 6 with the sites lying on \(O(\log n)\) faces. As far as we know, in all prior uses of Theorem 6 the sites lie on \(O(1)\) faces.) Then, for every vertex \(v\in P\), we compute the distances in \(G\) from \(v\) to all boundary vertices of \(P\) using FR-Dijkstra in \(\tilde{O}(D)\) time as explained above. We then use these distances as additive weights and apply Theorem 6 to find the furthest vertex from \(v\) in \(P\). This also takes \(\tilde{O}(D)\) time, so overall \(\tilde{O}(n\cdot D)\). We handle the leaf nodes \(P\in\mathcal{T}\) explicitly (recall that \(|P|\leq D\)). For each leaf node \(P\) we compute the all-pairs shortest paths (APSP) in \(G\) between any two vertices \(u,v\in P\). This is done by running Dijkstra's standard algorithm from every \(v\in P\) on the graph \(P\), where the boundary vertices of \(P\) are initialized to their distances from \(v\) in \(G\) (that we have already computed as \(v\)'s additive weights). This takes \(\tilde{O}(D)\) time per \(v\), so \(\tilde{O}(D^{2})\) time per \(P\), and \(\tilde{O}(D^{2}\cdot n/D)=\tilde{O}(nD)\) over all leaves \(P\).
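The only algorithmic ingredient in the leaf handling is Dijkstra's algorithm with several vertices seeded at nonzero initial distances. A minimal sketch (integer vertex ids assumed; for a source \(v\) in a leaf piece \(P\), the seeds are \(\{v\mapsto 0\}\) together with \(b\mapsto d_{G}(v,b)\) for every boundary vertex \(b\)):

```python
import heapq

def dijkstra_with_seeds(adj, seeds):
    """Dijkstra where several vertices start at prescribed distances.

    adj:   the piece P as dict vertex -> list of (neighbor, weight)
    seeds: dict vertex -> initial distance (the source at 0, and the
           boundary vertices of P at their already-known distances in G)

    Walking only inside P, the returned distances equal distances in G,
    since any excursion out of P must re-enter through a seeded boundary vertex.
    """
    dist = dict(seeds)
    heap = [(d, v) for v, d in seeds.items()]
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for u, w in adj[v]:
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist
```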
## 4 A Lower Bound on Dynamic Diameter

In this section we prove Theorem 1. Namely, we give a conditional lower bound ruling out an amortized \(O(n^{1-\varepsilon})\) update time for maintaining the diameter of _weighted_ planar graphs that undergo a sequence of edge-weight updates. The proof is inspired by [3]; however, there are quite a few changes, since the reduction in [3] is from APSP (not SETH), to dynamic distance oracles (not dynamic diameter), and rules out \(O(n^{0.5-\varepsilon})\) update time (not \(O(n^{1-\varepsilon})\)). Our reduction is from the following problem, which is simply a recasting of the Orthogonal Vectors problem in the language of graphs.

**Graph OV.** Given an undirected tripartite graph \(G\) with parts \(A,C,B\), where \(|A|=|B|=n\), the middle part has size \(|C|=O(\log n)\), and all edges are in \(A\times C\) and \(C\times B\), decide if there exists a pair \(a_{i}\in A,b_{j}\in B\) such that \(d_{G}(a_{i},b_{j})>2\).

It is known that solving this Graph OV problem in \(O(n^{2-\varepsilon})\) time refutes SETH [67, 71]. Moreover, in the unbalanced version where \(|A|=n^{\alpha}\) and \(|B|=n^{\beta}\) for arbitrary constants \(\alpha,\beta>0\) we know that an \(O(n^{\alpha+\beta-\varepsilon})\) time algorithm refutes SETH.

**The structure of the reduction.** Given an instance \(G\) of the Graph OV problem, we construct a dynamic planar graph \(H\). The graph \(H\) is composed of two grids, a left grid and a right grid, each of dimension \(|C|=O(\log n)\) by \(|A|=n\). The columns of both grids are indexed by the nodes of \(A\), such that the top node of the \(i^{th}\) column in the left (resp. right) grid is called \(a_{i}\) (resp. \(a^{\prime}_{i}\)). The rows of the grids correspond to the nodes in \(C\) such that the rightmost (resp. leftmost) node in the \(k^{th}\) row of the left (resp. right) grid is called \(c_{k}\) (resp. \(c^{\prime}_{k}\)). In both grids, all horizontal edges have weight \(2|C|\). In the left grid, the vertical edges in column \(i\) have weight \(2i\), and in the right grid the vertical edges of column \(i\) have weight \(2(n-i)\). In the left grid, for every \(i\) and \(k\), if the edge \((a_{i},c_{k})\) exists in \(G\), then we add a diagonal edge \(e_{k}\) from vertex \((k-1,i)\) to vertex \((k,i+1)\) whose weight is \(2i+2|C|-1\). We call such an \(e_{k}\) a _shortcut_ edge (as it is shorter by \(1\) compared to the alternative path composed of a vertical edge followed by a horizontal edge). The two grids are connected by \(|C|\) edges: for each \(k\) we have an edge from \(c_{k}\) to \(c^{\prime}_{k}\) of weight \(2n|C|-2nk\). These \(|C|\) edges are the only edges in \(H\) whose weights will change throughout the reduction; all others will remain fixed. We add a single node \(x\) that is connected to all nodes in the top row of the left grid and all nodes of the top row in the right grid. We set the weight of every edge \((a_{i},x)\) to be \(i\cdot 4|C|\) and the weight of every edge \((x,a^{\prime}_{j})\) to be \((n-j)\cdot 4|C|\).

**The dynamic updates.** After constructing the initial graph \(H\) as above, for every \(j=1,\ldots,n\) we obtain a graph \(H_{j}\) by applying the following updates to \(H\): for every \(k=1,\ldots,|C|\), if the edge \((c_{k},b_{j})\) exists in \(G\), then decrease by \(1\) the weight of the edge \((c_{k},c^{\prime}_{k})\) in \(H\) (we refer to such an edge \((c_{k},c_{k}^{\prime})\) as a _decreased_ edge).
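For concreteness, the following Python sketch builds \(H\) from a Graph OV instance. The vertex-naming scheme (tuples `('L', k, i)` / `('R', k, i)` with row 0 as the top row) and the 1-indexing are our own conventions; the weights follow the description above.

```python
def build_H(n, c, edges_AC):
    """Static part of the lower-bound graph H.

    n:        |A| = |B|
    c:        |C| = O(log n)
    edges_AC: set of pairs (i, k), 1-indexed, meaning (a_i, c_k) in E(G)

    Vertices: ('L'/'R', row k in 0..c, column i in 1..n) plus the hub 'x';
    a_i = ('L', 0, i), a'_i = ('R', 0, i), c_k = ('L', k, n), c'_k = ('R', k, 1).
    Returns a dict-of-dicts weighted adjacency map.
    """
    H = {}
    def add(u, v, w):
        H.setdefault(u, {})[v] = w
        H.setdefault(v, {})[u] = w

    for side in ('L', 'R'):
        for k in range(c + 1):
            for i in range(1, n):                 # horizontal edges, weight 2|C|
                add((side, k, i), (side, k, i + 1), 2 * c)
        for i in range(1, n + 1):                 # vertical edges
            w = 2 * i if side == 'L' else 2 * (n - i)
            for k in range(c):
                add((side, k, i), (side, k + 1, i), w)

    for (i, k) in edges_AC:                       # shortcut (diagonal) edges
        if i < n:
            add(('L', k - 1, i), ('L', k, i + 1), 2 * i + 2 * c - 1)

    for k in range(1, c + 1):                     # the |C| connecting edges
        add(('L', k, n), ('R', k, 1), 2 * n * c - 2 * n * k)

    for i in range(1, n + 1):                     # the hub x
        add(('L', 0, i), 'x', 4 * c * i)
        add('x', ('R', 0, i), 4 * c * (n - i))
    return H

def apply_updates(H, j, c, n, edges_CB):
    """Turn H into H_j: decrease (c_k, c'_k) by 1 for every k with
    (c_k, b_j) in E(G).  (Revert before moving to the next j.)"""
    for k in range(1, c + 1):
        if (k, j) in edges_CB:
            u, v = ('L', k, n), ('R', k, 1)
            H[u][v] -= 1; H[v][u] -= 1
```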
The following main lemma shows that the diameter of \(H_{j}\) reveals whether or not there exists an \(a_{i}\in A\) such that \(d_{G}(a_{i},b_{j})>2\). Note, crucially, that we can generate all graphs \(H_{1},\ldots,H_{n}\) in sequence using only \(O(n\log n)\) updates, since \(H_{i}\) differs from \(H_{i-1}\) by only \(O(\log n)\) edge weights. Under SETH, we cannot maintain the diameter throughout this sequence in \(O(n^{2-\varepsilon})\) time. Therefore, each update cannot be done in \(O(n^{1-\varepsilon})\) amortized time, thus proving Theorem 2 for the fully-dynamic case. To get a proof for the incremental case where edge weights only decrease we can do the following (the decremental case is symmetric). Redefine the weights of the \(O(\log n)\) edges so that they only decrease during the sequence: add \(2(n-i)\) to their weight in \(H_{i}\), so that their weight is the largest in \(H_{1}\) and smallest in \(H_{n}\). Then, the sequence of graphs can be generated by only \(O(n\log n)\) decrease-weight updates. The diameter of \(H_{i}\) increases by exactly \(2(n-i)\), so the same analysis goes through. For simplicity, we continue the proof in this section with the construction in the fully-dynamic case.

**Lemma.** For any \(j\), the diameter of \(H_{j}\) is larger than \(4n|C|-2\) iff there exists \(a_{i}\in A\) such that \(d_{G}(a_{i},b_{j})>2\).

In the remainder of this section we prove the above lemma. First observe that by our choice of edge weights the diameter of \(H_{j}\) corresponds to some shortest \(a_{i}\)-to-\(a_{\ell}^{\prime}\) path. The following claim shows that in fact it is an \(a_{i}\)-to-\(a_{i}^{\prime}\) path.

**Claim.** For all \(i\neq\ell\), \(d_{H_{j}}(a_{i},a_{\ell}^{\prime})<4n|C|-2\).

Proof. If \(\ell>i\), then the path \(a_{i}-x-a_{\ell}^{\prime}\) consisting of two edges costs \((n-(\ell-i))\cdot 4|C|<4n|C|-2\). Otherwise \(\ell<i\), and then the \(a_{i}\)-to-\(a_{\ell}^{\prime}\) path that only uses horizontal edges costs \(2|C|(n-i+\ell)+2|C|\leq 2n|C|<4n|C|-2\).

The following claim concludes the proof.

**Claim.** For any \(i\), \(d_{H_{j}}(a_{i},a_{i}^{\prime})>4n|C|-2\) iff \(d_{G}(a_{i},b_{j})>2\).

Proof. Observe that the path \(a_{i}-x-a_{i}^{\prime}\) consisting of two edges costs \(4n|C|\). There may however be a shorter \(a_{i}\)-to-\(a_{i}^{\prime}\) path that passes through the grids. By our choice of edge weights (similarly to [3]) such a shortest path must start with an \(a_{i}\)-to-\(c_{k}\) prefix (for some \(k\leq|C|\)) in the left grid, then use the \((c_{k},c_{k}^{\prime})\) edge, then in the right grid do \(i\) horizontal steps followed by \(k\) vertical steps. Moreover, the \(a_{i}\)-to-\(c_{k}\) prefix starts with \(k-1\) vertical steps, then uses a shortcut edge \(e_{k}\) if it exists (otherwise it does a horizontal step followed by a vertical step), and then it does \(n-i-1\) horizontal steps until reaching \(c_{k}\). Suppose first that there were no shortcut edges and no decreased edges at all. The length of such an \(a_{i}\)-to-\(a_{i}^{\prime}\) path would then be

\[d_{H}(a_{i},a_{i}^{\prime})=k\cdot 2i+(n-i)\cdot 2|C|+(2n|C|-2nk)+i\cdot 2|C|+k\cdot 2(n-i)=4n|C|.\]

Note that this length \((4n|C|)\) is the same independent of \(k\) and of \(i\). Hence, the only way an \(a_{i}\)-to-\(a_{i}^{\prime}\) path can be shorter than \(4n|C|\) is by using shortcut edges and decreased edges. However, it can use at most one shortcut edge \(e_{k}\) and one decreased edge \((c_{k},c_{k}^{\prime})\).
So its length is \(4n|C|-2\) iff there exists a \(k\) such that the shortcut \(e_{k}\) exists (i.e., \((a_{i},c_{k})\in E(G)\)) and the edge \((c_{k},c_{k}^{\prime})\) is decreased (i.e., \((c_{k},b_{j})\in E(G)\)), and this holds iff \(d_{G}(a_{i},b_{j})\leq 2\).

By subdividing all edges, the above reduction implies that \(O(n^{1/2-\varepsilon})\) update time is impossible for maintaining the diameter of _unweighted_ planar graphs.

## 5 Decremental Voronoi diagrams and Replacement Diameter

**Overview: A Step Toward Dynamic Voronoi Diagrams.** The usefulness of Voronoi diagrams for diameter and distance reporting in static planar graphs makes it natural to ask whether one can efficiently maintain some useful representation of Voronoi diagrams in the dynamic setting. This seems challenging because a change in a single edge or in a single additive weight can cause the entire Voronoi diagram to completely change. For example, decreasing the weight of a single edge in the Voronoi cell \(\operatorname{Vor}(b)\) of some site \(b\) may cause an expansion of \(\operatorname{Vor}(b)\) at the expense of every other Voronoi cell, even cells that were not neighbors of \(\operatorname{Vor}(b)\) before the change. The same is true for decreasing the additive weight of \(b\). Indeed, the few attempts to use Voronoi diagrams in dynamic planar graphs that we are aware of [27, 28] all recompute the Voronoi diagrams from scratch upon every update. We make a small step towards dynamic Voronoi diagrams by developing a mechanism for updating Voronoi diagrams in the decremental setting. In our opinion, this is the most novel technical contribution of our work. The deletion of an edge in some part of the graph only causes an increase in the additive weights of certain sites. When the additive weight of site \(b\) increases, its Voronoi cell shrinks. Namely, some vertices that were in \(\operatorname{Vor}(b)\) before the increase will belong to Voronoi cells of other sites after the increase. The crucial observation is that the only relevant sites in this process are \(b\) and the sites of the cells neighboring \(\operatorname{Vor}(b)\) in the original Voronoi diagram. The time for the resulting update procedure is therefore proportional to the cell-degree of \(b\), rather than to the total number of sites in the Voronoi diagram. Unfortunately, the cell-degree of \(b\) may, in general, be as large as the number of sites. Nonetheless, this procedure turns out to be useful for the replacement diameter problem, where we can bound the number of times each site is affected by some edge deletion.

**Representing Voronoi diagrams.** Let \(P\) be a planar graph with real edge-lengths. Let \(S\) be the set of vertices (_sites_) that lie on \(\tilde{O}(1)\) faces, called holes. Recall that every site \(s\in S\) has an associated additive weight \(d(s)\). Consider the Voronoi diagram of \(P\) with sites \(S\) and additive weights \(d(s)\). Let \(P^{*}\) be the planar dual of \(P\). Let \(\operatorname{VD}_{0}\) be the subgraph of \(P^{*}\) consisting of the duals of edges \((u,v)\) of \(P\) such that \(u\) and \(v\) are in different Voronoi cells. Let \(\operatorname{VD}\) be the graph obtained from \(\operatorname{VD}_{0}\) after eliminating all degree-2 vertices by repeatedly contracting any one of their incident edges. The vertices of \(\operatorname{VD}\) are called _Voronoi vertices_ and the edges of \(\operatorname{VD}\) are called _Voronoi edges_.
Observe that every Voronoi edge corresponds to a consecutive segment of some bisector between two sites. Note that \(\operatorname{VD}\) may be disconnected, i.e., a planar map, and that the boundaries of faces of this planar map may be disconnected. Each face of \(\operatorname{VD}\) corresponds to a cell \(\operatorname{Vor}(s)\) of some site \(s\in S\). Hence there are at most \(|S|\) faces in \(\operatorname{VD}\). It is shown in [45] that the total number of edges, vertices, and faces of \(\operatorname{VD}\) is \(O(|S|)\). In what follows, when we say we compute a Voronoi diagram \(\operatorname{VD}\), we mean we use the algorithm in Theorem 6, which computes a representation of the planar map \(\operatorname{VD}\) defined above. Each Voronoi edge of \(\operatorname{VD}\) corresponds to a segment of a bisector.

### Maintaining Voronoi diagrams while additive weights increase

Consider an increase in the additive weight of a set \(B\subseteq S\) of sites. Such an increase can only change the shortest path (w.r.t. additive weights) of vertices \(v\) in the Voronoi cells of sites in \(B\). Either the shortest path to such a \(v\) remains the same but its length increases by the change in the additive weight of the site, or \(v\) becomes part of a Voronoi cell of a different site. In the latter case, since each shortest path is entirely contained in a single Voronoi cell, planarity dictates that the new site must be a neighbor of a site in \(B\). We define the set \(N(B)\) of neighbors of the sites in \(B\) as the set of sites that are either in \(B\) or sites whose Voronoi cells are adjacent to the Voronoi cells of sites in \(B\). Note that \(|N(B)|=O(\sum_{b\in B}\text{cell-degree}(b))\). It follows from the discussion above that the only sites whose Voronoi cells might change as a result of such an increase are those in \(N(B)\). To compute the new Voronoi diagram we first compute the Voronoi diagram of \(P\) with just the sites \(N(B)\). By Theorem 6, this takes \(\tilde{O}(\sum_{b\in B}\text{cell-degree}(b))\) time. Let \(\text{VD}^{\prime}\) denote this Voronoi diagram, and let \(\text{VD}\) denote the Voronoi diagram of \(P\) before the change. We stress that the additive weights of \(\text{VD}^{\prime}\) are the ones after the increase, and the additive weights of \(\text{VD}\) are the ones before the increase. To obtain the Voronoi diagram of \(P\) after the change, we "glue" together parts of \(\text{VD}^{\prime}\) and \(\text{VD}\) as follows. See Figure 1 for an illustration. Recall that \(\text{VD}\) is a (possibly disconnected) planar map whose edges correspond to segments of bisectors of pairs of sites of \(\text{VD}\). The endpoints of these segments are Voronoi vertices of \(\text{VD}\). We start by deleting from \(\text{VD}\) all the Voronoi edges corresponding to bisectors involving at least one site of \(B\). For every Voronoi vertex \(v\) incident to a Voronoi cell of a site in \(B\), if all the Voronoi edges incident to \(v\) were deleted, then we delete \(v\) as well. Let \(\mathcal{E}\) denote the set of Voronoi edges \(e\) of \(\text{VD}\) such that \(e\) is incident to some Voronoi vertex \(v\), \(e\) was not deleted, but the preceding or following Voronoi edge of \(e\) in the cyclic order of edges around \(v\) was deleted. Every Voronoi edge \(e\in\mathcal{E}\) corresponds to a segment \(\beta\) of a bisector between two sites \(s_{1},s_{2}\in N(B)\setminus B\).
Since the additive weights of these sites are unchanged, the segment \(\beta\) must be represented by a Voronoi edge \(e^{\prime}\) of \(\text{VD}^{\prime}\). Note that \(\beta\) may be a sub-segment of the bisector segment \(\beta^{\prime}\) corresponding to \(e^{\prime}\). Also note that it is easy to identify \(e^{\prime}\) with \(e\) during the computation of \(\text{VD}^{\prime}\) with no asymptotic time overhead.3 For each Voronoi edge \(e\in\mathcal{E}\) (of \(\text{VD}\)), we split its corresponding Voronoi edge \(e^{\prime}\) (of \(\text{VD}^{\prime}\)) into two edges \(e^{\prime}_{1},e^{\prime}_{2}\) by breaking \(\beta^{\prime}\) into two sub-segments at \(v\). Suppose \(e^{\prime}_{2}\) is the one whose corresponding bisector segment contains \(\beta\). Note that if \(v\) is an endpoint of \(e^{\prime}\) (i.e., if \(v\) is a Voronoi vertex of \(\text{VD}^{\prime}\) as well), then \(e^{\prime}_{1}\) is a trivial empty segment of the \(s_{1}\)-\(s_{2}\) bisector. We delete \(e^{\prime}_{2}\) from \(\text{VD}^{\prime}\), and merge the Voronoi edges \(e\) of \(\text{VD}\) and \(e^{\prime}_{1}\) of \(\text{VD}^{\prime}\) into a single Voronoi edge whose corresponding segment is the concatenation of the segment \(\beta\) of \(e\) and the segment of \(e^{\prime}_{1}\). Doing so for the edges \(e\in\mathcal{E}\) effectively "glues" the relevant portion of \(\mathrm{VD}^{\prime}\) into \(\mathrm{VD}\), replacing the portion of \(\mathrm{VD}\) that we had deleted. The algorithm of [45] for constructing Voronoi diagrams from precomputed bisectors performs similar stitching and gluing operations, and the data structures used to represent Voronoi diagrams and bisectors support all the necessary operations in \(\tilde{O}(1)\) time per operation. Hence, the time complexity of this entire procedure is proportional (up to logarithmic factors) to the number of Voronoi vertices of the Voronoi cells of the sites in \(B\), which is \(O(\sum_{b\in B}\text{cell-degree}(b))\).

Figure 1: An illustration of the process of computing the Voronoi diagram of a piece with 6 sites when the additive weight of site 1 is increased. (a) \(\text{VD}\), the Voronoi diagram of all 6 sites before the weight increase. (b) \(\text{VD}^{\prime}\), the Voronoi diagram of just the increased site (1) and its neighbors (2, 4, 6), after the increase. (c) \(\text{VD}\) and \(\text{VD}^{\prime}\) superimposed, with the edges deleted from \(\text{VD}\) and from \(\text{VD}^{\prime}\) in grey. Observe that all segments of bisectors between cells of the neighbors (2, 4, 6) that appear in \(\text{VD}\) also appear in \(\text{VD}^{\prime}\). (d) The glued Voronoi diagram.

### Replacement Diameter

We now describe how to use the new algorithm for maintaining Voronoi diagrams under additive weight increases to get a faster algorithm for replacement diameter. The algorithm starts by computing a complete recursive decomposition tree \(\mathcal{T}\) of the graph \(G\). For every node (piece) in \(\mathcal{T}\) (corresponding to a subgraph of \(G\)) we compute all its bisectors. This takes \(\tilde{O}(n^{2})\) time over all of \(\mathcal{T}\) using Theorem 6. Then, for every vertex \(s\in V\) we compute the BFS tree \(T_{s}\) of \(s\) in \(G\) and compute the _fault-tolerant single source distance oracle_ of Baswana et al. [12] for \(s\) in \(G\).
This oracle is constructed in \(\tilde{O}(n)\) time from \(G\), and can report in \(\tilde{O}(1)\) time the \(s\)-to-\(t\) distance in the graph \(G^{e}=(V,E\setminus\{e\})\) for any \(t\in V\) and any \(e\in E\). Overall, this also takes \(\tilde{O}(n^{2})\) time. For each piece \(P\in\mathcal{T}\) and each boundary vertex \(b\in\partial P\) we create the _induced tree_ \(T_{b}^{P}\) from \(T_{b}\) by marking all vertices of \(P\) and all their lowest common ancestors, and contracting any edge whose endpoints are not marked. The resulting \(T_{b}^{P}\) has size \(O(|P|)\). For each edge \(e\) of \(G\) that was contracted in the process we store the edge of \(T_{b}^{P}\) into which \(e\) was contracted. Since the total number of boundary vertices and the total size of the pieces over all of \(\mathcal{T}\) are \(\tilde{O}(n)\), the total time to construct all these induced trees is \(\tilde{O}(n^{2})\).

For each piece \(P\in\mathcal{T}\), let \(P^{\prime}\) be the sibling of \(P\) in \(\mathcal{T}\). Let \(b_{1},b_{2},\dots\) be the vertices of \(\partial P^{\prime}\) in some arbitrary order. For each vertex \(s\in P\) we compute the additively weighted Voronoi diagram of \(s\) w.r.t. \(P^{\prime}\) with sites \(\{b_{i}\}\) and additive weights \(d(s,b_{i})\). We also store for \(s\) a binary search tree (BST) over \(b_{1},b_{2},\dots\), where node \(i\) in the tree stores the distance from \(s\) to the furthest vertex in \(\operatorname{Vor}(b_{i})\). This takes \(\tilde{O}(n\sqrt{n})\) time in total over all \(P\in\mathcal{T}\) and all \(s\in P\). For each piece \(P\) with vertices \(v_{1},v_{2},\dots\) in arbitrary order, we store a BST over \(\{v_{i}\}\), where node \(i\) stores the furthest vertex from \(v_{i}\) in \(P^{\prime}\). This vertex can be found in \(\tilde{O}(1)\) time for each \(v_{i}\) by querying the maximum distance stored in the BST of \(v_{i}\).

For every edge \(e\in E\), we need to compute the furthest pair of vertices in the graph \(G^{e}=(V,E\setminus\{e\})\). For an edge \(e\in E\) and two vertices \(u,v\in G\), we say that the pair \(u,v\) is _affected_ in \(G^{e}\) if \(e\) lies on the root-to-\(v\) path in \(T_{u}\). The main idea is to use the fact that a specific pair \(u,v\) is affected in at most \(D\) (rather than \(n\)) graphs \(G^{e}\) (since the shortest \(u\)-to-\(v\) path in \(G\) has at most \(D\) edges). For every affected pair \((u,v)\) there is some pair of sibling pieces \((P,P^{\prime})\) s.t. \(u\in P\) and \(v\in P^{\prime}\). Our strategy is to go over pairs of sibling pieces \((P,P^{\prime})\) in \(\mathcal{T}\), and handle all affected pairs for each \((P,P^{\prime})\) together as follows. Assume w.l.o.g. that \(e\notin P^{\prime}\). For each \(b\in\partial P^{\prime}\), we enumerate in \(T_{b}^{P}\) all the descendant vertices of the edge of \(T_{b}^{P}\) into which \(e\) was contracted (this may be an empty set if \(e\notin T_{b}\)). This way we identify all the affected pairs of the form \((u,b)\), where \(u\in P\) and \(b\in\partial P^{\prime}\). We query the Baswana et al. oracle for the \(u\)-to-\(b\) distance in \(G^{e}\) for each such affected pair. For each \(u\in P\), let \(B\) be the set of boundary vertices \(b\) such that \((u,b)\) is an affected pair. For each vertex \(u\in P\) with \(|B|\geq 1\), we update the Voronoi diagram of \(u\) w.r.t. \(P^{\prime}\) using the procedure Decremental-VD, which is described in subsection 5.1.
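As an aside, the induced trees \(T_{b}^{P}\) used above are an instance of the standard "virtual tree" construction. A minimal Python sketch, assuming parent pointers, depths, and preorder indices of \(T_{b}\) are precomputed (the naive parent-walking LCA is for illustration only; the stated bounds need an \(\tilde{O}(1)\)-time LCA):

```python
def induced_tree(parent, depth, pre, marked):
    """Tree induced in T_b by `marked` (the vertices of a piece P) together
    with LCAs of preorder-consecutive marked pairs; paths of unmarked
    vertices are contracted to single edges.  parent[root] == root assumed.
    Returns child -> parent pointers over O(|marked|) nodes."""
    def lca(u, v):                       # naive LCA by walking parent pointers
        while depth[u] > depth[v]: u = parent[u]
        while depth[v] > depth[u]: v = parent[v]
        while u != v: u, v = parent[u], parent[v]
        return u

    nodes = sorted(set(marked), key=pre.__getitem__)
    nodes += [lca(a, b) for a, b in zip(nodes, nodes[1:])]
    nodes = sorted(set(nodes), key=pre.__getitem__)

    par, stack = {}, []                  # stack = rightmost root-to-leaf path so far
    for v in nodes:
        if stack:
            w = lca(stack[-1], v)
            while len(stack) > 1 and depth[stack[-2]] >= depth[w]:
                par[stack[-1]] = stack[-2]; stack.pop()
            if depth[stack[-1]] > depth[w]:
                par[stack[-1]] = w; stack.pop()
            if not stack or stack[-1] != w:
                stack.append(w)
        stack.append(v)
    while len(stack) > 1:
        par[stack[-1]] = stack[-2]; stack.pop()
    return par
```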
The Decremental-VD procedure updates the \(\mathrm{VD}\) (and the furthest vertex from each site) w.r.t. the new additive weights in \(\tilde{O}(\sum_{b\in B}\text{cell-degree}(b))\) time, where cell-degree is the number of Voronoi cells that are adjacent4 to the cell \(\operatorname{Vor}(b)\) in the original VD (i.e., before the deletion of \(e\)). Using the updated VD, we update the node corresponding to every \(b\in B\) in the BST of \(u\) with the new furthest vertex in \(\operatorname{Vor}(b)\). Let \(d\) be the maximum distance stored in the entire BST of \(u\). We update the node corresponding to \(u\) in the BST of \(P\) with the value \(d\). After handling all \(u\in P\) with \(|B|\geq 1\) in this way, the maximum value stored in the entire BST of \(P\) is the maximum distance in \(G^{e}\) between any pair of vertices \((u,v)\) with \(u\in P\) and \(v\in P^{\prime}\). Taking the maximum over all pairs of siblings \((P,P^{\prime})\in\mathcal{T}\) gives the diameter of \(G^{e}\).

Footnote 4: Two cells are adjacent if there exists an edge \(e\) of \(G\) with one endpoint in each cell.

The total running time for computing the furthest pairs for the siblings \((P,P^{\prime})\) is analyzed as follows. The bottleneck is the time to update the VDs. Every time a pair \(u,b\) (where \(u\in P\) and \(b\in\partial P^{\prime}\)) is affected we spend \(\tilde{O}(\text{cell-degree}(b))\) time updating the VD of \(u\). Since each pair is affected by the deletion of at most \(D\) edges, the total time invested in updating VDs for \((P,P^{\prime})\) is bounded by \(\sum_{u\in P,b\in\partial P^{\prime}}D\cdot\text{cell-degree}(b)=|P|D\sum_{b}\text{cell-degree}(b)\), which is \(\tilde{O}(|P|D\cdot|\partial P^{\prime}|)\), since the sum of cell-degrees of the cells in a VD is of the order of the number of sites of the VD. Summing over all pairs of sibling pieces we get that the total time is \(\sum_{(P,P^{\prime})\in\mathcal{T}}\tilde{O}(|P|D\cdot|\partial P^{\prime}|)=\tilde{O}(n^{1.5}D)\). Hence, including the preprocessing, the total time for the entire replacement diameter algorithm is \(\tilde{O}(n^{2}+n^{1.5}D)\). We note that when \(D\geq n^{5/6}\), it is better to naively run the static \(n^{3+o(1)}/D^{2}\)-time algorithm from Section 3.1 for each edge failure, for a total of \(n^{4+o(1)}/D^{2}\) time over all \(O(n)\) failures. Hence, replacement diameter can be solved in \(\min(n^{4+o(1)}/D^{2},\tilde{O}(n^{2}+n^{1.5}D))=n^{7/3+o(1)}\) time.

## 6 Incremental Diameter

In this section we prove Theorem 3. Namely, we present a general reduction showing how to solve the diameter problem efficiently in _incremental_ graphs given two components: (1) a distance oracle for incremental graphs, and (2) a diameter algorithm for static graphs that is relatively fast when the diameter is large. Plugging in the incremental distance oracle of Das et al. [37] and the static algorithm of Section 3.1, we obtain an algorithm with total time \(n^{7/3+o(1)}\), which improves over the naive bound of \(\tilde{O}(n^{8/3})\). The new algorithm of this section comes closer to the \(n^{2-o(1)}\) lower bound of Section 4 for weighted graphs (the best lower bound for unweighted graphs is \(n^{1.5-o(1)}\)). The rest of this section is dedicated to proving this theorem. We begin by presenting the general reduction (which assumes neither planarity nor unweighted edges) and then explain how it can be combined with existing algorithms for planar graphs to obtain the theorem.
**A reduction from diameter to \(s,t\)-shortest path.** In an incremental graph, the diameter decreases with time, starting from some \(D\leq n\) (otherwise the graph is not connected, and it is easy to check this efficiently) and ending at some \(D\geq 1\). The idea for the reduction is simple: we would like to recompute the diameter only when it decreases, and not after each of the \(n\) updates. While it is true that the diameter could decrease \(\Omega(n)\) times, from \(n\) to \(1\), the point is that re-computation is efficient when the diameter is large (due to the \(n^{3+o(1)}/D^{2}\) algorithm of Section 3.1), and only \(O(D)\) of the re-computations will happen when the diameter is smaller than \(D\). Our incremental algorithm works as follows:

1. **Sample a new diameter pair:** Let \(P=\{(s,t)\mid d(s,t)=\Delta(G)\}\) be the set of pairs that realize the current diameter \(\Delta(G)\). Sample a pair \((s^{\prime},t^{\prime})\) from \(P\) uniformly at random (or from some distribution in which every pair is sampled with probability at most \(O(1/|P|)\)).
2. **Monitor the distance of the sampled pair:** Using an incremental distance oracle, monitor the distance between \(s^{\prime}\) and \(t^{\prime}\) throughout the sequence of edge insertions. Do nothing (except querying the oracle) as long as \(d(s^{\prime},t^{\prime})\) does not decrease; in which case it is still the correct diameter of the graph and can be output whenever there is a query. If a new edge causes \(d(s^{\prime},t^{\prime})\) to decrease, go back to Step 1.

Each of the two steps involves one of the two ingredients in our reduction. Step 2 utilizes an incremental distance oracle, while Step 1 uses a static diameter algorithm _that can also sample a diameter pair_. At the end of this section we give a general reduction from the latter approximate sampling problem to the problem of finding the largest distance from each node in the graph (i.e., computing all eccentricities). Alternatively, one could notice that the diameter algorithms we will employ in Step 1 (and many other natural diameter algorithms) can be modified to also sample a diameter pair uniformly at random.

**Running time.** Let us first bound the number of times we go to Step 1, which is the most costly step since it involves a static diameter computation. Step 2 is actually very cheap since we only perform one update and one query to an incremental distance oracle.

**Claim 14.** For any (non-adaptive) sequence of edge insertions that does not decrease the diameter of the graph, the expected number of times our algorithm samples a diameter pair (i.e., goes to Step 1) is \(O(\log n)\).

Proof. Let us first analyze the idealistic case in which we manage to sample truly uniformly, and then point out that the same analysis essentially goes through when we sample almost uniformly. Each new edge \(e\) decreases the distance for a subset of pairs \(X_{e}\subseteq P\). Since the special pair \((s^{\prime},t^{\prime})\) is completely unknown to the adversary who is choosing the sequence of edge insertions, the probability that \(e\) causes the algorithm to go to Step 1 is exactly \(|X_{e}|/|P|\), and in that case the new set of "diameter pairs" becomes \(P\setminus X_{e}\). Therefore, the expected number of times we sample can be upper bounded by \(f(|P|)\leq\max_{0\leq x\leq|P|}\left(x/|P|+f(|P|-x)\right)=O(\log|P|)\).
If the sampling in Step 1 is only approximately uniform, but still satisfies that a pair is chosen with probability at most \(O(1/|P|)\), then the same analysis goes through, up to an additional \(O(1)\) factor.

Let \(T^{Diam}(n,D)\) denote the running time of a static diameter algorithm that samples a diameter pair as in Step 1, when the diameter of the graph is \(D\). Over all the \(O(n)\) edge insertions, the total expected running time of Step 1 is therefore at most \(\sum_{D=1}^{n}\log n\cdot T^{Diam}(n,D)\). To obtain our claimed upper bound of \(n^{7/3+o(1)}\) we will use two diameter algorithms inside this reduction: the \(T^{Diam}(n,D)=n^{3+o(1)}/D^{2}\) algorithm from Section 3.1 (for large \(D\)) and the \(T^{Diam}(n,D)=\tilde{O}(n^{5/3})\) algorithm of [45] (for small \(D\)). (By the reduction in Section 6.1, these algorithms can also sample an approximately uniform pair as required by Step 1.) The total expected time becomes:

\[\sum_{D=1}^{n}\log n\cdot T^{Diam}(n,D)=\tilde{O}\left(\sum_{D=1}^{n^{2/3}}n^{5/3}+\sum_{D=n^{2/3}}^{n}n^{3+o(1)}/D^{2}\right)=n^{7/3+o(1)},\]

because \(\sum_{D=n^{2/3}}^{n}n^{3+o(1)}/D^{2}\leq\sum_{i=\log_{2}n^{2/3}}^{\log_{2}n}2^{i+1}\cdot n^{3+o(1)}/(2^{i})^{2}\leq\frac{n^{3+o(1)}}{n^{2/3}}\cdot 2\log n\). The additional time of Step 2 is at most \(n\cdot\sqrt{n}\) using the incremental distance oracle of Das et al. [37] that has \(O(\sqrt{n})\) time per update and query.

### Sampling a Diameter Pair

In this subsection we show the final piece of the incremental diameter algorithm, namely, a way to adapt the aforementioned static diameter algorithms so that they sample a diameter pair approximately uniformly. A first attempt, which does not quite work, is to add a random "perturbation" \(p_{e}\in(0,\varepsilon)\) to the weight of each edge \(e\), where \(\varepsilon<1/D\), and then argue that the (probably unique) pair realizing the diameter in the new graph is a uniformly random pair in \(P=\{(s,t)\mid d(s,t)=\Delta(G)\}\). Note that the perturbations increase the distance between all pairs by \(<1\), and therefore non-diameter pairs cannot become diameter pairs. One issue, however, is that pairs with many paths of length \(\Delta(G)\) between them are more likely to be chosen than pairs with few such paths. A second attempt that resolves this issue is to add a perturbation to the nodes (e.g., by appending a private leaf to each node with a random weight on the new edge). This idea is closer to the actual solution but it still has an issue of correlations: a node that participates in many pairs might be sampled less frequently than a node that participates in few pairs. Therefore, we must take this difference into account when assigning the weights. Making these ideas go through is a bit complicated. Fortunately, there is an elegant reduction from our setting to the _bipartite independent set_ query model introduced by Beame et al. [13], which lets us use existing results on this model [7, 16, 38] in a black-box way.

**Theorem.** There is an algorithm that samples a pair in \(P=\{(s,t)\mid d(s,t)=\Delta(G)\}\), where each pair is sampled with probability at most \(O(1/|P|)\), and runs in time \(\min(\tilde{O}(n^{5/3}),n^{3+o(1)}/D^{2})\) on unweighted planar graphs of diameter \(D\).

The main lemma towards proving the theorem is the following.
**Lemma.** By making \(\log^{O(1)}n\) calls to an algorithm that returns all eccentricities, we can sample a pair in \(P=\{(s,t)\mid d(s,t)=\Delta(G)\}\) where each pair is sampled with probability at most \(O(1/|P|)\).

Proof. Consider an implicit graph \(H\) in which there is an edge between two nodes \(s,t\) iff they are a diameter pair in \(G\) (i.e., \((s,t)\in P\)). Our goal is to sample an edge from \(H\) approximately uniformly. This can be achieved [7, 16, 38] by making a polylogarithmic number of queries to an oracle that, given two subsets \(L,R\subseteq V(H)\), decides whether there is any edge in \((L\times R)\cap E(H)\). This is called a bipartite independent set oracle in the literature, following Beame et al. [13]. Thus, all we have to do is show that such a query can be supported in the time of a call to an algorithm that computes all eccentricities in the graph. First, we precompute the diameter \(\Delta(G)\) of \(G\). Then, given a query \(L,R\subseteq V\), we construct a graph \(G^{\prime}\) from \(G\) as follows. For each node \(v\in R\) we add a new "leaf" node \(l_{v}\) and connect it with an edge (of weight 1) to \(v\). Next, we compute the eccentricity of all nodes in \(G^{\prime}\). Finally, the answer to the query is yes if and only if there is a \(u\in L\) such that the eccentricity of \(u\) in \(G^{\prime}\) is \(\Delta(G)+1\); this can be checked in \(O(n)\) time. The correctness of the answer follows from the observation that the eccentricity of any node \(u\) in \(G^{\prime}\) is \(\Delta(G)+1\) if and only if there is a node \(v\) in \(G\) such that (1) \(d_{G}(u,v)=\Delta(G)\) and (2) a new leaf node \(l_{v}\) was appended to \(v\). This implies that (1) \((u,v)\in P\) is a diameter pair in \(G\), meaning that \((u,v)\in E(H)\), and that (2) \(v\in R\). Since we only check for \(u\in L\), our answer that \((L\times R)\cap E(H)\) is non-empty is correct.
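The oracle in the proof is easy to implement given any all-eccentricities routine. A minimal Python sketch, in which a naive BFS from every vertex stands in for the efficient planar all-eccentricities algorithm (so it illustrates correctness, not the running time; the graph is assumed connected and unweighted):

```python
from collections import deque

def eccentricities(adj):
    """All eccentricities by one BFS per vertex (naive stand-in)."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        ecc[s] = max(dist.values())
    return ecc

def bis_oracle(adj, L, R, diam):
    """Bipartite independent set oracle for the diameter-pair graph H:
    append a leaf to every v in R and test whether some u in L reaches
    eccentricity diam + 1 in the modified graph."""
    g = {v: list(nbrs) for v, nbrs in adj.items()}
    for v in R:
        leaf = ("leaf", v)          # fresh vertex name for the appended leaf
        g.setdefault(leaf, []).append(v)
        g[v].append(leaf)
    ecc = eccentricities(g)
    return any(ecc[u] == diam + 1 for u in L)
```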
# Where you go is who you are - A study on machine learning based semantic privacy attacks

###### Abstract

Concerns about data privacy are omnipresent, given the increasing usage of digital applications and their underlying business model that includes selling user data. Location data are particularly sensitive since they allow us to infer activity patterns and interests of users, e.g., by categorizing visited locations based on nearby points of interest (POI). On top of that, machine learning methods provide new powerful tools to interpret big data. In light of these considerations, we raise the following question: What is the actual risk that realistic, machine learning based privacy attacks can obtain meaningful semantic information from raw location data, subject to inaccuracies in the data? In response, we present a systematic analysis of two attack scenarios, namely location categorization and user profiling. Experiments on the Foursquare dataset and tracking data demonstrate the potential for abuse of high-quality spatial information, leading to a significant privacy loss even with location inaccuracy of up to 200m. With location obfuscation of more than 1 km, spatial information hardly adds any value, but a high privacy risk solely from temporal information remains. The availability of public context data such as POIs plays a key role in inference based on spatial information. Our findings point out the risks of ever-growing databases of tracking data and spatial context data, which policymakers should consider for privacy regulations, and which could guide individuals in their personal location protection measures.

_location privacy, place labelling, semantic privacy, human mobility_

## Introduction

In the age of big data, an unprecedented amount of information about individuals is publicly available. Not only can the information from social media profiles be exploited to gain rich insights into the private life of individuals, but also data that is collected by applications on-the-fly. Collecting and selling such data has become a business model of commercial consumer data brokers, who distribute individual data of users, oftentimes without their awareness [14]. A particularly popular source is location data, as the whereabouts of people allow rich insights into their daily activities [36, 5, 20, 58], for example, for the purpose of profiling. Even though awareness of (location) privacy has increased in recent years [2], this is oftentimes not reflected in user behavior, which has been termed the "privacy paradox" [63, 7]. Only gradually are companies reacting to imposed privacy regulations and the efforts of privacy advocacy groups [27]. For example, Apple\({}^{\text{TM}}\) is giving back control over data-sharing decisions in the iPhone\({}^{\text{TM}}\), including location data1, and Strava\({}^{\text{TM}}\) offers to restrict track visibility in their app for recording physical activities.2 The simplest way to protect location data is a form of masking or obfuscation of the exact geographic coordinates [44]; i.e., deliberately reducing the data quality [19]. While hiding the exact location may provide some anonymity, the risk of unwanted _semantic_ inference from the raw location data remains. For example, if a user is detected in a busy city district at night, it is very likely that the user is in a bar or club. This type of inference was recently termed a "semantic privacy attack" [75], in contrast to previous work on location privacy that was mainly concerned with user _re-identification_ attacks [17, 67, 49, 50, 45].
In this work, we define and analyze a special type of semantic privacy attack that is motivated by the real-life problem that brokers obtain location data and sell them as valuable information about user behavior, for example, for targeted advertising or for insurance policy offers. Here, we disregard how an adversary would _obtain_ location data but instead focus on the question of how he would _derive meaningful user profiles_ from the raw location data of a single user. We argue that a smart attacker would tackle this problem by utilizing spatial and temporal information for categorizing the locations that a user has visited, drawing from methods developed in research on reverse geocoding [43, 12, 22, 1, 47, 62], activity categorization [61, 77, 69, 57, 16, 23] and place labeling [83, 40, 18]. For example, if the location data indicates a two-hour stay in a place with many bars nearby, the attacker may derive that the activity falls into the category "Nightlife". In a second step, the attacker could aggregate the (predicted) categories of all locations that a user visited into a location-based user profile. For example, the profile is 60% "Dining", 30% "Retail", and 10% "Nightlife". In short, we consider the following two semantic attack scenarios:

**Task 1**: Given a location visit defined by geographic coordinates and a visitation time, the attacker aims to assign the place to the correct category.

**Task 2**: Given the location visitation pattern of a user, the attacker aims to derive a user profile, defined as the visitation frequencies to each of the location categories.

To the best of our knowledge, this type of location-based user profiling has not been regarded as a privacy attack, and similar definitions for user profiles are mainly found in the literature on recommender systems [78, 85]. Note that if these tasks are feasible, the attacker would not only know about activity frequencies but also about when and where each type of activity is preferably carried out. The input data of the attacker is assumed to consist only of geographic coordinates and timestamps. Such data could stem from GNSS tracking data, from Call-Detail-Records [86], or other forms of movement data. According to [41], "an individual's level of geoprivacy cannot be reliably assessed because it is impossible to know what auxiliary information a third party may have access to." (p. 11). However, one can attempt to quantify the level of privacy by simulating realistic scenarios and measuring the accuracy of the attacker [71, 70]. By realistic, we mean that an attacker tries to enrich the raw data with as much information as possible and employs sophisticated algorithms to analyze patterns in such information. We believe that there is a lack of work analyzing 1) which spatial and temporal information may be exploited, 2) how the data quality, as well as the level of intended inaccuracy due to location protection measures, affects an attacker's accuracy, and 3) what the relation is to the density and quality of spatial context data, e.g., public POIs. We therefore evaluate the effectiveness of machine learning based semantic privacy attacks in different scenarios with respect to the information available to the attacker and, similar to [25], vary the data accuracy by means of random perturbations of the location.
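Task 2 is essentially an aggregation of Task 1's outputs. A minimal sketch (the category names are illustrative, not the Foursquare taxonomy):

```python
from collections import Counter

def user_profile(predicted_categories):
    """Turn the (predicted) categories of a user's location visits
    into visitation frequencies per category (Task 2)."""
    counts = Counter(predicted_categories)
    total = sum(counts.values())
    return {cat: cnt / total for cat, cnt in counts.items()}

# e.g. {'Dining': 0.6, 'Retail': 0.3, 'Nightlife': 0.1}
print(user_profile(['Dining'] * 6 + ['Retail'] * 3 + ['Nightlife']))
```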
## Related work

### Reverse geocoding and activity categorization

Many studies utilize a well-known dataset of location check-ins from the Location-based Social Network (LBSN) Foursquare, which is very suitable due to its size, its detailed POI categorization taxonomy, and the availability of user-wise check-in data. The POIs and visitation patterns were analyzed for recommender system applications [84], for deriving interpretable latent representations of venues [3], or for inferring urban land use via clustering of POI data [24]. Yang et al. [81] train models on the Foursquare dataset to infer spatio-temporal activity preferences of users for the purpose of place recommendation. In this work, we take a machine learning viewpoint and regard the Foursquare data as a labeled dataset that is suitable to model the real-life scenario where an attacker aims to categorize the locations of an _unseen_ user. However, it was shown that not only spatial but also temporal information about location visits can be exploited to infer location categories [52]. This has been reported implicitly in other work; for example, Do and Gatica-Perez [18] study the problem of automatic place labeling with 10 categories, leveraging visitation patterns, e.g., temporal features (start and end time or duration) and visitation frequency from smartphone data. McKenzie et al. [54] connect this observation to geoprivacy research by showing that temporal information or texts from social media posts can be exploited for inference about user locations by matching their semantic signatures [39, 53]. While our study is on location categorization and user profiling, in contrast to user localization, their study inspired us to include temporal features in the attack scenario and to contrast their effect on the attacker's success to the one due to spatial information. Furthermore, work on user profiling from location data (our second attack task) can mainly be found in the literature on recommender systems, which is surveyed in [6]. The POI embedding of users can be viewed as their location profile, for example, with graph-based embeddings [78]. Ying et al. [85] compare users by their "semantic trajectory", defined as the categories of sequentially visited places. We follow their approach but disregard the order of places.

### Location privacy research

Privacy risks and potential privacy preservation techniques were studied extensively in the past years [64]. In location privacy research, it was found that a few track points are sufficient to uniquely identify users [17, 67, 28], that it is possible to track people just by the speed and starting location [26] or by accelerometer readings [34], and that even topological representations of movement data without coordinates can be exploited to match users [49, 50]. A common aim of many works is to maintain the performance of a location-based service while providing privacy guarantees; i.e., to optimize the privacy-utility trade-off [72, 9]. Various frameworks for protecting sensitive location data were proposed [19, 55, 10, 68, 56], oftentimes based on k-anonymity [73, 31, 33] or \(\epsilon\)-differential privacy [35, 4, 21, 38]. For an overview of possible privacy attacks on location data and protection methods we refer to the reviews by Kounadi et al. [44] and Wernke et al. [76]. This work instead analyzes privacy attacks that aim to reveal personal information, i.e., interests and behavioural patterns.
Related work in this direction, for example, investigates to what extent demographics (e.g., age or gender) and visited POIs can be derived from location traces [46]. Crandall et al. [15] and Olteanu et al. [59] analyze co-location events and the risk of inferring social ties. Tu et al. [75] recently termed the inference of private semantic information from movement trajectories a "semantic" privacy attack, and they specifically regard contextual POI data as semantics. We build on their definition and consider attacks that aim to infer POI categories. Tu et al. [75] propose l-diversity and t-closeness measures to protect trajectories from semantic inference. However, these approaches rely on trusted third-party (TTP) services that mask the data of multiple users and update their data iteratively in online applications [60, 42]. Omitting the dependence on a TTP is possible, for example, with simple location obfuscation methods, i.e., adding random noise to coordinates or systematically translating geographic coordinates in space [19, 4]. Zhang et al. [88] and Gotz et al. [29] further propose context-aware masking techniques that are applicable to new users. Here, we do not aim to compare location protection methods, but to quantify the risks of realistic semantic privacy attacks without access to a TTP service. Thus, we utilize location obfuscation mainly as a tool for modelling reduced data quality in real-world scenarios. As proposed by Shokri [70], we evaluate the attacker's accuracy to quantify privacy loss.

## Experimental design

We take a machine learning viewpoint and assume that the attacker aims to learn a mapping from visited locations to categories. The available data are a time series of location visits of a new user \(u\). We group the raw data by location in order to gather temporal information about the visitation patterns at each location. The dataset \(D_{u}\) for one user \(u\) can be formalized as

\[D_{u}=\{\left(l_{i}^{u},[t_{1}(l_{i}^{u}),t_{2}(l_{i}^{u}),\ldots]\right)\mid l_{i}^{u}\in L_{u}\}=\{\left(l_{i}^{u},T_{u}(l_{i}^{u})\right)\mid l_{i}^{u}\in L_{u}\} \tag{1}\]

where \(L_{u}\) is the set of all locations visited by the user \(u\), \(l_{i}^{u}\) is one location in \(L_{u}\), and \(t_{j}(l_{i}^{u})\) is the time of the \(j\)-th visit of user \(u\) to location \(l_{i}^{u}\). For simplicity, we abbreviate the ordered list of visit times as \(T_{u}(l_{i}^{u})\). Furthermore, we assume there exists an unambiguous mapping \(c:L\longrightarrow C\) from each location to a category from a predefined location-category set \(C\). For example, \(C=\{\) Dining, Sports, Shopping\(\}\) and the categories for user \(u\) are \(c(l_{1}^{u})=\) Shopping, \(c(l_{2}^{u})=\) Dining, etc. The attacker aims to learn a model \(\hat{c}\) that approximates the true mapping \(c\).
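To make the data model of Eq. (1) concrete, the following minimal Python sketch groups a user's raw visit records into the per-location structure \(D_{u}\). The function and variable names are our own illustrative choices and are not part of the study's released code.

```python
from collections import defaultdict
from datetime import datetime

def build_user_dataset(raw_visits):
    """Group one user's raw visit records into the per-location form of Eq. (1).

    raw_visits: iterable of (location_id, timestamp) pairs for a single user u.
    Returns a dict mapping each visited location l_i^u to its chronologically
    ordered visit-time list T_u(l_i^u).
    """
    visits_by_location = defaultdict(list)
    for location_id, t in raw_visits:
        visits_by_location[location_id].append(t)
    return {loc: sorted(times) for loc, times in visits_by_location.items()}

# Hypothetical example: one user with two visited locations.
D_u = build_user_dataset([
    ("loc_1", datetime(2012, 4, 12, 19, 30)),
    ("loc_2", datetime(2012, 4, 13, 12, 0)),
    ("loc_1", datetime(2012, 4, 14, 20, 15)),
])
```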
The most straightforward approach for \(\hat{c}\) is a spatial nearest neighbor join with a public POI dataset; i.e., if the spatially closest POI is a restaurant, then \(\hat{c}(l_{i}^{u})=\) Dining. More sophisticated methods could pool the spatial and temporal information and frame \(\hat{c}\) as a machine learning model. Here, we simulate the latter via the XGBoost (XGB) algorithm [13]. XGB is a tree-based boosting method that was repeatedly shown to outperform Neural Networks on tabular data [30] and is known to perform particularly well in classification tasks with unbalanced data, as is the case here. We also chose XGB for its interpretability and because it was empirically superior to a multi-layer perceptron approach in our tests (see Methods - Machine learning model). Together, we consider the following attack scenarios:

* **Spatial join:** For each user-location \(l_{i}^{u}\), the category of the public POI that is closest to its geographic location \((x(l_{i}^{u}),y(l_{i}^{u}))\) is assigned.
* **XGB temporal:** The model is trained only on features derived from the visit times \(T_{u}(l_{i}^{u})\) (see Methods - Temporal features).
* **XGB spatial:** The model is trained only on spatial features (see Methods - Spatial features). No temporal visit information is considered, only coordinates and publicly available POI data.
* **XGB spatiotemporal:** The model is trained on all available features, i.e., features derived from \((x(l_{i}^{u}),y(l_{i}^{u}))\) and \(T_{u}(l_{i}^{u})\) as well as available POI data.

In addition, we report the results for an uninformed attacker, where the predictions are drawn randomly from a categorical distribution, with the class probabilities corresponding to the class frequencies in the training data.

In our experimental setup, we take an ML perspective and simulate the attack on _new_ users via a train-test data split. Evaluating the accuracy of this attack requires a _labeled_ dataset \(\mathcal{D}\) of user-location pairs \(l_{i}^{u}\); i.e., the location category \(c(l_{i}^{u})\) must be _known_. GNSS tracking datasets usually do not provide detailed and reliable place labels. Instead, we found a public dataset from the location-based social network Foursquare most suitable for these experiments, since location visits are given as check-ins to places of known categories. The dataset was already used for related tasks [84, 3, 81, 24], but without regard to privacy aspects. The places are categorized into 12 distinct classes according to the Foursquare place taxonomy (see Figure 3 for the list of categories and section Methods - Data and preprocessing for details). Additionally, we also use the Foursquare places as public POI data that may be exploited by the attacker as auxiliary spatial context data.

Figure 1 provides a visual overview of the experimental setup. The input data (geographic coordinates and time points) are enriched with spatial and temporal features. Before computing spatial features, the location is _obfuscated_ within a varying radius \(r\) to simulate GNSS inaccuracies and possible privacy protection measures (see Methods - Location masking). Then, the data is split into train and test sets, either by user or spatially, to simulate transfer to new users or even to other geographic regions. All results are reported on the combination of all test sets from 10-fold cross validation (Methods - Data split).

Figure 1: Overview of the experimental setup. The samples are spatiotemporal data about location visitation patterns. We simulate reduced data quality and potential protection measures by obfuscating the geographic coordinates. The samples are then featurized into vectors encoding temporal visitation patterns and spatial context. We simulate a privacy attack with a trained ML model on new users by a train-test split and evaluate the attacker’s accuracy on the test data.
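For illustration, a minimal sketch of the spatial-join baseline is shown below, assuming POIs and visit locations are given as latitude/longitude arrays in degrees. The use of scikit-learn's BallTree with a haversine metric is our own implementation choice, not necessarily the one used in the study.

```python
import numpy as np
from sklearn.neighbors import BallTree

def spatial_join_attack(poi_latlon_deg, poi_categories, visit_latlon_deg):
    """Assign each (possibly obfuscated) visit the category of its nearest POI.

    poi_latlon_deg: (n_pois, 2) array of POI [lat, lon] coordinates in degrees.
    poi_categories: length-n_pois sequence of category labels.
    visit_latlon_deg: (n_visits, 2) array of visit coordinates in degrees.
    """
    # Haversine distances require coordinates in radians.
    tree = BallTree(np.radians(poi_latlon_deg), metric="haversine")
    _, idx = tree.query(np.radians(visit_latlon_deg), k=1)
    return [poi_categories[i] for i in idx[:, 0]]
```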
## Results

### Effect of location obfuscation on place labeling accuracy

The results for task 1 (location categorization) are evaluated in terms of accuracy, i.e., the number of correctly categorized places divided by the total number of samples, across all users and all locations (90790 samples in NYC and 211834 in Tokyo):

\[Acc(\hat{c},c)=\frac{\sum_{l_{i}^{u}\in\mathcal{D}}\mathbbm{1}[\hat{c}(l_{i}^{u})=c(l_{i}^{u})]}{|\mathcal{D}|} \tag{2}\]

Figure 2 shows the classification accuracy of the attack scenarios by the obfuscation radius. Note that \(r=0\) is an unrealistic scenario, since the check-in data and the public POI context data are both from the Foursquare dataset and are based on the exact same set of geographic coordinates. Thus, a simple spatial nearest neighbor join of the check-in location with public POIs achieves 100% accuracy if no obfuscation is applied. Deriving a user's location from tracking data would obviously hardly yield the exact same point coordinates as a public POI. We, therefore, consider more realistic scenarios with weak obfuscation, and, additionally, protective scenarios with strongly obfuscated coordinates.

The results presented in Figure 2 indicate that the accuracy decreases rapidly with the obfuscation radius. However, even when the attacker uses only temporal information, the accuracy is 39.1% for Tokyo and 29.7% for NYC, which is significantly better than random (grey line). On top of that, spatial context information can benefit the attack even when the location is obfuscated within a radius of 1km. This is remarkable and demonstrates the danger of powerful privacy attacks that make use of public POI data. In the appendix, we relate these findings to the spatial autocorrelation of place types (Figure 17), and we demonstrate that the results for NYC and Tokyo are surprisingly similar (see appendix Figure 12).

Furthermore, the categorization accuracy depends on the place type; i.e., some categories are harder to detect than others. Figure 3 presents the confusion matrix for the attack scenario at 100m obfuscation. The error is more evenly distributed over categories than expected, although "Dining" and "Retail" are predicted disproportionately often (see appendix Figure 11).

Figure 3: Normalized confusion matrix of predictions in NYC with Foursquare data and location prediction with an obfuscation radius of 100m. The accuracy is rather balanced across categories; however, many activities are erroneously classified as "Dining".

Figure 2 additionally compares a user split to a spatial split to analyze generalization across space (see Methods - Data split). Note that a user split is expected to be strictly better than the spatial split because the input data do not include user-identifying information such as age or gender, rendering the generalization to new users as easy as to any new samples. Surprisingly, the spatial cross-validation split only has a minor effect on the attacker's accuracy (decrease of \(\sim 5\)%). We conclude that the attacker's training data set is not required to cover the exact same region for the privacy attack to be successful.

Figure 2: Effect of location obfuscation radius on the attacker’s performance in categorizing locations. Spatial information is valuable for an ML algorithm even with up to 1km of obfuscation.

### User profiling error for probabilistic and frequency-based profiling

While the ability of a potential attacker to categorize visited locations is concerning, we argue that the main risk is _user profiling_ based on the predicted categories. It is unclear to what extent the high categorization accuracy on a location level transfers to a high profiling accuracy on a user level.
Here, we define a user profile as the frequency of different types of locations in the user's mobility patterns. Our definition corresponds to the term frequency in the TF-IDF statistic3, which measures the frequency of a word in a specific document in relation to the overall occurrence of the term (in the corpus). Here, the "words" are place categories and a "document" is the location trace of one user. We provide examples of such TF-based user profiles in Figure 4(b) ("Ground truth"). In the following, we define \(p(u)\) as the profile of user \(u\), and \(p_{c}(u)\) as the entry of the vector corresponding to the frequency of category \(c\in C\). For example, the ground truth profile of User 1 in Figure 4(b) corresponds to \([0.25,0.5,0.25]\), since \(p_{\text{Dining}}(\text{User 1})=0.25,p_{\text{Retail}}(\text{User 1})=0.5,p_{\text{Nightlife}}(\text{User 1})=0.25\). In this study, we aim to quantify how accurately the adversary could predict \(p(u)\). The evaluation of user profiling performance boils down to comparing the difference between two categorical distributions, namely the distribution of the real profile \(p(u)\) versus the predicted category frequencies \(\hat{p}(u)\):

Footnote 3: The inverse document frequency (IDF) would correspond to a weighting of the user’s category frequency by the overall frequency of this category in the data, giving higher weights to rare categories. Since the weights are the same for all users, IDF does not help to distinguish users, neither intuitively nor empirically. We therefore only characterize users by the easily interpretable TF term.

\[E_{\hat{p}(u),p(u)}=\sqrt{\sum_{c\in C}(\hat{p}_{c}(u)-p_{c}(u))^{2}} \tag{3}\]

The attacker can estimate the profile \(\hat{p}(u)\) simply by counting the predicted place categories. For example, in Figure 4 the "Retail" category is predicted one out of four times for user 2 and therefore takes a value of 0.25 in the profile (see orange arrow). However, many ML-based classification models actually predict a "probability"4 for each category, as shown in Figure 4a. The XGBoost model, for example, outputs the prediction frequency of each category among its base learners (decision trees). Probabilistic predictions provoke a second way to estimate \(\hat{p}(u)\), namely by averaging the predicted probabilities per category (see blue arrow in Figure 4). In the following, we term the first option (computing the frequency of predicted categories, orange) "hard" profiling and the second option (averaging category-wise probabilities, blue) "soft" user profiling. As shown in the toy example in Figure 4, soft profiling can increase or decrease the error compared to hard profiling (e.g., a decrease from 0.354 to 0.219 for user 1, but an increase from 0 to 0.071 for user 2).

Footnote 4: The probability distribution over categories is usually derived from the predicted values with a softmax function or by averaging hard predictions of base estimators and is, therefore, by no means the actual posterior distribution. While the provided uncertainties are oftentimes poorly calibrated [32], they nevertheless add information to the final predicted label.
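To make the two aggregation strategies concrete, the following sketch implements "hard" and "soft" profiling and the profiling error of Eq. (3); all function names are our own illustrative choices.

```python
import numpy as np

def hard_profile(pred_labels, categories):
    """'Hard' profile: relative frequency of the predicted category labels."""
    counts = np.array([sum(1 for y in pred_labels if y == c) for c in categories],
                      dtype=float)
    return counts / counts.sum()

def soft_profile(pred_probas):
    """'Soft' profile: average of per-visit predicted category probabilities.

    pred_probas: (n_visits, n_categories) array with rows summing to one,
    e.g. the output of a classifier's predict_proba method.
    """
    return np.asarray(pred_probas).mean(axis=0)

def profiling_error(p_hat, p_true):
    """Euclidean distance between predicted and true profiles (Eq. 3)."""
    return float(np.linalg.norm(np.asarray(p_hat) - np.asarray(p_true)))
```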
In Figure 5, we empirically compare both strategies on our dataset in terms of the error \(E\) defined above. Only the error for the strongest attack scenario (XGB spatio-temporal) is shown, averaged over cities (NYC and Tokyo). The profiling error is significantly lower for the soft profiling strategy that is based on probabilistic predictions. In particular, the error of "hard" profiling increases proportionally with a doubling of the obfuscation radius, while the error of soft labeling increases sub-linearly (see Figure 5). This result is consistent for all considered scenarios. It demonstrates that well-calibrated probabilistic prediction methods are more dangerous in terms of user profiling than point predictors, even if the latter may achieve a higher place classification accuracy. All further results are reported for the _soft_ predictions in order to simulate the strongest attack.

Figure 4: User profiling from location labelling. The predicted labels for individual location visits can be aggregated per user to yield an estimated user profile, either by the frequency or the average probability of the predictions.

Figure 5: Comparison of user-profiling errors achieved from averaging ”hard” predictions or ”soft” prediction probabilities for each category. Probabilistic classifications improve the spatial attack, in particular for lower-quality location data.

### User reidentification accuracy based on the estimated profiles

Judging from the error alone, it is difficult to interpret how much the user profile actually reveals. Such an interpretation depends on the variance of the user profiles: for example, if all users have the same profile, the prediction error may be very low, but there is no value in profiling. As a more interpretable metric, we follow previous privacy research and analyze the possibility of re-identifying users by their predicted profile. Given the pool of ground-truth user profiles (Figure 4b, green), we match the predicted profiles by finding their nearest neighbors in the pool based on the Euclidean distance of their profile vectors. We report the results in terms of top-5 re-identification accuracy, also called hit@5.

In Figure 6, the re-identification accuracy is shown by attack scenario. A corresponding plot of the profiling error is given in the appendix (Figure 14). Although the accuracy decreases quickly with stronger obfuscation, it is still larger than 10% even with an obfuscation radius of 1.2km. The uninformed (random) identification accuracy is 0.6% on average, with 1083 users in NYC and 2293 users in Tokyo.

To compare the decay of the user profiling performance to the decay in place categorization accuracy (Figure 2), we fit an exponential function of the form \(f(x)=a+c\cdot e^{-x\cdot\lambda}\) to both results. The place categorization accuracy decays with \(a=0.3439,\lambda=0.0097,c=0.6216\), indicating that the accuracy decreases at a rate of \(e^{-0.0097}=0.9903\) per meter of obfuscation but converges to around \(0.3439\). The function fit for the user identification accuracy yields \(a=0.0625,\lambda=0.0121,c=0.9518\). In other words, with every 50 meters added to the location obfuscation radius, the user re-identification accuracy is reduced by a factor of \(0.5488\) (\(=e^{-0.0121\cdot 50}\)). With \(r=57.43\), the accuracy has approximately halved. This firstly demonstrates that place categorization performance does not directly translate into user profiling performance, as the profiling accuracy decays faster than the categorization accuracy, and secondly gives guidance for selecting a suitable masking radius.
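A curve fit of this form can be reproduced along the following lines with scipy; the data points below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(x, a, lam, c):
    """f(x) = a + c * exp(-lambda * x), the decay model fitted above."""
    return a + c * np.exp(-lam * x)

# Hypothetical accuracy measurements at increasing obfuscation radii (meters).
radii = np.array([0, 25, 50, 100, 200, 400, 800, 1200], dtype=float)
accuracy = np.array([1.00, 0.92, 0.82, 0.66, 0.52, 0.44, 0.38, 0.36])

(a, lam, c), _ = curve_fit(exp_decay, radii, accuracy, p0=(0.3, 0.01, 0.7))
halving_radius = np.log(2) / lam  # radius at which the decaying part has halved
```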
### Induced privacy loss of ML-based privacy attacks

Finally, we transform the re-identification accuracy into a _privacy loss_ metric following [49]. They define the privacy loss \(PL\) for one user \(u\in U\) as

\[PL(u)=\frac{P_{attack}\big{(}u=u^{*}\mid D_{u}\big{)}}{P_{uninformed}(u=u^{*})} \tag{4}\]

where \(P_{uninformed}\) is the probability of an uninformed adversary matching \(u\) to the true user \(u^{*}\), corresponding to a random pick from all users \(U\), so \(P_{uninformed}=\frac{1}{|U|}\). The probability of an informed adversary, on the other hand, is the probability of matching the user to the correct profile by utilizing sensitive user data, including geographic coordinates and visitation times. We assume that, given a pool of users \(U\), the attacker would match \(u\) to a user \(u_{i}\in U\) from the pool with a probability proportional to the similarity of their profiles:

\[P_{attack}(u=u_{i}|\mathcal{D})=softmax\big{(}sim(u,u_{i})\big{)}=\frac{e^{sim(u,u_{i})}}{\sum_{j=1}^{|U|}e^{sim(u,u_{j})}} \tag{5}\]

where we define the similarity as the inverse distance of the user profile vectors, \(sim(u,u_{i})=\big{(}E_{\hat{p}(u),p(u_{i})}\big{)}^{-1}\). Note that Manousakas et al. [49] use a rank-based measure of similarity, which however seems unintuitive given that we know the exact distance between each pair of user profiles and not only their respective rank.

The median privacy loss is \(11\) if the adversary is given spatio-temporal information where the locations are obfuscated by 100m (see appendix Table 1). In other words, the adversary is still 11 times better at re-identifying a user by his profile than with a random strategy. Moreover, the adversary with spatio-temporal data is \(9.9\) times better than an adversary that uses only temporal information, even though the spatial data are obfuscated by up to 100m. At higher location obfuscation, the privacy loss converges. The strongest attack only yields a median privacy loss of \(3.74\) at a 200-meter obfuscation radius and \(2.13\) at 400m. However, the privacy loss varies strongly across users. Figure 7 shows the cumulative distribution over users. If the locations are obfuscated by 100m, around 80% of the users have a privacy loss lower than 250; however, the distribution is heavy-tailed, with a considerable number of users that are still easy to identify. Nevertheless, we conclude that obfuscating the location with a radius between 100 and 200 meters would significantly reduce the risk of successful profiling attacks for a large majority of users.
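A minimal sketch of this privacy-loss computation (Eqs. 4-5), using the inverse-distance similarity and a numerically stable softmax, could look as follows; the helper names are our own.

```python
import numpy as np

def privacy_loss(pred_profile, true_profiles, true_index):
    """Privacy loss PL(u) of Eq. (4) under the softmax attack of Eq. (5).

    pred_profile: attacker's estimated profile for user u.
    true_profiles: (|U|, n_categories) pool of ground-truth profiles.
    true_index: index of u's own ground-truth profile in the pool.
    """
    dists = np.linalg.norm(true_profiles - pred_profile, axis=1)
    sims = 1.0 / np.maximum(dists, 1e-12)   # inverse-distance similarity
    probs = np.exp(sims - sims.max())       # numerically stable softmax
    probs /= probs.sum()
    p_uninformed = 1.0 / len(true_profiles)
    return float(probs[true_index] / p_uninformed)
```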
### Features that affect the predictability of place categories

One advantage of boosted-tree-based machine learning methods such as XGBoost is that decision trees are interpretable. While the individual decision boundaries are not transparent in large ensembles of trees, one can still compute the importance of individual features in terms of their mean decrease of data impurity. The respective importances of the spatial and temporal features included in our study are shown in Figure 8. The most important spatial features are the numbers of POIs per category among the \(k\) nearest POIs. The spatial embedding features derived with the space2vec method (embed 0 - embed 16) apparently do not add much information. The time of the day, expressed as sine and cosine of the hour and as binary variables for morning, afternoon and evening, also plays a significant role, highlighting the relevance of temporal information.

Figure 8: Feature importances in the XGBoost classifier. The occurrence of different categories and their mean distance are the most important features for place categorization.

### Dependency on POI data quality

To simulate incomplete POI data, we subsample 75% or 50% randomly from the Foursquare POIs. Furthermore, the performance with POI data from OSM instead of Foursquare is evaluated. In this experiment, only the predictions of the strongest attack (XGB spatio-temporal) on NYC check-in data are evaluated. Figure 9 depicts the results, where "Foursquare (all)" corresponds to the results in Figure 2. The removal of Foursquare POIs has surprisingly little effect on the user identification accuracy. Even with 50% of the POIs, 84.8% of the check-ins can be classified correctly (see appendix Figure 13), translating to a top-5 identification accuracy of 94%. This is due to the spatial autocorrelation between places of certain categories (see appendix Figure 17). Meanwhile, it is much harder to classify the category of Foursquare check-ins with OSM POIs. We hypothesize that this is due to substantial differences between OSM and Foursquare POI data. Previous work [87] tried to match cafes in the OSM dataset to cafes in the Foursquare set and found that only around 35% can be matched exactly (Levenshtein distance of labels=1), with a spatial accuracy of around 30-40m. In addition to these location differences, in our case there are also differences in the place categories, which we partly had to assign _manually_ to the OSM POIs (see Methods - Data and preprocessing). Nevertheless, the low performance with OSM data unveils important difficulties for an attacker in utilizing inaccurate, incomplete, and inconsistent datasets of POIs.

Figure 9: Dependency of the attacker’s success on the POI quality. The strongest attack scenario based on spatio-temporal data is shown. While the completeness of POI data has a disproportionately low impact on user profiling, using OSM data decreases the attacker’s success.

### Influence of the POI density

Furthermore, the difficulty level of the attack depends on the density of spatial context data, since it is easier to match a location to a nearby POI if the number of nearby POIs is low. We quantify this relation by computing the number of surrounding POIs within 200m for all considered places in NYC and Tokyo. In Figure 10, the place labelling accuracy is shown by POI density group. Places in dense areas, i.e., with many surrounding POIs, are harder to classify. For example, when the obfuscation radius is 100m, the mean number of POIs within 200m around the (non-obfuscated) location is 58 for correctly predicted samples, but 85 for erroneously classified samples. However, the variance between the curves shown in Figure 10 is lower than expected. Only points with less than ten nearby POIs are significantly easier to match.

Figure 10: Place categorization accuracy by POI density (number of POIs within 500m). Visited places in very dense areas are harder to classify.

The dependence of the predictability on the POI density calls for a context-aware protection scheme [88, 4]. We implement such a scheme by setting the obfuscation radius \(r\) for a specific location such that at least \(m\) public POIs lie within the radius. For the sake of comparability, we tune \(m\) to a value that leads to an average obfuscation radius of \(200m\) (\(m=16\)). In other words, when obfuscating each location \(l\) within a context-aware radius \(r(l)\) that covers exactly 16 public POIs, then \(\frac{1}{|\mathcal{D}|}\sum_{l\in\mathcal{D}}r(l)\approx 200\). As desired, this masking scheme destroys the relation between POI density and accuracy.
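A sketch of such a context-aware radius computation is given below, assuming POI coordinates are given in degrees and searched via a BallTree with a haversine metric; this is our reading of the scheme, not the released implementation.

```python
import numpy as np
from sklearn.neighbors import BallTree

def context_aware_radius(visit_latlon_deg, poi_latlon_deg, m=16):
    """Per-location obfuscation radius covering at least m public POIs.

    Returns, for each visited location, the distance (in meters) to its m-th
    nearest POI; obfuscating within this radius guarantees that at least m
    POIs remain plausible matches for the true location.
    """
    earth_radius_m = 6371000.0
    tree = BallTree(np.radians(poi_latlon_deg), metric="haversine")
    dists, _ = tree.query(np.radians(visit_latlon_deg), k=m)
    return dists[:, -1] * earth_radius_m
```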
However, our experiments show that the average accuracy _increases_ compared to the accuracy reported for location-independent masking in Figure 2 (accuracy of \(0.52\) compared to \(0.49\) for the experiment on NYC-Foursquare data with XGB spatio-temporal). This also holds at the user level, where the user-profiling performance is higher with context-aware location obfuscation (\(0.27\) vs. \(0.23\)). It seems that the weak obfuscation of locations in high-density regions has a greater effect than the strong obfuscation of isolated places. We conclude that simple context-aware obfuscation based on POI density is not sufficient to reduce privacy risks, at least not at the same average obfuscation level. While the evaluation of protection methods is out of the scope of this work, further work is needed to understand their effectiveness against undesired user profiling.

## Discussion

We have quantified the risks of undesired user profiling in different attack scenarios, varying 1) the information available to the attacker, 2) the location data quality in terms of the obfuscation radius, and 3) the POI data quality. We comment on each aspect in the following.

First, our experiments reveal that machine learning methods can efficiently exploit spatial context data, even with low data quality or incomplete data. We further confirm previous findings by [52] that even temporal information alone about location visits poses a significant privacy risk. This risk may be further increased, for example, if the opening times of surrounding POIs are also used as input features [80]. In general, more powerful ML methods may increase privacy risks beyond our results. A particularly interesting finding is the superiority of _probabilistic_ predictions for deriving user profiles. In other words, a potential attacker can estimate the importance of different place types in a user's life without knowing the category of each individual place exactly.

Furthermore, we took a user-centric viewpoint and derived location protection recommendations. The exponential decay of user identification accuracy demonstrates the high effectiveness of simple protective measures, and the results suggest that the privacy risks become negligible when the location is obfuscated with a radius of around \(200\)m. While such inaccuracy may be intolerable in navigation apps, it yields a good trade-off in other applications such as social media, where the approximate location is still interesting to friends but not yet informative for profiling attacks. However, further experiments on other datasets are necessary to validate the results. Our analysis is based on an experimental setting where each visited location can for sure be matched to a public POI. An attack that aims to classify user activities that are not related to public POIs is, therefore, expected to be more difficult (e.g., detecting a visit to a friend's place).
In the appendix (Figure 15), we provide a study on a GNSS-based tracking dataset where stay points are labeled with a few broad activity categories, but it would be highly interesting to reproduce our results on a GNSS dataset with more detailed place categories. However, datasets that are large and labeled at the same time are rare [11]. Finally, we see a strong dependency of the attacker's success on the density and completeness of spatial context data. Thus, future privacy protection algorithms should not only regard past studies on protection efficiency, but also improvements in public databases.

We hope to inspire future research on the risks and, importantly, on suitable protection methods against such novel semantic privacy attacks. Further analysis may, for example, investigate which users are particularly easy or hard to profile. The classification of users into a predefined set of profiles or a cluster of profiles could provide further insights into the actual dangers of unwanted behavior analysis. Finally, it may be an interesting endeavor to develop location protection techniques that specifically target the weaknesses of machine learning models, similar to adversarial attacks [37].

## Conclusion

Semantic privacy deserves more attention in geoprivacy research, considering the business case of data brokers and the interest of companies in semantic information in contrast to raw data. Our analysis is a first step towards a better understanding of the actual risk for a user to reveal sensitive behavioral data when sharing location data with applications. Spatial and temporal patterns in location data lead to a significant opportunity for user profiling, even if the coordinates are not accurate. However, this effect diminishes with stronger location protection. Our analysis, therefore, enables users and policy-makers to derive recommendations on a suitable protection strength.

## Methods

In the following, our methods are described in detail. Our implementation is available open-source at [https://github.com/mie-lab/trip_purpose_privacy](https://github.com/mie-lab/trip_purpose_privacy).

### Data and preprocessing

#### Check-in data from Foursquare

Our study mainly uses data from the location-based social network _Foursquare_. In contrast to tracking datasets or data from other social networks (e.g., tweets), the Foursquare dataset offers labeled and geo-located place visitation data. Specifically, users check in at venues, e.g., a restaurant, and the geographic location of the venue as well as a detailed semantic label, e.g., "Mexican restaurant", are known. Similar to other studies [81, 82], we use the Foursquare subsets of New York City and Tokyo in order to simplify location processing and to study the variability of the results over two different cities. The data was collected by [81] from 12 April 2012 to 16 February 2013 and was downloaded from their website5. Note that Foursquare has changed over the years, and the data thus differs from today's usage of this LBSN. This is not an issue for our study, as the underlying location visitation patterns are expected to remain similar.

Footnote 5: [https://sites.google.com/site/yangdingqi/home/foursquare-dataset?pli=1](https://sites.google.com/site/yangdingqi/home/foursquare-dataset?pli=1)

As a first step, we clean the category labels of users' place check-ins.
We focus on leisure activities and do not consider home and work check-ins, for several reasons: 1) Home and work locations can be inferred from _temporal_ features such as the time of the day and the visit duration; spatial POI data are not necessary. 2) Identifying home and work is possible with simple heuristics, e.g., assigning the most often visited location as home and the second-most-frequently visited location as work. We believe that previous attempts at this task mainly suffer from insufficient data quality and the lack of reference data, and not from the difficulty of the task itself. 3) Many Foursquare users in the dataset do not check in at home or work, since the social network was mainly used to share leisure activities, at least in 2012 when the data was gathered and before changes were made to their (check-in) app.

In total, the Foursquare POIs in NYC and Tokyo are labeled with 1146 distinct categories. A taxonomy is provided with 11 groups on the highest level, such as _Dining and Drinking_ or _Arts and Entertainment_. We use this categorization as the ground-truth location categories, but make a few changes in order to sufficiently distinguish common types of leisure activities that are relevant for user profiling. Specifically, we divide the category _Dining and Drinking_ into the categories _Dining_ (all kinds of restaurants), _Nightlife_ (bars), and _Coffee and Dessert_, based on the labels given on lower levels of the taxonomy. Furthermore, the category _Community and Government_ is split into the categories _Education_ and _Spiritual Centers_. Other subcategories that cannot be fitted into these two, e.g., "government building" or "veteran club", are omitted. Finally, there are around 100 labels in the NYC-Tokyo Foursquare dataset from 2012 that do not appear in the (up-to-date) Foursquare POI taxonomy. We manually assign these labels to categories. The final distribution of the labels in NYC check-ins is shown in the appendix in Figure 16a. Furthermore, the check-in dataset is cleaned by merging subsequent check-ins of the same user at the same location. A check-in event is deleted if it occurs within one hour of the previous check-in at that location, leading to the removal of 0.496% of the NYC check-ins and 0.63% of those in Tokyo.

#### Public POI data

We assume that the attacker can access public POI data, such as the POIs from Foursquare. However, categorizing check-in locations in the Foursquare data is easy when the Foursquare POIs are given, since they correspond exactly in their geographic location and each check-in can (in theory) be matched to a known POI. Apart from obfuscating the check-in location to simulate inaccurate GNSS data, we also simulate incomplete POI data by sampling 50% and 75% of the Foursquare POIs at random. Last, we simulate a situation with substantially different POI data by using POIs from OSM. The Python package pyrosm [74] is used to download all places of the categories "healthcare", "shop", "amenity", "museum", "religious", "transportation", and "station" (public transport) from OSM. The "amenity" category in particular contains a large collection of places, and we first delete all places labeled as "parking space", since they accounted for a large fraction of the data and are irrelevant to our analysis. We further manually re-label the POIs in order to assign place categories. The same categories as in the Foursquare dataset are used, and the mapping from OSM POI types to our categories is given in detail in our code base6.
Footnote 6: [https://github.com/mie-lab/trip_purpose_privacy/blob/main/data/osm_poi_mapping.json](https://github.com/mie-lab/trip_purpose_privacy/blob/main/data/osm_poi_mapping.json)

### Spatial and temporal input features to machine learning model

#### Temporal features

Temporal features are computed from \(T_{u}(l_{i}^{u})\) as follows:

* **Visit frequency features:** The absolute visit frequency of location \(l_{i}^{u}\), corresponding to \(|T_{u}(l_{i}^{u})|\), and the relative frequency with respect to all check-ins by \(u\), formally \[\text{f}_{\text{visit,frequency}}(l_{i}^{u})=\frac{|T_{u}(l_{i}^{u})|}{\sum_{l_{j}^{u}\in L_{u}}|T_{u}(l_{j}^{u})|}\] (6) The absolute frequencies are scaled with a logarithm to reflect well-known power-law properties of location visitation patterns [8, 66].
* **Duration features:** In the Foursquare dataset used as training data, the check-outs of location visits are not provided, so only the start time is known. Thus, we approximate the visit duration by computing the time until the next check-in. Since no check-outs are (publicly) available, there are many outliers with gaps of more than a day. We flatten these outliers by scaling logarithmically, and finally, we take the average over the individual visit durations. Formally, the visit time is subtracted from the time of its subsequent check-in, given as the minimum time of all following check-ins of the user: \[\text{f}_{\text{dur}}(l_{i}^{u})=\frac{1}{|T_{u}(l_{i}^{u})|}\sum_{j=1}^{|T_{u}(l_{i}^{u})|}\log\Big{(}\min_{\begin{subarray}{c}k,m\\ \text{s.t.}\ t_{m}(l_{k}^{u})>t_{j}(l_{i}^{u})\end{subarray}}t_{m}(l_{k}^{u})-t_{j}(l_{i}^{u})\Big{)}\] (7) The duration of the last check-in overall is omitted. Although this approximation is very rough due to its dependence on the LBSN usage frequency of users, we empirically observed that it is still helpful for inference.
* **Time-of-day features:** Binary variables indicate whether the location was visited in the morning (before noon), in the afternoon (noon - 5pm), in the evening (5pm - 10pm), or at night (10pm - midnight). The time thresholds were selected to reflect different activities (e.g. dining vs nightlife). The exact daytime was encoded with trigonometric functions (sine and cosine) to reflect its cyclical properties, as is common in machine learning.
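The following Python sketch illustrates how such temporal features could be computed for one user-location; the function name, the dict output, and the guard against zero-length gaps are our own illustrative assumptions.

```python
import numpy as np

def temporal_features(visit_times_loc, all_visit_times_user):
    """Temporal features for one user-location, in the spirit of Eqs. (6)-(7).

    visit_times_loc: sorted datetime visits to location l_i^u.
    all_visit_times_user: sorted datetimes of ALL check-ins of user u.
    """
    feats = {
        "log_visit_count": float(np.log(len(visit_times_loc))),
        "relative_frequency": len(visit_times_loc) / len(all_visit_times_user),
    }
    # Approximate each visit's duration by the time until the user's next
    # check-in anywhere; the last check-in overall has no successor and is
    # omitted, matching Eq. (7).
    log_durations = []
    for t in visit_times_loc:
        later = [s for s in all_visit_times_user if s > t]
        if later:
            gap_s = max((min(later) - t).total_seconds(), 1.0)  # guard log(0)
            log_durations.append(np.log(gap_s))
    feats["mean_log_duration"] = float(np.mean(log_durations)) if log_durations else 0.0
    # Cyclical encoding of the visit hour (sine/cosine of the time of day).
    hours = np.array([t.hour + t.minute / 60.0 for t in visit_times_loc])
    feats["hour_sin"] = float(np.mean(np.sin(2 * np.pi * hours / 24)))
    feats["hour_cos"] = float(np.mean(np.cos(2 * np.pi * hours / 24)))
    return feats
```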
#### Spatial features

The attacker can utilize the recorded geographic coordinates to predict the location category. However, inputting the raw coordinates to a model is not advisable, as they suffer from uncertainty and, more importantly, the model would not generalize to other spatial regions. Thus, spatial features are usually derived from the context of the spatial location, here public POI data, since the categories of surrounding POIs are a valuable predictor [79] of the user's location category. POI data are, for example, available from the public Foursquare API or from Open Street Map (OSM). In either case, the dataset includes geographic point data and a categorization taxonomy of broad and more specific POI labels; e.g., a POI may be part of both the "Shoe Store" and the overarching "Retail" category. For most spatial features, we only use the broadest level and denote its categories as \(\Psi=\{\psi_{1},\ldots,\psi_{n}\}\). A POI \(p\) has a set of coordinates \((x(p),y(p))\), and is assigned to a main POI category, \(c_{p}(p)\).7 For example, \(p\) may be assigned to \(c_{p}(p)=\psi_{2}=\) Retail.

Footnote 7: Note that our notation explicitly distinguishes _location_ categories (\(c(l_{i}^{u})\in C\)) from _POI_ categories (\(c_{p}(p)\in\Psi\)), since they may be different. For example, an attacker could use POI data with 10 categories (\(|\Psi|=10\)) to classify user location data into only three categories such as \(C=\{\)Work, Leisure, Eating\(\}\).

In the literature, different approaches have been used to extract features from the POI distribution around a specific point. We found empirically that a combination of the following methods yields the best results for the attacker's task:

* **Category count of the k-nearest POIs:** Given a location \((x(l_{i}^{u}),y(l_{i}^{u}))\), the \(k\) closest POIs \(p_{1},\ldots,p_{k}\) are found via a ball tree search, and the count of each category among those is computed. The result is a feature vector where the first element corresponds to the number of occurrences of the _first_ category among the \(k\) closest POIs, and accordingly for the other categories; formally \[\Big{[}\sum_{i=1}^{k}\mathbb{1}[c_{p}(p_{i})=\psi_{1}],\ \ \sum_{i=1}^{k}\mathbb{1}[c_{p}(p_{i})=\psi_{2}],\ \ \ldots\Big{]}\] (8) Furthermore, as an indicator of the POI density at \((x,y)\), the mean distance to the \(k\) nearest POIs is extracted as a feature. We set \(k=20\) in our experiments.
* **Count and distance of POIs within a fixed radius:** The semantic attack requires more specific distance information about the POIs of each category. For example, if there is no restaurant within 1km, it is unlikely that the location category is "Dining". Thus, we consider all POIs around \((x(l_{i}^{u}),y(l_{i}^{u}))\) within a specified radius \(r\), denoted as the set \(P(x,y,r)\)8, and again compute the count of each category. Footnote 8: for brevity, we omit \(l_{i}^{u}\) here \[\Big{[}\sum_{p\in P(x,y,r)}\mathbb{1}[c_{p}(p)=\psi_{1}],\ \ \sum_{p\in P(x,y,r)}\mathbb{1}[c_{p}(p)=\psi_{2}],\ \ \ldots\Big{]}\] (9) In addition, we consider the minimum distance of the POIs of each category to the location: \[\Big{[}\min_{\begin{subarray}{c}p\in P(x,y,r)\\ c_{p}(p)=\psi_{1}\end{subarray}}\|\binom{x}{y}-\binom{x(p)}{y(p)}\|,\ \ \min_{\begin{subarray}{c}p\in P(x,y,r)\\ c_{p}(p)=\psi_{2}\end{subarray}}\|\binom{x}{y}-\binom{x(p)}{y(p)}\|,\ \ \ldots\Big{]}\] (10) We set the radius to \(200\)m based on the results of preliminary experiments. If a category does not appear within the radius, we fill the corresponding vector field with the radius \(r\). As an example, consider that three POIs are found within radius \(r=200\)m of the location: \(p_{1}\) of category \(\psi_{3}\) at 50m distance, \(p_{2}\) of category \(\psi_{2}\) at 10m distance, and \(p_{3}\) of category \(\psi_{2}\) at 80m distance. The resulting vectors (assuming there are only three categories) are \([0,2,1]\) and \([200,10,50]\).
* **Space2vec:** In contrast to hand-crafted features based on distance and category counts, there is the option to _learn_ coordinate representations. The task of finding an efficient and informative representation of points, dependent on their coordinates and POI context, was tackled recently in work on space embeddings. We employ the state-of-the-art _space2vec_ approach by [48]. Inspired by word embeddings in natural language processing, the idea is to learn a compact vector representation for points. The training is based on a supervised learning task, namely to distinguish surrounding points from unrelated, arbitrarily distant samples that were drawn as negative samples. We deploy their public code base9 to train the algorithm on our POI datasets \(\mathcal{P}\), including the first _two_ category levels. Specifically, we split \(\mathcal{P}\) into training, validation (10%), and testing (10%) sets and employ the _joined_ approach by [48]; i.e., training a location decoder and a spatial context decoder jointly. We set the embedding size to 16 but retained all other parameters as suggested by the authors. The model, which was trained only on \(\mathcal{P}\), can be applied to a new location given its coordinates and its spatial context (coordinates and categories of the surrounding POIs) as input.
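A compact sketch of the hand-crafted spatial features (Eqs. 8-10) using scikit-learn's BallTree might look as follows; the exact implementation details (array layout, haversine metric, integer category indices) are our own assumptions.

```python
import numpy as np
from sklearn.neighbors import BallTree

EARTH_R = 6371000.0  # mean Earth radius in meters

def spatial_features(latlon_rad, poi_cat_idx, tree, n_cats, k=20, r=200.0):
    """Category-count and distance features around one location (Eqs. 8-10).

    latlon_rad: [lat, lon] of the (obfuscated) location, in radians.
    poi_cat_idx: np.ndarray of integer category indices, one per POI.
    tree: BallTree over the POI coordinates (radians, metric='haversine').
    """
    # Eq. (8): category counts among the k nearest POIs, plus their mean
    # distance as a POI-density indicator.
    dists, idx = tree.query([latlon_rad], k=k)
    knn_counts = np.bincount(poi_cat_idx[idx[0]], minlength=n_cats)
    mean_knn_dist = dists[0].mean() * EARTH_R

    # Eqs. (9)-(10): counts and per-category minimum distances within radius r;
    # categories absent within r are filled with r itself.
    idx_r, dists_r = tree.query_radius([latlon_rad], r / EARTH_R,
                                       return_distance=True)
    counts_r = np.bincount(poi_cat_idx[idx_r[0]], minlength=n_cats)
    min_dist = np.full(n_cats, r)
    for j, d in zip(idx_r[0], dists_r[0] * EARTH_R):
        min_dist[poi_cat_idx[j]] = min(min_dist[poi_cat_idx[j]], d)

    return np.concatenate([knn_counts, [mean_knn_dist], counts_r, min_dist])
```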
### Model training

#### Machine learning model

We chose the XGB approach over other machine learning models for its interpretability and its suitability for unbalanced data, rendering it superior in many applications. Nevertheless, we also implemented a multi-layer perceptron (MLP) for comparison. The model was implemented with two layers of 128 neurons each, with dropout regularization, ReLU activation, and a softmax function in the output layer. The network was trained with the Adam optimizer (learning rate 0.001) and with early stopping. For the XGBoost model, we utilize the implementation in the xgboost Python package10 and only tune the parameter that determines the maximum depth of the base learners. A depth of 10 turned out to be most suitable in our experiments. The MLP also exhibits good place categorization ability, but was consistently inferior to XGB. For example, with the Foursquare data for NYC and an obfuscation radius of 100m, the accuracy is 52.2% for the MLP compared to 59.4% for XGB (41.6% vs 49.8% for 200m obfuscation, etc.). We, therefore, only report the results for XGB in this study.

Footnote 10: [https://xgboost.readthedocs.io/en/stable/python/python_intro.html](https://xgboost.readthedocs.io/en/stable/python/python_intro.html)

#### Location masking

A simple protection method for the use of location-based services is a random displacement of the coordinates to mask the real location. For example, iPhone users can withhold their precise location from applications and only allow them to access the "approximate" location. Here, we utilize location obfuscation to model imprecise GNSS data or basic data protection. The user's location is simply replaced by a new location sampled from a uniform distribution within a given radius \(r\) (see Figure 1(a)). Note that we focus on the obfuscation of the spatial information and leave the possibility of masking temporal information, as in [54], to future work on semantic privacy. After the location masking step (Figure 1(a)), the raw (and obfuscated) spatio-temporal data are featurized (Figure 1(b)) by deriving temporal features from the check-in time and spatial features from the coordinates matched with public POI data.
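A minimal sketch of this obfuscation step is shown below. Whether the sampling is uniform over the disk's area or over the radius is not specified above, so the area-uniform variant and the approximate local meters-to-degrees conversion are our assumptions.

```python
import numpy as np

def obfuscate(lat, lon, radius_m, rng=None):
    """Replace a location by a point drawn uniformly within radius_m meters."""
    rng = np.random.default_rng() if rng is None else rng
    # The sqrt of a uniform draw makes the point uniform over the disk's AREA.
    d = radius_m * np.sqrt(rng.uniform())
    theta = rng.uniform(0.0, 2.0 * np.pi)
    # Approximate local conversion from meters to degrees.
    dlat = (d * np.cos(theta)) / 111320.0
    dlon = (d * np.sin(theta)) / (111320.0 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon
```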
#### Data split

We test the attacker's accuracy by splitting the data into train and test sets, as shown in Figure 1(c). By default, the dataset is split by user, i.e., 10% of the users are taken as the test set while the model is trained on the remaining 90%. In practice, we report all results upon _10-fold cross validation_, such that all users were part of the test set once. The results simulate the scenario where the attacker obtains a labeled train dataset from a specific region and utilizes it to train an ML model with the goal of inferring location profiles of new users in the same region. However, the attacker may not always have labeled data from exactly the same spatial region. To analyze this scenario, we additionally simulate the attack with a _spatial split_. In detail, the dataset is divided by separating the x- and y-coordinates in a \(3\times 3\) grid, to yield nine roughly equal-sized subsets. The samples from each grid cell are used as the test set once.

### Availability of data and materials

All source code for reproducing our results is published on GitHub: [https://github.com/mie-lab/trip_purpose_privacy](https://github.com/mie-lab/trip_purpose_privacy). The Foursquare data is publicly available at [https://sites.google.com/site/yangdingqi/home/foursquare-dataset?pli=1](https://sites.google.com/site/yangdingqi/home/foursquare-dataset?pli=1).

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

N.W., O.K. and K.J. conceptualized the project. N.W. and O.K. performed the literature research. N.W. developed the methodology, implemented the algorithms, prepared all visualizations and wrote the main manuscript draft. K.J. revised the manuscript. O.K. and M.R. supervised the project and reviewed the manuscript.
2305.13693
Automated Metrics for Medical Multi-Document Summarization Disagree with Human Evaluations
Evaluating multi-document summarization (MDS) quality is difficult. This is especially true in the case of MDS for biomedical literature reviews, where models must synthesize contradicting evidence reported across different documents. Prior work has shown that rather than performing the task, models may exploit shortcuts that are difficult to detect using standard n-gram similarity metrics such as ROUGE. Better automated evaluation metrics are needed, but few resources exist to assess metrics when they are proposed. Therefore, we introduce a dataset of human-assessed summary quality facets and pairwise preferences to encourage and support the development of better automated evaluation methods for literature review MDS. We take advantage of community submissions to the Multi-document Summarization for Literature Review (MSLR) shared task to compile a diverse and representative sample of generated summaries. We analyze how automated summarization evaluation metrics correlate with lexical features of generated summaries, to other automated metrics including several we propose in this work, and to aspects of human-assessed summary quality. We find that not only do automated metrics fail to capture aspects of quality as assessed by humans, in many cases the system rankings produced by these metrics are anti-correlated with rankings according to human annotators.
Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung, Thinh Hung Truong, Bailey E. Kuehl, Erin Bransom, Byron C. Wallace
2023-05-23T05:00:59Z
http://arxiv.org/abs/2305.13693v1
# Automated Metrics for Medical Multi-Document Summarization Disagree with Human Evaluations

###### Abstract

Evaluating multi-document summarization (MDS) quality is difficult. This is especially true in the case of MDS for biomedical literature reviews, where models must synthesize contradicting evidence reported across different documents. Prior work has shown that rather than performing the task, models may exploit shortcuts that are difficult to detect using standard \(n\)-gram similarity metrics such as ROUGE. Better automated evaluation metrics are needed, but few resources exist to assess metrics when they are proposed. Therefore, we introduce a dataset of human-assessed summary quality facets and pairwise preferences to encourage and support the development of better automated evaluation methods for literature review MDS. We take advantage of community submissions to the Multi-document Summarization for Literature Review (MSLR) shared task to compile a diverse and representative sample of generated summaries. We analyze how automated summarization evaluation metrics correlate with lexical features of generated summaries, to other automated metrics including several we propose in this work, and to aspects of human-assessed summary quality. We find that not only do automated metrics fail to capture aspects of quality as assessed by humans, in many cases the system rankings produced by these metrics are anti-correlated with rankings according to human annotators.1

Footnote 1: Dataset and analysis are available at [https://github.com/allenai/mslr-annotated-dataset](https://github.com/allenai/mslr-annotated-dataset).

## 1 Introduction

Multi-document summarization (MDS) requires models to summarize key points across a set of related documents. Variants of this task have drawn significant attention in recent years, with the introduction of datasets in domains like newswire Fabbri et al. (2019), Wikipedia Gholipour Ghalandari et al. (2020), science Lu et al. (2020), medical literature reviews DeYoung et al. (2021); Wallace et al. (2020), and law Shen et al. (2022); and substantial methodological work to design model architectures tailored to this task Xiao et al. (2022); Pasunuru et al. (2021); Liu and Lapata (2019). In this work, we focus on MDS for literature reviews (MSLR), a challenging variant of the task in which one attempts to synthesize all evidence on a given topic. When manually performed, such reviews usually take teams of experts many months to complete. Good review summaries aggregate the results of different studies into a coherent passage, while the evidence presented in the input studies will often be in conflict (Wallace et al., 2020; DeYoung et al., 2021; Wadden et al., 2022), complicating the synthesis task.2

Footnote 2: Indeed, reviews conducted by different teams may themselves conflict (Ioannidis, 2016), reflecting the inherent difficulty of the task; however, this may owe to differing methods of selecting input studies, a complication we ignore here, though it has been explored in recent work (Giorgi et al., 2022).

Figure 1: Spearman correlations between rankings produced by human-assessed quality facets (F1-F4), automated metrics (M1-M7), and combined pairwise system rankings (PW-combined) on the Cochrane MSLR dataset. Rankings from automated metrics are highly correlated as a group except for PIO-Overlap (A). PIO-Overlap rankings are strongly correlated with rankings from human-assessed facets, especially PIO agreement (B). Metrics most strongly associated with PW-Combined rankings are Delta-EI and PIO-Overlap (C). Rankings from commonly reported automated metrics like ROUGE and BERTScore are not correlated or _anti_-correlated with human-assessed system rankings (D).

Evaluating conditional text generation models is notoriously difficult, impeding progress in the field. Prior work on summarization evaluation has proposed various lexical and modeling-based approaches to assess generation quality, but these metrics predominantly use correlation with human-assessed quality facets over relatively small numbers of examples to demonstrate utility (Fabbri et al., 2021; Wang et al., 2020; Deutsch and Roth, 2020; Yuan et al., 2021). This limitation of current metric evaluation implies that existing automated measures may not generalize well. Further, evaluation in the multi-document setting adds additional complexity; e.g., prior work has shown that MDS models may sometimes exploit shortcuts that are not reflected as detectable changes in automated metrics (Wolhandler et al., 2022; Giorgi et al., 2022).

To address these challenges, we collect human annotations to evaluate current models and to support automated metrics development for the medical MDS task. We construct a dataset of such evaluations using public submissions from the 2022 MSLR shared task on literature review MDS.3 Selecting top-performing models, we label the summary quality of a sample of these models' outputs on the Cochrane subtask (Wallace et al., 2020). As part of our analysis, we compare system rankings produced by automated metrics and human evaluations. Strikingly, our results highlight consistent and significant disagreements between automated metrics and humans, motivating the need for better automated evaluation metrics in this domain.

Footnote 3: [https://github.com/allenai/mslr-shared-task](https://github.com/allenai/mslr-shared-task)

We contribute the following:

* A dataset of summaries and quality annotations on participant submissions to the MSLR shared task. We include human annotations for 6 models on 8 individual quality facets (§3.2) and pairwise preferences provided by five raters (§3.3).
* An analysis of lexical features among inputs, generated, and target summaries (§4), showing a large amount of undesirable copying behavior.
* An analysis of correlations between automated evaluation metrics and human-assessed quality (§5), and the differences in system rankings produced by automated metrics versus human evaluation (§6).

We propose several novel evaluation metrics based on desired features of MSLR summaries (§5). We find that system rankings derived from commonly reported automated metrics are _not_ correlated or even _anti_-correlated with rankings produced by human assessments of quality, though some of the metrics we propose demonstrate promise in capturing certain quality facets.

## 2 Background

The MSLR shared task was introduced to bring attention to the challenging task of MDS for literature reviews. The shared task comprised two subtasks, based on the Cochrane (Wallace et al., 2020) and MS^2 (DeYoung et al., 2021) datasets. The Cochrane dataset consists of 4.6K reviews from the Cochrane database of systematic reviews. Inputs are abstracts of papers cited by the review, and target summaries are the _Authors' Conclusions_ subsections of review abstracts.
The MS^2 dataset includes 20K reviews and is semi-automatically constructed from biomedical literature reviews indexed by PubMed. We refer the reader to the original publications for details concerning dataset construction (Wallace et al., 2020; DeYoung et al., 2021). Shared task organizers provided training and validation splits for both datasets, and solicited model submissions to two public leaderboards, where models were evaluated on a hidden test split. Models were ranked on the leaderboard using ROUGE (-1, -2, -L; Lin 2004), BERTScore (Zhang et al., 2020), and Delta-EI (DeYoung et al., 2021; Wallace et al., 2020), a metric based on evidence inference (Lehman et al., 2019) classifications.

## 3 Dataset

We construct our dataset from system submissions to the Cochrane subtask leaderboard for the 2022 MSLR shared task (provided to us by task organizers). We only sample from the Cochrane subtask due to the greater number and variety of successful submissions. We include all summaries from the leaderboard, though we only perform human evaluation on summaries generated by 6 models (discussion in §3.1). We define and apply two human evaluation protocols to a sample of summaries from these 6 systems. The first (§3.2) is a facet-based evaluation derived from the analysis conducted in Otmakhova et al. (2022) and the second (§3.3) is a pairwise preference assessment.

### MDS systems

We perform human evaluation on the outputs of 6 MDS systems. Five of these are community submissions to the MSLR-Cochrane leaderboard,4 while a sixth is a baseline system (BART-Cochrane) included for reference. These systems represent different Transformer model architectures (BART, BART-large, Longformer, BigBird), input selection strategies (Shinde et al., 2022), and differential representation/attention on input tokens (Otmakhova et al., 2022; DeYoung et al., 2021). We exclude some systems from human evaluation due to poor summary quality (disfluency) or because they are baselines. We briefly describe our 6 systems below.

Footnote 4: [https://leaderboard.allenai.org/mslr-cochrane/](https://leaderboard.allenai.org/mslr-cochrane/)

**ITTC-1 / ITTC-2** Otmakhova et al. (2022) fine-tuned PRIMERA (Xiao et al., 2022) for the Cochrane subtask and exploited the use of global attention to highlight special entities and aggregate them across documents. We include two settings from the leaderboard, one that adds global attention to special entity marker tokens (ITTC-1) and one that adds global attention to entity spans (ITTC-2).

**BART-large** Tangsali et al. (2022) fine-tuned BART-large (Lewis et al., 2020) for the subtask.

**SciSpace** Shinde et al. (2022) defined an _extract-then-summarize_ approach, combining BERT-based extraction of salient sentences from input documents with a BigBird PEGASUS-based summarization model (Zaheer et al., 2020).

**LED-base-16k** Giorgi et al. (2022) fine-tuned Longformer Encoder-Decoder (Beltagy et al., 2020) for the Cochrane subtask following a protocol similar to that described in Xiao et al. (2022).

**BART (baseline)** The baseline follows the protocol in DeYoung et al. (2021) to fine-tune BART (Lewis et al., 2020) for the Cochrane subtask.

Model rankings originally reported on the MSLR-Cochrane leaderboard are provided in Table 1.

### Facet-based Human Evaluation

We adapt a facet-based human evaluation procedure from the analysis in Otmakhova et al. (2022).
In their work, the authors analyzed baseline model outputs from MS^2 (DeYoung et al., 2021) with respect to fluency, PIO alignment, evidence direction, and modality (or strength of claim). PIO stands for Population (who was studied? e.g. women with gestational diabetes), Intervention (what was studied? e.g. metformin), and Outcome (what was measured? e.g. blood pressure), and is a standard framework for structuring clinical research questions (Huang et al., 2006). These are important elements that _must_ align between generated and target summaries for the former to be considered accurate. Evidence direction describes the effect (or lack thereof) that is supported by evidence (e.g., the treatment shows a positive effect, no effect, or a negative effect, comparatively). The strength of the claim indicates how much evidence or how strong the evidence associated with the effect might be. We derive 8 questions based on this analysis:

1. _Fluency_: if the generated summary is fluent
2. _Population_: whether the population in the generated and target summaries agree
3. _Intervention_: as above for intervention
4. _Outcome_: as above for outcome
5. _Effect-target_: effect direction in the target
6. _Effect-generated_: effect direction in the generated summary
7. _Strength-target_: strength of claim in the target
8. _Strength-generated_: strength of claim in the generated summary

Of the 470 reviews in the Cochrane test set, we sample 100 reviews per system for facet annotations (600 summaries in total). For 50 reviews, we fully annotate all summaries from the 6 systems (the overlapping set); for the other 50 reviews per system, we sample randomly from among the remaining reviews for each system (the random set). Altogether, at least one system's outputs are annotated for 274 reviews in the test set. We elect for this sampling strategy to balance thoroughness (having sufficient data points to make direct comparisons between systems) and coverage (having annotations across more review topics).

For each sampled instance, we show annotators a pair of (target, generated) summaries from a review and ask them to answer 8 questions regarding these (details in App. A). A sample of 10 reviews from the overlapping set (60 summary pairs) and 10 from the random set (10 summary pairs) are annotated by two annotators. We compute inter-annotator agreement from these and report Cohen's Kappa and agreement proportions for all eight facets in Table 2. Several facets have lower agreement (Population, Outcome, and Strength-target), though most disagreements are between similar classes (e.g. partial agree vs. agree); more on this in App. A. Two annotators with undergraduate biomedical training annotated these samples. We arrived at the final annotation protocol following two rounds of pilot annotations on samples from the MS^2 dataset and discussing among authors to resolve disagreements and achieve consensus.

### Pairwise Human Evaluation

We perform pairwise comparisons to elicit human preferences between system-generated summaries and to study how facet-based quality maps to holistic summary quality. We sample pairs of system generations from our dataset, half from the overlapping set of reviews annotated for facet evaluations, and half from other reviews. A different subsample of these pairwise comparisons is provided to each of 5 raters, who are asked to complete up to 100 judgments each.
For each comparison, the annotator is given the target summary, the system A summary, the system B summary, and asked "Which of A or B more accurately reflects the content of the target summary?" where the options are A, B, or Neither. All annotators are knowledgeable in BioNLP and one annotator has biomedical training. Four annotators completed 100 pairwise comparisons; a fifth completed 50 comparisons.

We first determine system rankings per individual annotator. To tally annotations: if A is preferred over B, system A gets 1 point; if B over A, system B gets 1 point; if Neither is preferred, neither system gets a point. Systems are ranked by total points; tied systems receive the same ranking. To determine a combined ranking based on the preferences of all 5 annotators, we adopt the Borda count (Emerson, 2013), a ranked choice vote counting method that maximizes the probability of selecting the Condorcet winner.5 In this method, for each annotator (voter), we award each system the number of points corresponding to the number of systems ranked below it, e.g., for a set of systems ranked 1-6, the rank 1 system receives 5 points, the rank 2 system 4 points, and so on. System rankings resulting from the Borda count are shown in Table 1 under Pairwise-Combined.

Footnote 5: The Condorcet winner is the candidate that would win a head-to-head election against each of the other candidates assuming a plurality vote.

We perform bootstrapping over each annotator's pairwise annotations to estimate the error of the overall system rankings. We resample each individual's pairwise preferences with replacement and compute a new combined ranking. Over 10000 bootstrap samples, the average Spearman \(\rho\) of the resampled rankings against the initial rankings is 0.716 (s.d. = 0.197).

### Dataset Statistics

Our final dataset consists of 4658 summaries generated by 10 systems over 470 review instances from MSLR-Cochrane. Of these summaries, 597 from 6 systems are annotated on 8 quality facets. We also include 452 pairwise comparisons from five annotators. In addition to annotations, we compute and include automated metrics for each generated summary to facilitate analysis (more in §5).

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline System & ROUGE* & BERTS. & \(\Delta\)EI & ClaimV. & NLI & STS & PIO-Over. & Flu. & PIO & Dir. & Str. & PW-Comb. \\ \hline ITTC-1 & 5 (4) & 5 (2) & 4 (6) & 4 & 4 & 4 & 1 & 3 & 1 & 3 & 3 & 1 \\ ITTC-2 & 1 (2) & 2 (1) & 1 (2) & 2 & 2 & 2 & 5 & 1 & 4 & 6 & 6 & 2 \\ BART-large & 3 (6) & 3 (5) & 2 (4) & 3 & 3 & 3 & 4 & 4 & 5 & 2 & 2 & 3 \\ LED-base-16k & 4 (3) & 4 (3) & 5 (5) & 5 & 5 & 5 & 2 & 2 & 2 & 1 & 1 & 4 \\ SciSpace & 2 (1) & 1 (6) & 3 (3) & 1 & 1 & 1 & 6 & 6 & 6 & 4 & 4 & 6 \\ BART (baseline) & 6 (5) & 6 (4) & 6 (1) & 6 & 6 & 6 & 3 & 5 & 3 & 5 & 5 & 5 \\ \hline \hline \end{tabular} \end{table} Table 1: System rankings based on automated metrics and human evaluation (best in green). Original system ranks from the MSLR leaderboard as assessed based on ROUGE-L, BERTScore, and Delta-EI are provided in parentheses. The ranks in this table are produced over subsamples of reviews from the Cochrane test split (and macro-averaged for ROUGE and BERTScore), causing ranks to differ from leaderboard rankings. *Ranking for ROUGE is based on Avg-ROUGE-F, while leaderboard rank is based on ROUGE-L.
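To make the aggregation concrete, below is a minimal sketch of the tallying, Borda-count combination, and bootstrap procedure described above. This is our illustration, not the authors' released code; the data layouts (rank dictionaries, preference triples) are assumptions made for exposition.

```python
import random
from collections import defaultdict

def tally_ranking(prefs, systems):
    """One annotator's pairwise preferences -> ranking (1 = best, ties share a rank).
    prefs: list of (sys_a, sys_b, winner) with winner in {sys_a, sys_b, None}."""
    pts = {s: 0 for s in systems}
    for a, b, w in prefs:
        if w is not None:
            pts[w] += 1  # 1 point per pairwise win; a "Neither" judgment scores nothing
    order = sorted(systems, key=lambda s: -pts[s])
    ranks = {}
    for i, s in enumerate(order):
        ranks[s] = ranks[order[i - 1]] if i and pts[s] == pts[order[i - 1]] else i + 1
    return ranks

def borda_combine(rankings, systems):
    """Borda count: each system earns, per annotator, one point per system ranked below it."""
    points = defaultdict(float)
    n = len(systems)
    for ranking in rankings:          # e.g. rank 1 of 6 systems earns 5 points
        for s in systems:
            points[s] += n - ranking[s]
    return sorted(systems, key=lambda s: -points[s])

def bootstrap_combined(per_annotator_prefs, systems, n_boot=10000):
    """Resample each annotator's preferences with replacement; recompute combined ranking."""
    combined = []
    for _ in range(n_boot):
        resampled = [[random.choice(p) for _ in p] for p in per_annotator_prefs]
        combined.append(borda_combine([tally_ranking(p, systems) for p in resampled], systems))
    return combined
```

Computing Spearman's \(\rho\) between each resampled combined ranking and the original one yields the kind of stability estimate reported above.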
\begin{table} \begin{tabular}{l c c c} \hline \hline Question & Classes & \(\kappa\) & Agreement \\ \hline Fluency & 3 & 0.52 & 0.87 \\ Population & 4 & 0.33 & 0.56 \\ Intervention & 4 & 0.60 & 0.77 \\ Outcome & 4 & 0.24 & 0.36 \\ Effect-target & 4 & 0.85 & 0.90 \\ Effect-generated & 4 & 0.78 & 0.90 \\ Strength-target & 4 & 0.30 & 0.54 \\ Strength-generated & 4 & 0.77 & 0.90 \\ \hline \hline \end{tabular} \end{table} Table 2: Inter-annotator agreement between experts on facets (Cohen’s \(\kappa\) and proportion of agreement).

## 4 Analysis of generated summaries

We perform lexical analysis of input abstracts, system-generated summaries, and target summaries in our dataset, summarizing our findings below.

**Input copying and synthesis** To assess similarity between inputs and summaries, we first apply the evidence inference pipeline (Lehman et al., 2019; DeYoung et al., 2020)6 to identify an evidence statement in each input document and classify it with an effect direction. Between each input evidence statement and the target and generated summaries, we compute ROUGE-1 scores. We compute the _Synthesis_ rate as how often the effect direction agrees between the most similar evidence statement (by ROUGE-1 score) and the generated summary. In Table 3, we find that system generations match the effect of the closest input at a high rate (0.41-0.46), though no more frequently than we would expect based on the synthesis rate for the target summaries (0.48). Using ROUGE-1 scores, we also determine how often a generated summary is closer to an input document than the target (_Input Match_), which might indicate whether a system is performing an implicit synthesis by selecting an input and copying it. We find that systems sometimes copy inputs, but not in any consistent way.

Footnote 6: [https://github.com/pwallocate/RRnlp](https://github.com/pwallocate/RRnlp)

Finally, we observe that though the distributions of self-repeating \(n\)-grams in the target summaries of the _Test_ set and _Train_ set are very similar (Figure 3; left), in generated summaries the rate of self-repetition increases up to 500x compared to occurrence in the _Train_ set summaries (Figure 3; right). Models amplify repeating patterns from the _Train_ set to unnatural proportions!

## 5 Automated evaluation metrics

We compute automated metrics for each generated summary and include instance-level scores in our dataset. We investigate how these metrics correlate with other metrics (§5.1) and with human evaluation facets (§5.2).

**Metrics from the MSLR leaderboard:**

**ROUGE**: The leaderboard reported system-level ROUGE-1, ROUGE-2, and ROUGE-L F-scores (Lin, 2004). We report these same three metrics; in some plots, due to space constraints, we show the average of these three ROUGE metrics, which we call Avg-ROUGE-F.

**BERTScore**: We compute and report BERTScore-F (Zhang et al., 2020) for each generated summary as computed using the RoBERTa-large model.

**Delta-EI**: We compute Delta-EI as introduced by Wallace et al. (2020) and modified by DeYoung et al. (2021) for the MSLR shared task. The metric computes the probability distributions of evidence direction for all intervention-outcome (I/O) pairs between inputs and the target and generated summaries. The final score is a sum over the Jensen-Shannon Divergence of probability distributions over all I/O pairs. Lower values indicate higher similarity to the target summary.
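For reference, the lexical and model-based leaderboard metrics above can be computed per summary with off-the-shelf packages; a minimal sketch using the `rouge-score` and `bert-score` libraries follows. The Avg-ROUGE-F helper name is ours, and Delta-EI is omitted since it depends on the evidence-inference pipeline.

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def avg_rouge_f(target: str, generated: str) -> float:
    """Mean of ROUGE-1/2/L F-scores for one (target, generated) pair."""
    scores = _scorer.score(target, generated)   # dict: metric -> (precision, recall, fmeasure)
    return sum(s.fmeasure for s in scores.values()) / len(scores)

def bertscore_f(targets: list, generated: list) -> list:
    """BERTScore-F per summary, using RoBERTa-large as reported for the shared task."""
    _, _, f1 = bert_score(generated, targets, model_type="roberta-large", lang="en")
    return f1.tolist()
```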
**Other metrics we propose and examine:**

**NLI/STS/ClaimVer**: These metrics leverage Sentence-BERT (Reimers and Gurevych, 2019) and are computed as the cosine similarity between the embedding of the target summary and the embedding of the generated summary when encoded with trained SBERT models. We use three pre-trained variants of SBERT: RoBERTa fine-tuned on SNLI and MultiNLI (NLI); RoBERTa fine-tuned on SNLI, MultiNLI, and the STS Benchmark (STS); and PubMedBERT fine-tuned on MS-MARCO and the SciFact claim verification dataset (ClaimVer).

**PIO-Overlap**: Following Otmakhova et al. (2022), we employ a strong PIO extractor (Bio-LinkBERT (Yasunaga et al., 2022) trained on EBM-NLP (Nye et al., 2018)) to extract PIO spans. For each target-generated pair, we define PIO-Overlap as the intersection of the two extracted sets of PIO spans normalized by the number of PIO spans in the target summary. Spans are only considered to overlap if they have the same label and one span is a subspan of the other.

### Correlation between automated metrics

We compute Pearson's correlation coefficients between pairs of metrics (Figure 8 in App. E). Most automated metrics are significantly correlated (p \(<\) 0.01), except Delta-EI and PIO-Overlap. ROUGE and BERTScore show a strong positive correlation (r = 0.75), and NLI and STS have a strong positive correlation (r = 0.92), unsurprising since the underlying models are trained on similar data. Delta-EI presents as bimodal, with two peaks around 0 and 1. Distributions of instance-level automated metrics per system are shown in App. D.

System ranks (§6) produced by automated metrics are highly correlated except for PIO-Overlap, which is anti-correlated (Figure 1). Ordering systems based on these metrics generally results in the same or similar rankings (\(\rho\geq\) 0.77 for all pairs of metrics besides PIO-Overlap), e.g., rankings from ClaimVer, NLI, and STS are identical (\(\rho\) = 1).

### Correlation between automated metrics and human judgements

We investigate the relationship between automated metrics and human facet-based annotations. For this analysis, we normalize human facets to 4 agreement scores: Fluency, PIO, Direction, and Strength, each in the range [0, 1] (details in App. F).

\begin{table} \begin{tabular}{l c c c c} \hline \hline Metric & Flu. & PIO & Dir. & Str. \\ \hline ROUGE & -0.014 & -0.010 & 0.007 & -0.035 \\ BERTScore & -0.000 & 0.022 & 0.036 & -0.033 \\ Delta-EI & 0.066 & -0.080 & -0.060 & -0.054 \\ ClaimVer & -0.051 & 0.142** & -0.017 & -0.093* \\ NLI & -0.026 & 0.053 & -0.011 & -0.063 \\ STS & -0.042 & 0.066 & 0.001 & -0.056 \\ PIO-Overlap & 0.043 & 0.358** & 0.033 & 0.050 \\ \hline \hline \end{tabular} \end{table} Table 4: Correlation coefficients between automated metrics and human evaluation facets. There is weak to no correlation between metrics and human-assessed facets (aside from between PIO-Overlap and PIO). Statistical significance at \(\alpha\) = 0.05 is marked with *, 0.01 with **, though these thresholds for significance do not account for multiple hypothesis testing.

Correlation coefficients between automated metrics and these four agreement scores are given in Table 4; PIO correlations are plotted in Figure 10 in App. E. In general, there is weak to no correlation between metrics and human-assessed Fluency, PIO, Direction, and Strength, suggesting that automated metrics may not be adequately capturing aspects of summaries that humans determine to be important.
The exception is PIO-Overlap, which has a statistically significant correlation with human-assessed PIO agreement, and presents as a promising future metric for the MSLR task; ClaimVer is also weakly correlated with PIO agreement. Disappointingly, Delta-EI does not correlate with human-assessed Direction agreement. We investigate this further by computing empirical cumulative distribution functions (ECDFs) for each of the metrics w.r.t. Direction agreement (App. E). Delta-EI exhibits a small but desirable difference between instances where Direction agrees and instances where Direction disagrees (Agrees is more likely to have lower Delta-EI scores than Disagrees). In sum, Delta-EI shows some promise in detecting differences in Direction agreement, though further refinement of the metric is needed.

## 6 Comparing system rankings

Evaluation metrics for summarization can be used in two settings, to judge performance at the _instance_ level (comparing individual summaries) or at the _system_ level (comparing model performance over many instances). Here, we compare system-level rankings produced by automated metrics, human facet evaluation, and pairwise preference annotations to determine whether automated metrics effectively rank systems as humans would.

System rankings are computed by averaging the instance-level metric values or scores across all review instances for each system, and ranking from best to worst average score (direction depends on metric; higher is better for all scores except Delta-EI). We only average metrics over the subset of reviews for which we have human annotations. This ensures a fair comparison in the circumstance where we have selected an annotation sample that a system performs particularly well or poorly on. By doing this, the system rankings we present here are different from those computed using the same metrics from the MSLR leaderboards. We do not intend our computed rankings to be interpreted as the true system ranking; our analysis focuses on whether automated metrics and human evaluation are able to produce _similar_ rankings of systems. Table 1 shows rankings as assessed by all automated metrics and human scores; Figure 1 shows Spearman correlation coefficients.

**Rankings by automated metrics are not correlated with rankings by human evaluation** In general, system rankings from commonly reported automated metrics are uncorrelated or anti-correlated (lighter blue) with system rankings produced by human judgments. System rankings from automated metrics are highly correlated among themselves (\(\rho\) close to 1), aside from PIO-Overlap. PIO-Overlap rankings are strongly correlated with rankings from human PIO agreement. PIO-Overlap and Delta-EI ranks also correlate with the combined pairwise rankings, again suggesting that these two metrics may be the most promising for capturing human notions of summary quality.

**Pairwise assessments do not weigh facets equally** Pairwise-combined rankings are correlated with facet-based rankings for Fluency and PIO, but not Direction and Strength of claim. This may indicate that Fluency and PIO are more detectable problems, or that issues in Fluency and PIO are more prevalent in our data. The rank correlations also show that Direction and Strength are highly correlated and may capture similar aspects of system-level summary quality, making the case for dropping one of the two (likely Strength) in future annotations.
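The rank comparisons in this section reduce to a simple computation; a minimal sketch (our illustration) of producing a system ranking from instance-level scores and comparing two rankings with Spearman's \(\rho\):

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def system_ranking(scores, higher_is_better=True):
    """scores: dict mapping system -> list of instance-level metric values,
    restricted to the annotated subset of reviews. Returns ranks, 1 = best."""
    systems = sorted(scores)
    means = np.array([np.mean(scores[s]) for s in systems])
    ranks = rankdata(-means) if higher_is_better else rankdata(means)
    return dict(zip(systems, ranks))

def rank_correlation(ranking_a, ranking_b):
    """Spearman's rho (and p-value) between two system rankings over the same systems."""
    systems = sorted(ranking_a)
    return spearmanr([ranking_a[s] for s in systems],
                     [ranking_b[s] for s in systems])
```

Note that `higher_is_better=False` would be used for Delta-EI, where lower values are better.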
**Pairwise preferences suggest that annotators weigh facets differently** In Figure 4, we show Spearman correlation coefficients of facet-based rankings against the rankings of five pairwise annotators and the combined pairwise ranking. These coefficients suggest that annotators weigh facets differently when comparing system output. Annotator 1 ranks similarly to Fluency and PIO facets, Annotators 2 and 5 rank similarly to PIO and Direction facets, while Annotators 3 and 4's rankings are uncorrelated with most facets.

Figure 4: Spearman rank correlations between system ranks for each pairwise annotator and ranks derived from facet-based annotation. Annotators weigh quality facets differently when performing pairwise judgments.

## 7 Related work

Beyond ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020), an extensive list of \(n\)-gram (Papineni et al., 2002; Banerjee and Lavie, 2005) and model-based (Zhao et al., 2019; Gao et al., 2020; Martins et al., 2020; Sellam et al., 2020; Yuan et al., 2021) summarization evaluation metrics has been proposed in the literature. In particular, model-based approaches that use question generation and question answering (Wang et al., 2020; Durmus et al., 2020; Deutsch et al., 2021) or NLI-based models (Kryscinski et al., 2020) have been proposed to assess summary factual consistency. Fabbri et al. (2021) and Deutsch et al. (2022) provide more thorough evaluations of many of these metrics on select summarization tasks. We perform evaluations using metrics previously reported on the MSLR task, and leave a systematic evaluation of metrics on this task and others to future work.

In Zhang et al. (2020), the authors performed fact verification on generated radiology reports using an information extraction module, by aligning the extracted entities with entities found in the reference summary. Our PIO-Overlap metric similarly uses a PIO entity extraction module to assess concept overlap between generated and reference summaries. Falke et al. (2019) proposed to use NLI models to rank summaries by average entailment score per sentence against the input documents; this shares similarities with the Delta-EI score we evaluated, which attempts to quantify agreement relative to the reference summary with respect to the direction of evidence reported. Deutsch et al. (2022) investigated system-level rankings produced by automated metrics and human evaluation and found minimal correlation between them, a finding corroborated by our work. Liu et al. (2022) introduced the robust summarization evaluation (RoSE) benchmark, containing human judgments for system outputs on the CNN/DM, XSum, and SamSum datasets. We extend such work into a novel domain (medical MDS for literature review) and demonstrate differences in automated metric performance and human evaluation in our domain and task. For example, though ROUGE correlates with human preferences in single-document (CNN/DM) and multi-document (MultiNews) news summarization, we find that it is poorly correlated with human judgments and preferences in the MSLR task.

Recent developments in large language modeling have also shifted the goalposts for evaluation. Goyal et al. (2022) found that although humans overwhelmingly prefer zero-shot GPT-3 summaries for news summarization, automated metrics were unable to capture this preference; they introduced a benchmark of human judgments and rationales comparing system outputs on the single-document news summarization task. More recently, Shaib et al.
(2023) demonstrated that GPT-3 can be adapted for the MSLR task, and though the model outputs are generally found by human annotators to be faithful to the inputs, in the MDS setting the evidence direction often disagrees with the reference. Detecting these disagreements and developing automated metrics that can capture such disagreements are valuable pursuits and one of the motivations for our work. Further investigation into automated metrics developed using limited human evaluation benchmarks, such as the dataset we introduce here, will be a goal for future work.

## 8 Discussion

MDS for literature review may involve notions of summary quality not readily captured by standard summarization evaluation metrics. For example, our lexical analysis of generated summaries reveals a concerning level of self-repetition behavior, which is not penalized by standard metrics. Through two independent human evaluations (facet-based and pairwise preferences), we also show that automated metrics such as ROUGE and BERTScore are poorly correlated or even anti-correlated with human-assessed quality. This is not to say that these metrics do not provide any utility. Rather, further work is needed to understand what aspects of summary quality these metrics capture, and how to use them in combination with other metrics, novel metrics yet to be introduced, as well as human evaluation to better assess progress. We note that ours is not a systematic analysis of all automated summarization evaluation metrics, but is a focused study on evaluation metrics reported for the MSLR shared task and which we introduce under the hypothesis that they may be useful for capturing some quality facets associated with this task. For those interested in the former, please refer to studies such as Fabbri et al. (2021) or Deutsch et al. (2022).

A positive finding from our work is the promise of the PIO-Overlap and Delta-EI metrics. Delta-EI shows some potential to capture evidence directional agreement between summaries, though the metric as currently implemented is noisy and does not cleanly separate summaries that agree and disagree on direction. PIO-Overlap, a metric we introduce, correlates with human-assessed PIO agreement, suggesting that it could be a performant, scalable alternative to human evaluation of this quality facet. Still, more work is needed to probe how variants of these metrics could be adapted to evaluate performance on MSLR and other MDS tasks.

Finally, we note that human evaluation is difficult because people value different qualities in summaries. The rank-based analysis we perform does not account for interactions between related quality facets and is unable to elicit relationships between overall quality and individual quality facets. The majority of pairwise preference annotations in our dataset also include short free-text justifications for preference decisions, which could be used to further study this problem. Other promising directions for future work involve studying how to optimally elicit human preferences, such as how to sample instances for labeling to maximize our confidence in the resulting system-level rankings.

## 9 Conclusions

There have been major recent advances in the generative capabilities of large language models. Models like ChatGPT,8 GPT-3 (Brown et al.
2020), and PubmedGPT9 demonstrate aptitude on many tasks but have also been shown to confidently produce factually incorrect outputs in specialized and technical domains.10 Medicine is a specialized domain where incorrect information in generated outputs is difficult to identify and has the potential to do harm. There is therefore a pressing need for the community to develop better methods to assess the quality and suitability of generated medical texts. Our investigation confirms that there is significant room for improvement on medical MDS evaluation. We hope that the resources and findings we contribute in this work can assist the community towards this goal.

Footnote 8: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

Footnote 9: [https://hai.stanford.edu/news/stanford-crfm-introduces-pubmedgpt-27b](https://hai.stanford.edu/news/stanford-crfm-introduces-pubmedgpt-27b)

Footnote 10: Stack Overflow banned ChatGPT responses due to the high rate of inaccurate and misleading information.

### Limitations

Though we include 6 systems in our annotation which reflect the current state-of-the-art, all of the models are Transformer-based and fine-tuned on just the Cochrane dataset, which may limit the diversity of our generated summaries. Additionally, none of the systems are generating summaries that approach the accuracy of human-written summaries. As a consequence, though the summaries in our dataset span the spectrum of quality, they may have less coverage on the higher end of quality (summaries approaching the accuracy and utility of human-written review summaries).

Our analysis of evaluation metrics also assumes the existence of reference summaries. In many real-world summarization scenarios, reference summaries do not exist, and reference-free evaluation metrics are needed for assessment. We refer the reader to related work in reference-free summarization evaluation (Vasilyev et al., 2020; Gao et al., 2020; Luo et al., 2022); such metrics have been found in some settings to exhibit even lower correlation with human notions of summary quality (Fabbri et al., 2021), and their performance on MSLR evaluation is unknown and left to future work.

Our notions of summary quality also do not necessarily correspond to clinical utility. As with anything in the medical setting, it is of utmost importance to verify correctness and the quality of evidence before using any generated text to make or guide clinical decisions.

### Ethical Considerations

As with other applications of NLP in the medical domain, results of MSLR systems must be verified by domain experts before they should be considered for use in clinical guidance. We do not intend the system outputs included in our dataset and analysis to be used for such end applications, as this would be clearly premature given the low quality of generated summaries and our lack of ability to assess the prevalence of factuality errors in these summary texts. Nonetheless, we believe that medical MDS holds eventual promise, and it is of vital importance that we study its challenges and how to measure and detect quality issues in generated text.

## Acknowledgements

This research was partially supported by National Science Foundation (NSF) grant RI-2211954, and by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086.
YO and THT are supported by the Australian Government through the Australian Research Council Training Centre in Cognitive Computing for Medical Technologies (project number ICI70200030).
2302.07641
Fuzzification of Fractal Calculus
In this manuscript, fractal and fuzzy calculus are summarized. Fuzzy calculus is formulated in terms of the fractal limit, continuity, derivative, and integral. The fractal fuzzy calculus is a new framework that includes fractal fuzzy derivatives and fractal fuzzy integrals. In this framework, fuzzy number-valued functions with fractal support are the solutions of fractal fuzzy differential equations. Different kinds of fractal fuzzy differential equations are given and solved.
Alireza Khalili Golmankhaneh, Kerri Welch, Cristina Serpa, Palle E. T. Jørgensen
2023-02-13T19:19:24Z
http://arxiv.org/abs/2302.07641v1
# Fuzzification of Fractal Calculus ###### Abstract In this manuscript, fractal and fuzzy calculus are summarized. Fuzzy calculus is formulated in terms of the fractal limit, continuity, derivative, and integral. The fractal fuzzy calculus is a new framework that includes fractal fuzzy derivatives and fractal fuzzy integrals. In this framework, fuzzy number-valued functions with fractal support are the solutions of fractal fuzzy differential equations. Different kinds of fractal fuzzy differential equations are given and solved. **Keywords:** Fractal fuzzy differential equations, fuzzy number-valued functions, fractal fuzzy derivatives, fractal fuzzy integral **2010 Mathematics Subject Classification:** 26E50, 34A07, 28A80 ## 1 Introduction Fractal geometry mathematically describes complex shapes that cannot be captured by Euclidean geometry [1]. Such shapes, called fractals, are found in nature, for example in clouds, mountains, and lightning [2]. The most important properties of fractals are self-similarity and non-integer dimension. Fractals are non-differentiable in the sense of ordinary calculus since they have a rough rather than smooth structure. Their fractal dimensions exceed their topological dimensions, and they appear similar at various scales [3, 4]. Fractals carry different measures, such as the Hausdorff measure. In this context, ordinary calculus, which is based on length, area, and volume, fails to define derivatives and integrals on them [5]. Many researchers have tried to formulate analysis on fractals in order to explain their physical properties [6], e.g. harmonic analysis [7, 8, 9], measure theory [10], fractional Brownian motion and probability-theoretical approaches [11, 12], fractional space [13], and fractional calculus [14, 15]. In seminal papers, ordinary calculus was adapted to define derivatives and integrals of functions with fractal support, such as Cantor sets and Koch curves [16, 17, 18]. This new framework, which is a generalization of ordinary calculus, is called fractal calculus or \(F^{\alpha}\)-calculus. Fractal calculus is simple, constructive, and algorithmic, and has been applied in physics [19]. Fractal calculus has been developed in different directions, such as the stability of solutions of fractal differential equations, nonlocal reverse Minkowski fractal integral inequalities, and properties of the staircase function [20, 21, 22, 23, 24, 25]. Fractal calculus was generalized to fractal cubes and tartan Cantor spaces, and Laplace equations on fractal cubes were solved [26, 27]. Fractal derivatives and integrals were worked out for fractal interpolation functions and Weierstrass functions [28]. Random variables, stochastic processes, and stable distributions on fractals were defined, and corresponding stochastic differential equations were solved [29, 30]. Fractal Laplace, Fourier, and Sumudu transforms were defined in order to solve fractal differential equations, with applications to electrical circuits, economics, and dynamics [31, 32, 33, 34, 35, 36, 37]. Fractal anomalous diffusion has been formulated as a diffusion process in fractal media, exhibiting a power-law relationship between the mean squared displacement and time [38, 39, 40]. Fuzzy sets, fuzzy numbers, fuzzy-valued functions, and fuzzy derivatives and integrals were introduced and applied to model processes with uncertainty in science, engineering, and the social sciences [41, 42, 43, 44].
A linear second-order differential equation with constant coefficients and boundary values expressed by fuzzy numbers has been solved [45]. The fuzzy optimal control problem has been considered to optimize the expected values of appropriate fuzzy objective functions [46]. The differentiability of fuzzy number-valued functions based on the Hausdorff distance between fuzzy numbers has been suggested [47]. First-order linear fuzzy differential equations under differential inclusions and strongly generalized differentiability approaches have been studied [48]. Linear fuzzy differential equations have been investigated by applying the concept of generalized differentiability, together with conditions for the existence of solutions [49]. A first-order fuzzy differential equation (FDE) with a fuzzy initial value was solved [50]. In this paper, we introduce a new framework that generalizes fractal calculus to include fuzzy-valued functions. The plan of the paper is as follows: In Section 2, we summarize fractal calculus and fuzzy calculus. Fractal fuzzy calculus is formulated and defined in Section 3. In Section 4, \(\alpha\)-order fractal fuzzy differential equations are suggested and solved. Section 5 is devoted to the conclusion. ## 2 Preliminaries In this section, we summarize the fractal calculus on fractal curves [16, 17, 18, 19]. ### Fractal calculus on fractal curves **Definition 1**.: _For a fractal curve \(F\) and a subdivision \(P_{[a,b]}\), \([a,b]\subset[a_{0},b_{0}]\subset\mathbb{R}\), the mass function is defined by_ \[\gamma^{\alpha}(F,a,b)=\lim_{\delta\to 0}\inf_{|P|\leq\delta}\sum_{i=0}^{n-1}\frac{|\mathbf{w}(t_{i+1})-\mathbf{w}(t_{i})|^{\alpha}}{\Gamma(\alpha+1)}, \tag{1}\] _where \(|.|\) denotes the Euclidean norm on \(\mathbb{R}^{n}\), \(1\leq\alpha\leq n\), \(P_{[a,b]}=\{a=t_{0},...,t_{n}=b\}\), and \(|P|=\max_{0\leq i\leq n-1}(t_{i+1}-t_{i})\) for a subdivision \(P\)._ **Definition 2**.: _The \(\gamma\)-dimension of \(F\) is defined by_ \[\dim_{\gamma}(F)=\inf\{\alpha:\gamma^{\alpha}(F,a,b)=0\}=\sup\{\alpha:\gamma^{\alpha}(F,a,b)=\infty\} \tag{2}\] **Definition 3**.: _The rise function of a fractal curve \(F\) is defined by_ \[S_{F}^{\alpha}(u)=\left\{\begin{array}{ll}\gamma^{\alpha}(F,p_{0},u),&u\geq p_{0};\\ -\gamma^{\alpha}(F,u,p_{0}),&u<p_{0}.\end{array}\right. \tag{3}\] _where \(u\in[a_{0},b_{0}]\), and \(S_{F}^{\alpha}(u)\) gives the mass of the fractal curve \(F\) up to the point \(u\)._ **Definition 4**.: _Let \(f:F\rightarrow\mathbb{R}\) be a function. Then the \(F\)-limit of \(f\) as \(\theta^{\prime}\rightarrow\theta\) through points of \(F\) is \(l\), if for a given \(\epsilon>0\) there exists \(\delta>0\) such that_ \[\theta^{\prime}\in F\ \ \text{and}\ \ |\theta^{\prime}-\theta|<\delta\Rightarrow|f(\theta^{\prime})-l|<\epsilon \tag{4}\] _or_ \[\underset{\theta^{\prime}\rightarrow\theta}{F\text{-}\lim}\,f(\theta^{\prime})=l. \tag{5}\] **Definition 5**.: _A function \(f:F\rightarrow\mathbb{R}\) is said to be \(F\)-continuous at \(\theta\) if_ \[\underset{\theta^{\prime}\rightarrow\theta}{F\text{-}\lim}\,f(\theta^{\prime})=f(\theta).
\tag{6}\] **Definition 6**.: _The fractal derivative, or \(F^{\alpha}\)-derivative, is defined by_ \[D_{F}^{\alpha}f(\theta)=\underset{\theta^{\prime}\rightarrow\theta}{F\text{-}\lim}\ \frac{f(\theta^{\prime})-f(\theta)}{J(\theta^{\prime})-J(\theta)}, \tag{7}\] _where \(F\text{-}\lim\) indicates the fractal limit (see [18]), \(\mathbf{w}(u)=\theta\) and \(S_{F}^{\alpha}(u)=J(\theta)\)._ _Remark 1_.: We note that the Euclidean distance from the origin up to a point \(\theta=\mathbf{w}(u)\) is given by \(L(\theta)=L(\mathbf{w}(u))=|\mathbf{w}(u)|\). **Definition 7**.: _The fractal integral, or \(F^{\alpha}\)-integral, is defined by_ \[\int_{C(a,b)}f(\theta)d_{F}^{\alpha}\theta=\sup_{P[a,b]}\sum_{i=0}^{n-1}\inf_{\theta\in C(t_{i},t_{i+1})}f(\theta)(J(\theta_{i+1})-J(\theta_{i}))=\inf_{P[a,b]}\sum_{i=0}^{n-1}\sup_{\theta\in C(t_{i},t_{i+1})}f(\theta)(J(\theta_{i+1})-J(\theta_{i})), \tag{8}\] _where \(t_{i}=\mathbf{w}^{-1}(\theta_{i})\), and \(C(a,b)\) is the section of the curve lying between the points \(\mathbf{w}(a)\) and \(\mathbf{w}(b)\) on the fractal curve \(F\) [18]._ ### Fuzzy calculus on the real line In this section, we review the fuzzy calculus which will be used for the fuzzification of fractal calculus [41, 42, 43, 44]. A generalized Hukuhara difference for fuzzy sets and new generalized differentiability concepts for fuzzy-valued functions were given in [51, 52]. **Definition 8**.: _Let \(X\neq\emptyset\). Then, a fuzzy set \(A\subset X\) is characterized by its membership function \(u_{A}(x):X\rightarrow[0,1]\). Thus \(u_{A}(x)\) is the degree of membership of the element \(x\) in the fuzzy set \(A\) for each \(x\in X\)._ **Definition 9**.: _Let \(A\) be a fuzzy subset of the real numbers, \(u_{A}(x):\mathbb{R}\rightarrow[0,1]\). Then \(A\) is called a fuzzy number if it satisfies the following axioms:_ 1. \(A\) _is normal. This means that there exists_ \(x_{0}\) _in_ \(\mathbb{R}\)_, such that_ \(u_{A}(x_{0})=1\)_._ 2. \(A\) _is convex, namely,_ \[u_{A}(tx+(1-t)y)\geq\min\{u_{A}(x),u_{A}(y)\},\ \forall t\in[0,1],\ x,\ y\in\mathbb{R}.\] (9) 3. \(u_{A}(x)\) _is upper semi-continuous on_ \(\mathbb{R}\)_, i.e., for a given_ \(\epsilon>0\)_, there exists_ \(\delta>0\) _such that_ \[|x-x_{0}|<\delta\Rightarrow u_{A}(x)-u_{A}(x_{0})<\epsilon.\] (10) 4. _The support of_ \(u_{A}(x)\)_,_ \[supp(u_{A}(x))=cl_{\mathbb{R}}\{x\in\mathbb{R};u_{A}(x)>0\},\] (11) _is compact._ **Definition 10**.: _A fuzzy number \(A\) is determined by a pair of functions \(A=(A^{-}(r),A^{+}(r))\), with \(A^{-}(r),A^{+}(r):[0,1]\rightarrow\mathbb{R}\), that satisfy the following conditions:_ 1. \(A^{-}(r)=A^{-}_{r}\in\mathbb{R}\) _is a bounded, monotonic, increasing, left-continuous function in_ \((0,1]\) _and it is right-continuous at_ \(0\)_._ 2. \(A^{+}(r)=A^{+}_{r}\in\mathbb{R}\) _is a bounded, monotonic, decreasing, left-continuous function in_ \((0,1]\) _and it is right-continuous at_ \(0\)_._ 3. _For_ \(r\in(0,1]\) _we have_ \(A^{-}(r)\leq A^{+}(r)\)_._ _Definition 10 is called the parametric form of fuzzy numbers._ **Definition 11**.: _The \(r\)-cut of a fuzzy number \(A\) (the level-wise form) is defined by_ \[[A]_{r}=A_{r}=\{x\in\mathbb{R};u_{A}(x)\geq r\} \tag{12}\] _where \(A_{r}\) is the closed interval \(A_{r}=[A^{-}_{r},A^{+}_{r}]\) for any \(r\in[0,1]\).
We note that \([A]_{0}=supp(u_{A}(x))\) and that \(F_{\mathbb{R}}\) denotes the space of fuzzy numbers._ **Definition 12**.: _For every \(A,B\in F_{\mathbb{R}}\) and \(\lambda\in\mathbb{R},\ r\in[0,1]\), addition and scalar multiplication are defined by_ \[(A\oplus B)_{r}=A_{r}+B_{r},\quad(\lambda\odot A)_{r}=\lambda A_{r}. \tag{13}\] **Definition 13**.: _The Hausdorff distance between two fuzzy numbers \(A,\ B\), using their \(r\)-cuts, is defined by_ \[d_{H}(A,B)=\sup_{0\leq r\leq 1}\max\{|A_{r}^{-}-B_{r}^{-}|,|A_{r}^{+}-B_{r}^{+}|\} \tag{14}\] _Remark 2_.: The set of fuzzy numbers \((F_{\mathbb{R}},d_{H})\), with addition and scalar multiplication given in Definition 12, is a complete metric space. **Definition 14**.: _Consider \(A,\ B\in F_{\mathbb{R}}\). The Hukuhara difference of \(A,B\) is defined by_ \[C=A\ominus B, \tag{15}\] _if \(A=B\oplus C\)._ **Definition 15**.: _Consider a fuzzy number-valued function \(f:\mathbb{R}\to F_{\mathbb{R}}\) and \(x_{0}\in\mathbb{R}\). Then \(l\) is called the limit of \(f\) at the point \(x_{0}\) if for every given \(\epsilon>0\), there exists \(\delta>0\) such that [42, 43, 53]_ \[0<|x-x_{0}|<\delta\Rightarrow d_{H}(f(x),l)<\epsilon, \tag{16}\] _or,_ \[\lim_{x\to x_{0}}f(x)=l, \tag{17}\] _if it exists, where \(d_{H}\) is the Hausdorff distance._ **Definition 16**.: _The fuzzy function \(f\) is called fuzzy continuous if [43, 53]_ \[\lim_{x\to x_{0}}f(x)=f(x_{0}). \tag{18}\] **Definition 17**.: _A fuzzy number-valued function \(f:\mathbb{R}\to F_{\mathbb{R}}\) is called Hukuhara differentiable if there exists \(f^{\prime}(x)\in F_{\mathbb{R}}\) such that_ * _Case 1. (_\(I\)_-differentiable)_ \[f^{\prime}(x)=\lim_{y\to x}\frac{f(y)\ominus f(x)}{y-x},\quad y>x\] (19) * _Case 2. (_\(II\)_-differentiable)_ \[f^{\prime}(x)=\lim_{y\to x}\frac{f(x)\ominus f(y)}{y-x},\quad y>x\] (20) _where_ \(f^{\prime}(x)\) _is called the fuzzy derivative of_ \(f\) _at_ \(x\)_._ _Theorem 1_.: Let a fuzzy number-valued function \(f:\mathbb{R}\to F_{\mathbb{R}}\) be denoted by \(f(x)=(\underline{f}(x,r),\overline{f}(x,r))\) for each \(r\in[0,1]\) [54]. Then [49, 55] 1. If \(f\) is \(I\)-differentiable, then we have \[f^{\prime}(x)=(\underline{f}^{\prime}(x,r),\overline{f}^{\prime}(x,r)). \tag{21}\] 2. If \(f\) is \(II\)-differentiable, then we have \[f^{\prime}(x)=(\overline{f}^{\prime}(x,r),\underline{f}^{\prime}(x,r)). \tag{22}\] **Definition 18**.: _Let \(f(x)\) be a fuzzy number-valued function. Then the fuzzy Riemann integral is defined as [42]_ \[J=FR\int_{a}^{b}f(x)dx=\oplus\sum_{i=0}^{n}\Delta x_{i}\odot f(x_{i}), \tag{23}\] _where \(\Delta x_{i}=x_{i+1}-x_{i}\) and \(\{a=x_{0}<x_{1}<...<x_{n}=b\}\) is a partition of \(I=[a,b]\). The fuzzy Riemann integral of \(f(x)\) is \(J\) if for every given \(\epsilon>0\), there exists \(\delta>0\) such that_ \[d_{H}\bigg{(}\oplus\sum_{i=0}^{n}\Delta x_{i}\odot f(x_{i}),J\bigg{)}<\epsilon. \tag{24}\] _where \(J\) is a fuzzy number._ **Definition 19**.: _Let \(f:I\to F_{\mathbb{R}}\) be a triangular number-valued function with \(f(x)=(f_{1}(x),f_{2}(x),f_{3}(x))\) and \(x_{0}\in I\). Then the fuzzy integral is defined by [53]_ \[\int_{a}^{b}f(x)dx=\bigg{(}\int_{a}^{b}f_{1}(x)dx,\int_{a}^{b}f_{2}(x)dx,\int_{a}^{b}f_{3}(x)dx\bigg{)} \tag{25}\] ## 3 Fuzzy fractal calculus on fractal curves In this section, we introduce fractal fuzzy calculus. **Definition 20**.: _Let \(f(\theta):F\to F_{\mathbb{R}}\) be a fuzzy number-valued function on a fractal curve \(F\)._
Then the fuzzy \(F\)-limit of \(f\) at \(\theta_{0}\) through \(F\) is \(l\), if for a given \(\epsilon>0\), there exists \(\delta>0\), such that_ \[\theta\in F,\quad\text{and}\quad|\theta-\theta_{0}|<\delta\Rightarrow d_{H}(f(\theta),l)<\epsilon, \tag{26}\] _or_ \[\underset{\theta\rightarrow\theta_{0}}{FF\text{-}\lim}\,f(\theta)=l \tag{27}\] _where \(d_{H}\) is the Hausdorff distance._ **Definition 21**.: _Let \(f(\theta):F\to F_{\mathbb{R}}\) be a fuzzy number-valued function on \(F\). Then, \(f\) is called fuzzy \(F\)-continuous if_ \[\underset{\theta\rightarrow\theta_{0}}{FF\text{-}\lim}\,f(\theta)=f(\theta_{0}). \tag{28}\] **Definition 22**.: _Let \(f(\theta):F\to F_{\mathbb{R}}\) be a fuzzy number-valued function. Then the fractal Hukuhara derivative of \(f\) at \(\theta_{0}\in F\) is defined by_ * _Case 1. (_\(I\)_-_\(F^{\alpha}\)_-differentiable)_ \[D^{\alpha}_{F,H}f(\theta_{0})=\underset{\theta\rightarrow\theta_{0}}{FF\text{-}\lim}\ \frac{f(\theta)\ominus f(\theta_{0})}{J(\theta)-J(\theta_{0})},\quad\theta>\theta_{0}.\] (29) * _Case 2. (_\(II\)_-_\(F^{\alpha}\)_-differentiable)_ \[D^{\alpha}_{F,H}f(\theta_{0})=\underset{\theta\rightarrow\theta_{0}}{FF\text{-}\lim}\ \frac{f(\theta_{0})\ominus f(\theta)}{J(\theta)-J(\theta_{0})},\quad\theta>\theta_{0}.\] (30) _where \(D^{\alpha}_{F,H}f(\theta_{0})\) is a fuzzy number._ **Definition 23**.: _Let \(f(\theta)\) be a fractal fuzzy number-valued function. Then the fractal fuzzy Riemann integral is defined as [42]_ \[J=FFR\int_{C(a,b)}f(\theta)d^{\alpha}_{F}\theta=\oplus\sum_{i=0}^{n}\Delta J_{i}\odot f(\theta_{i}), \tag{31}\] _where \(\Delta J_{i}=J(\theta_{i+1})-J(\theta_{i})\). The fractal fuzzy Riemann integral of \(f(\theta)\) is \(J\) if for a given \(\epsilon>0\), there exists \(\delta>0\) such that_ \[d_{H}\bigg{(}\oplus\sum_{i=0}^{n}\Delta J_{i}\odot f(\theta_{i}),J\bigg{)}<\epsilon. \tag{32}\] **Definition 24**.: _Let \(f:F\to F_{\mathbb{R}}\) be a fractal triangular number-valued function, \(f(\theta)=(f_{1}(\theta),f_{2}(\theta),f_{3}(\theta))\), and \(\theta_{0}\in F\); then_ \[\int_{C(a,b)}f(\theta)d^{\alpha}_{F}\theta=\bigg{(}\int_{C(a,b)}f_{1}(\theta)d^{\alpha}_{F}\theta,\int_{C(a,b)}f_{2}(\theta)d^{\alpha}_{F}\theta,\int_{C(a,b)}f_{3}(\theta)d^{\alpha}_{F}\theta\bigg{)}. \tag{33}\] ## 4 Fractal fuzzy differential equations First-order linear fuzzy differential equations have been solved using the generalized differentiability concept [49, 55]. In this section, an \(\alpha\)-order fractal fuzzy differential equation (FDE) is given. It is then replaced by its equivalent parametric form, and the resulting system, which contains two fractal differential equations, is solved. Consider the following fractal fuzzy differential equation with an initial condition: \[D^{\alpha}_{F,H}x(\theta)=f(J(\theta),x(\theta)),\quad\tilde{x}(\theta_{0})=\tilde{x}_{0},\quad\theta\in F, \tag{34}\] where \(f:F\times F_{\mathbb{R}}\to F_{\mathbb{R}}\) is a fuzzy-valued function and \(\tilde{x}_{0}\in F_{\mathbb{R}}\). To solve Eq.(34), we first solve the 1-cut and 0-cut of Eq.(34), in the following form \[\left\{\begin{array}{l}(D^{\alpha}_{F,H}x)^{[1]}(\theta)=f^{[1]}(J(\theta),x(\theta)),\\ \\ x^{[1]}(\theta_{0})=\tilde{x}_{0}^{[1]}\ \ \theta_{0}\in[0,\Theta].\end{array}\right. \tag{35}\] and \[\left\{\begin{array}{l}(D^{\alpha}_{F,H}x)^{[0]}(\theta)=f^{[0]}(J(\theta),x(\theta)),\\ \\ x^{[0]}(\theta_{0})=\tilde{x}^{[0]}_{0}\ \ \theta_{0}\in[0,\Theta].\end{array}\right. \tag{36}\] Then, by solving Eqs.(35) and (36), we can find \(\tilde{x}(\theta)\), which is the solution of the fractal fuzzy differential equation Eq.(34).
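Before distinguishing the two differentiability cases below, the following minimal numerical sketch (ours, not from the paper) illustrates this cut-wise strategy on the dynamics \(D^{\alpha}_{F,H}x=x+\tilde{c}\) of Example 1 in the next section. It assumes a precomputed grid of staircase values \(J=S^{\alpha}_{F}(u)\) and uses the fact that the fractal derivative acts as \(d/dJ\) along the curve; the function names and the uniform grid are our own illustrative choices.

```python
import numpy as np

def euler_in_J(rhs, x0, J):
    """Euler-integrate dx/dJ = rhs(x) on a monotone grid of staircase values J."""
    x = np.empty_like(J)
    x[0] = x0
    for i in range(len(J) - 1):
        x[i + 1] = x[i] + (J[i + 1] - J[i]) * rhs(x[i])
    return x

def solve_case_I(J, r):
    """0-cut and 1-cut systems of Example 1 under I-differentiability, recombined."""
    lo0 = euler_in_J(lambda x: x - 1.0, 0.0, J)  # lower 0-cut, cf. Eq.(49) below
    hi0 = euler_in_J(lambda x: x + 1.0, 2.0, J)  # upper 0-cut, cf. Eq.(49) below
    one = euler_in_J(lambda x: x, 1.0, J)        # 1-cut, cf. Eq.(50) below (endpoints coincide)
    # Recombine the cuts by linear interpolation, cf. Eq.(42) below
    return (1 - r) * lo0 + r * one, (1 - r) * hi0 + r * one

# Any monotone staircase grid works here; with J uniform on [0, 1] the result
# approaches the closed forms exp(J)(2r-1) - r + 1 and r - exp(J)(2r-3) - 1.
J = np.linspace(0.0, 1.0, 2001)
lower, upper = solve_case_I(J, r=0.3)
```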
Here we consider two cases: Case (I): Suppose that \(\tilde{x}(\theta)\) is \(I\)-\(F^{\alpha}\)-differentiable. Then, we can write \[D^{\alpha}_{F,H}x(\theta)=[D^{\alpha}_{F,H}\underline{x}(\theta,r),D^{\alpha }_{F,H}\overline{x}(\theta,r)]. \tag{37}\] In view of Eqs.(34) and (37) for \(r\in[0,1]\), we have \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}(\theta,r)=\underline{ f}(\theta,r)\ \ \ \theta_{0}\leq\theta\leq\Theta\\ \\ D^{\alpha}_{F,H}\overline{x}(\theta,r)=\overline{f}(\theta,r)\ \ \ \theta_{0}\leq\theta\leq\Theta.\end{array}\right. \tag{38}\] Hence \[\left\{\begin{array}{l}[(1-r)D^{\alpha}_{F,H}\underline{x}^{[0]}(\theta)+rD ^{\alpha}_{F,H}\underline{x}^{[1]}(\theta)=(1-r)\underline{f}^{[0]}(\theta)+ r(\underline{f}^{[1]})(\theta),\\ \\ (1-r)D^{\alpha}_{F,H}\overline{x}^{[0]}(\theta)+rD^{\alpha}_{F,H}\overline{x}^ {[1]}(\theta)=(1-r)\overline{f}^{[0]}(\theta)+r(\overline{f}^{[1]})(\theta), \\ \\ \underline{x}(\theta_{0},r)=(1-r)\underline{x}^{[0]}(\theta_{0})+r\underline{ x}^{[1]}(\theta_{0}),\\ \\ \overline{x}(\theta_{0},r)=(1-r)\overline{x}^{[0]}(\theta_{0})+r\overline{x}^ {[1]}(\theta_{0}).\end{array}\right. \tag{39}\] It follows that \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[0]}(\theta)= \underline{f}^{[0]}(\theta),\\ \\ D^{\alpha}_{F,H}\overline{x}^{[0]}(\theta)=\overline{f}^{[0]}(\theta)\\ \\ \underline{x}^{[0]}(\theta_{0})=\underline{x}_{0}^{[0]}\\ \\ \overline{x}^{[0]}(\theta_{0})=\overline{x}_{0}^{[0]},\end{array}\right. \tag{40}\] and \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[1]}(\theta)= \underline{f}^{[1]}(\theta),\\ \\ D^{\alpha}_{F,H}\overline{x}^{[1]}(\theta)=\overline{f}^{[1]}(\theta)\\ \\ \underline{x}^{[1]}(\theta_{0})=\underline{x}_{0}^{[1]}\\ \\ \overline{x}^{[1]}(\theta_{0})=\overline{x}_{0}^{[1]},\end{array}\right. \tag{41}\] One can find \(\underline{x}^{[0]}(\theta),\overline{x}^{[0]}(\theta),\underline{x}^{[1]}( \theta),\overline{x}^{[1]}(\theta)\) by solving Eqs.(40) and (41). Therefore we obtain the solution of Eq.(34) using 0-cut and 1-cut solutions as follows: \[\tilde{x}(\theta)=[\underline{x}(\theta,r),\overline{x}(\theta,r)]=[(1-r) \underline{x}^{[0]}(\theta)+r\underline{x}^{[1]}(\theta),(1-r)\overline{x}^{ [0]}(\theta)+r\overline{x}^{[1]}(\theta)]. \tag{42}\] Case (II). Let \(\tilde{x}(\theta)\) be \(II\)-\(F^{\alpha}\)-differentiable. Then, we can write \[D^{\alpha}_{F,H}x(\theta)=[D^{\alpha}_{F,H}\overline{x}(\theta,r),D^{\alpha}_{F,H}\underline{x}(\theta,r)]. \tag{43}\] Likewise, Case (I), we have \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[0]}(\theta)=\overline{ f}^{[0]}(\theta),\\ \\ D^{\alpha}_{F,H}\overline{x}^{[0]}(\theta)=\underline{f}^{[0]}(\theta)\\ \\ \underline{x}^{[0]}(\theta_{0})=\overline{x}_{0}^{[0]},\end{array}\right. \tag{44}\] and \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[1]}(\theta)=\overline{f^{ [1]}}(\theta),\\ \\ D^{\alpha}_{F,H}\overline{x^{[1]}}(\theta)=\underline{f^{[1]}}(\theta)\\ \\ \underline{x}^{[1]}(\theta_{0})=\overline{x}_{0}^{[1]}\\ \\ \overline{x^{[1]}}(\theta_{0})=\underline{x}_{0}^{[1]},\end{array}\right. \tag{45}\] By solving the ordinary fractal differential equations (44)) and (45), one may obtain the solution of FDE (34) which is \(II\)-\(F^{\alpha}\)-differentiable as \[\tilde{x}(\theta)=[\overline{x}(\theta,r),\underline{x}(\theta,r)]. 
\tag{46}\] _Example 1_.: Consider the fractal fuzzy differential equation \[D^{\alpha}_{F,H}x(\theta)=x(\theta)+\tilde{c}, \tag{47}\] with the conditions \[x(0,r)=[r,2-r],\ \ \tilde{c}=[r-1,1-r],\quad r\in[0,1]. \tag{48}\] Here we consider two cases. Case I. Let \(\tilde{x}(\theta)\) be \(I\)-\(F^{\alpha}\)-differentiable. Then by using Eqs.(40) and (41) we arrive at \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[0]}(\theta)=\underline{x}^{[0]}(\theta)-1\\ \\ D^{\alpha}_{F,H}\overline{x}^{[0]}(\theta)=\overline{x}^{[0]}(\theta)+1\\ \\ \underline{x}^{[0]}(\theta_{0})=0\\ \\ \overline{x}^{[0]}(\theta_{0})=2\end{array}\right. \tag{49}\] and \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[1]}(\theta)=\underline{x}^{[1]}(\theta)\\ \\ D^{\alpha}_{F,H}\overline{x}^{[1]}(\theta)=\overline{x}^{[1]}(\theta)\\ \\ \underline{x}^{[1]}(\theta_{0})=1\\ \\ \overline{x}^{[1]}(\theta_{0})=1\end{array}\right. \tag{50}\] By solving Eqs.(49) and (50), we obtain \[\underline{x}^{[0]}(\theta)=-\exp(J(\theta))+1,\ \ \ \ \overline{x}^{[0]}(\theta)=3\exp(J(\theta))-1,\] \[\underline{x}^{[1]}(\theta)=\overline{x}^{[1]}(\theta)=\exp(J(\theta)). \tag{51}\] By substituting Eq.(51) into Eq.(42), we get \[\tilde{x}(\theta)=[\exp(J(\theta))(2r-1)-r+1,\ r-\exp(J(\theta))(2r-3)-1]. \tag{52}\] Case II. Let \(\tilde{x}(\theta)\) be \(II\)-\(F^{\alpha}\)-differentiable. Then, by utilizing Eqs.(44) and (45) we get \[\left\{\begin{array}{l}D^{\alpha}_{F,H}\underline{x}^{[0]}(\theta)=\overline{x}^{[0]}(\theta)+1,\\ \\ D^{\alpha}_{F,H}\overline{x}^{[0]}(\theta)=\underline{x}^{[0]}(\theta)-1\\ \\ \underline{x}^{[0]}(\theta_{0})=0\\ \\ \overline{x}^{[0]}(\theta_{0})=2,\end{array}\right. \tag{53}\] and \[\left\{\begin{array}{l}D_{F,H}^{\alpha}\underline{x}^{[1]}(\theta)=\overline{x}^{[1]}(\theta),\\ \\ D_{F,H}^{\alpha}\overline{x}^{[1]}(\theta)=\underline{x}^{[1]}(\theta)\\ \\ \underline{x}^{[1]}(\theta_{0})=1\\ \\ \overline{x}^{[1]}(\theta_{0})=1,\end{array}\right. \tag{54}\] Solving Eqs.(53) and (54) gives \[\underline{x}^{[0]}(\theta)=\exp(J(\theta))-\frac{2}{\exp(J(\theta))}+1,\ \ \ \overline{x}^{[0]}(\theta)=\exp(J(\theta))+\frac{2}{\exp(J(\theta))}-1,\] \[\underline{x}^{[1]}(\theta)=\overline{x}^{[1]}(\theta)=\exp(J(\theta)), \tag{55}\] and combining the cuts as in Case I then yields the \(II\)-\(F^{\alpha}\)-differentiable solution \[\tilde{x}(\theta)=\Big{[}\exp(J(\theta))-r+\frac{2r-2}{\exp(J(\theta))}+1,\ r+\exp(J(\theta))-\frac{2r-2}{\exp(J(\theta))}-1\Big{]}. \tag{56}\] Figure 1: Graph of Eq.(56) for \(r=0.3\) ## 5 Conclusion In this paper, we have formulated fractal fuzzy calculus, which is a generalization of fractal calculus to fuzzy number-valued functions. Fractal calculus is a generalization of ordinary calculus which involves functions with a fractal domain, such as Cantor sets and Koch curves. Fractal fuzzy differential equations can be used to model uncertainty in the initial conditions or dynamics of media with a fractal structure. Research in this direction is in progress. **Acknowledgements:** Cristina Serpa acknowledges partial funding by national funds through FCT-Foundation for Science and Technology, project reference: UIDB/04561/2020.
2303.10014
Reliability of Tumour Classification from Multi-Dimensional DCE-MRI Variables using Data Transformations
Summary mean DCE-MRI variables show a clear dependency between signal and noise variance, which can be shown to reduce the effectiveness of difference assessments. Appropriate transformation of these variables supports statistically efficient and robust comparisons. The capabilities of DCE-MRI based descriptions of hepatic colorectal tumour classification were assessed, with regard to their potential for use as imaging biomarkers. Four DCE-MRI parameters were extracted from 102 selected tumour regions. A multi-dimensional statistical distance metric was assessed for the challenging task of comparing intra- and inter-subject tumour differences. Statistical errors were estimated using bootstrap resampling. The potential for tumour classification was assessed via Monte Carlo simulation. Transformation of the variables and fusion into a single chi-squared statistic shows that inter-subject variation in hepatic tumours is measurable and significantly greater than intra-subject variation at the group level. However, reliability analysis shows that, at current noise levels, individual tumour assessment is not possible. Appropriate data transforms for DCE-MRI derived parameters produce an improvement in statistical sensitivity compared to conventional approaches. Reliability analysis shows that, even with data transformation, DCE-MRI variables do not currently facilitate good tumour discrimination and a doubling of SNR is needed to support non-trivial levels of classification.
S. V. Notley, N. A. Thacker, L. Horsley, R. A. Little, Y. Watson, S. Mullamitha, G. C. Jayson, A. Jackson
2023-03-17T14:37:53Z
http://arxiv.org/abs/2303.10014v1
# Reliability of Tumour Classification from Multi-Dimensional DCE-MRI Variables using Data Transformations ###### Abstract Summary mean DCE-MRI variables show a clear dependency between signal and noise variance, which can be shown to reduce the effectiveness of difference assessments. Appropriate transformation of these variables supports statistically efficient and robust comparisons. The capabilities of DCE-MRI based descriptions of hepatic colorectal tumour classification were assessed, with regard to their potential for use as imaging biomarkers. Four DCE-MRI parameters were extracted from 102 selected tumour regions. A multi-dimensional statistical distance metric was assessed for the challenging task of comparing intra- and inter-subject tumour differences. Statistical errors were estimated using bootstrap resampling. The potential for tumour classification was assessed via Monte Carlo simulation. Transformation of the variables and fusion into a single chi-squared statistic shows that inter-subject variation in hepatic tumours is measurable and significantly greater than intra-subject variation at the group level. However, reliability analysis shows that, at current noise levels, individual tumour assessment is not possible. Appropriate data transforms for DCE-MRI derived parameters produce an improvement in statistical sensitivity compared to conventional approaches. Reliability analysis shows that, even with data transformation, DCE-MRI variables do not currently facilitate good tumour discrimination and a doubling of SNR is needed to support non-trivial levels of classification. ## Introduction Knowledge of the gene mutations that drive cancer has led to the development of a large number of mechanism-based therapeutics (MBT). However, there is a clear need to improve trial design to limit patient exposure to ineffective drugs and to accelerate decision-making for new agents. Imaging biomarkers are particularly attractive as they allow interrogation of the whole tumour and repeated measurements over time, and support studies of inter- and intra-tumoural heterogeneity ([1, 2, 3, 4, 5, 6]). Dynamic contrast enhanced MRI (DCE-MRI) has been widely used in clinical trials of new agents and allows estimation of a number of parametric variables describing the tumour vascular micro-environment ([2]). Previous studies have described correlations between DCE-MRI characteristics and tumour grade and histological subtype ([7, 8]). These are typically weak associations, inadequate for effective patient/tumour stratification, even though statistical differences between tumour types may be seen in group comparison studies. There is a clear need to optimise the phenotypic information that we can extract from imaging data to improve the specificity and power of clinical trials using imaging biomarkers and, ideally, to provide sufficient statistical power to support personalised therapeutic decisions in individual patients. One common approach is the development of image acquisition and analysis strategies to improve the biological specificity and reliability of individual imaging derived parameters. In this work we take a complementary approach, to ensure that the information content of this multi-parametric data is fully leveraged to support decision-making by the use of efficient statistical approaches. In this study we use hepatic metastatic colorectal cancer as a model to assess the ability of multi-parametric DCE-MRI data to support stratification of phenotypic tumour variation. We show that individual DCE-MRI derived parameters typically have a low information content and are not capable of robustly classifying tumour sub-types with any fidelity. Fusion of the four DCE-MRI parameters into a single discriminating measure, to make better use of the available information, is problematic since the variables are non-commensurate. Further to this, the parameters also show a dependency between the variance of the signal measurement noise and the mean value of the variable. The use of non-linear transformations to generate variables with uniform independent measurement noise has been described previously ([9, 10, 11, 12, 13, 14, 15]). Although this technique has been recently applied to medical images ([16]), it is not widely recognised in the medical and radiologic literature. The method transforms the variables into a homoscedastic space where measurement noise is independent of the parameters. A common approach, found in the machine learning and pattern recognition literature, attempts to address these issues by scaling variables by their respective ranges or an estimated standard deviation based on the raw variables ([17, 18, 19, 20]). This does not make any attempt to model the error characteristics of the individual parameters and does not facilitate a conventional statistical difference test. In this work we use an approach presented by Notley _et al_ ([21]) that stabilizes parameter noise estimates, based on differences of repeats, transforming variables to a homoscedastic space. This allows the estimation of appropriate summary statistics ([14]), combination of non-commensurate variables, and robust identification of outliers. This allows a multi-dimensional approach, based upon the construction of a chi-squared statistic, supporting the fusion of information from multiple parameters. The method is evaluated on whole tumour data sets in the analysis of inter- and intra-subject variation. Detecting such differences is more challenging than the more common task of detecting differences between normal tissue and pathology, but in line with the use of pharmacokinetic data for the assessment of whole tumour heterogeneity ([4]). The evaluation is restricted to varying degrees to demonstrate the general validity of the approach and to avoid drawing false conclusions due to statistical biases in the data. We hypothesise that: 1) the use of appropriate modelling of the parameter-dependent characteristics of measurement error will allow transformation of parametric variables to more closely match the assumptions of standard statistical tests; and 2) that this will result in an increase in discriminatory power over the commonly used analysis approaches. Surrogate Monte Carlo datasets for varying numbers of tumour types were constructed based on the observed estimated signal and noise characteristics of the transformed variables. The reliability of correctly classifying tumour types was estimated, for varying signal-to-noise ratios, as a function of classification density (granularity), both with and without variable transformations. We believe that these results are indicative of the ability of current DCE-MRI variables to quantify phenotypic whole tumour heterogeneity both within and across subjects, and conclude that to achieve any level of practical consistency the SNR of these variables needs to be improved. ## Methods The data used in this study is from a data set collected as part of a larger drug trial.
We show that individual DCE-MRI derived parameters typically have a low information content and are not capable of robustly classifying tumour sub-types with any fidelity. Fusion of the four DCE-MRI parameters into a single discriminating measure, to make better use of the available information, is problematic since the variables are non-commensurate. Further to this the parameters also show a dependency between the variance of the signal measurement noise and the mean value of the variable. The use of non-linear transformations to generate variables with uniform independent measurement noise has been described previously ([9; 10; 11; 12; 13; 14; 15]). Although this technique has been recently applied to medical images ([16]) it is not widely recognised in the medical and radiologic literature. The method transforms the variables into a homoscedastic space where measurement noise is independent of the parameters. A common approach, found in the machine learning and pattern recognition literature, attempts to address these issues by scaling variables by their respective ranges or an estimated standard deviation based on the raw variables ([17; 18; 19; 20]). This does not make any attempt to model the error characteristics of the individual parameters and does not facilitate a conventional statistical difference test. In this work we use an approach presented by Notley _et al_ ([21]) that stabilizes parameter noise estimates, based on difference of repeats, transforming variables to a homoscedastic space. This allows the estimation of appropriate summary statistics ([14]), combination of non-commensurate variables and robust identification of outliers. This allows a multi-dimensional approach, based upon the construction of a chi-squared statistic, supporting the fusion of information from multiple parameters. The method is evaluated on whole tumour data sets in the analysis of inter and intra-subject variation. Detecting such differences is more challenging than the more common task of detecting differences between normal tissue and pathology, but in line with the use of pharmacokinetic data for the assessment of whole tumour heterogeneity ([4]). The evaluation is restricted to varying degrees to demonstrate the general validity of the approach and to avoid drawing false conclusions due to statistical biases in the data. We hypothesise that: 1) the use of appropriate modelling of the parameter dependent characteristics of measurement error will allow transformation of parametric variables to more closely match the assumptions of standard statistical tests and: 2) that this will result in an increase in discriminatory power over the commonly used analysis approaches. Surrogate Monte Carlo datasets for varying numbers of tumour types were constructed based on the observed estimated signal and noise characteristics of the transformed variables. The reliability of correctly classifying tumour types was estimated, for varying signal-to-noise ratios, as a function of classification density (granularity) both with and without variable transformations. We believe that these results are indicative of the ability of current DCE-MRI variables to quantify phenotypic whole tumour heterogeneity both within and across subjects and conclude that to achieve any level of practical consistency the SNR of the these variable needs to be improved. ## Methods The data used in this study is from a data set collected as part of a larger drug trial. 
However, in this work we use the baseline repeatability dataset to investigate the ability of DCE-MRI parameters to discriminate between tumours.

### Patient Selection

The patients included in this study were undergoing imaging with Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) as a part of a clinical trial running at our institution and had given written informed consent to participate in the study. The study had ethical approval and was carried out in accordance with standards of GCP. Patients were eligible providing they were over eighteen years of age with biopsy confirmed metastatic colorectal cancer, without previous therapy for metastatic disease and disease measuring 3cm. All patients had undergone 2 baseline DCE-MRI scans, median 4 days (range 2-7 days) prior to treatment. It is this data which has been used in the current work to investigate biological variation within tumour tissues. To help control for tumour micro-environment, only patients with liver metastases were included in our analyses. This resulted in a sample of 29 subjects with numbers of metastases varying between 2 and 6, giving 102 tumours in total.

### MRI Data Acquisition and Analysis

Data were acquired on a 1.5T Philips Intera system. The baseline T1 measurement consisted of 3 axial spoiled Fast Field Echo (gradient echo) volumes with flip angles 2, 10, 20 degrees, respectively and 4 signal averages. The dynamic series was acquired using the scanner whole body coil (Q body coil) for transmission and reception. The dynamic series consisted of 75 consecutively-acquired axial volumes with a flip angle of 20 degrees, 1 signal average, and a temporal resolution of 4.97 s. All studies maintained the same number of slices (25), field of view (375 mm \(\times\) 375 mm), matrix size (128 \(\times\) 128), TR (4.0 ms), and TE (0.82 ms) for the baseline T1 measurement images and the dynamic series itself. Slice thickness was 4 mm for small target lesions or 8 mm for larger lesions, giving superior-inferior coverage of 100 mm or 200 mm, respectively. Gadoterate Meglumine (Dotarem®) was injected intravenously (IV) by power injector at the time of the sixth dynamic acquisition at 0.2ml/kg, followed by a 20ml saline flush at a set rate of 3ml/sec. This was followed by acquisition of a post contrast T1-weighted image. VOIs were delineated by an experienced radiographer on co-registered high resolution T1- and T2-weighted images. Whole tumour volume was measured for each lesion. An arterial input function (AIF) was measured where possible; in circumstances where this was not appropriate, a population derived input function was used ([22]). Analysis was performed using in-house software (Manchester Dynamic Modelling) and the extended Tofts and Kermode pharmacokinetic model ([25]) was used to calculate the fractional volume of the extravascular extracellular space (\(v_{e}\)). The model free measurement, initial area under the gadolinium contrast curve at 60s (IAUC60), was also calculated, and voxels from tumour VOIs were included in the analysis if they demonstrated uptake of contrast, defined as IAUC60 \(>\) 0 mmol (see Appendix A for a description of contrast agent concentration estimation).
Median values of the measured parameters \(K^{trans}\), \(v_{p}\) and \(v_{e}\) were extracted from distributions obtained from the 102 selected tumour regions. The enhancing fraction (\(E_{frac}\)) was also measured and then redefined as \(E^{\prime}_{frac}=100(1-E_{frac})\). For each tumour the process was repeated to generate a repeated measures dataset.

### Optimal Data Transforms

In this work we directly consider the characteristics of the repeatability sample noise, in particular heteroscedasticity, which may be determined by comparison of matched pairs. The dependency of the noise on the measurement value may be visualised by plotting the differences of the repeat measurements as a function of their average, i.e. Bland-Altman plots. Following the method of Notley _et al_ ([21]) we use a power law transform of the form \[f(x)=x^{\theta} \tag{1}\] which was chosen empirically based upon observation of the Bland-Altman plots. The log-likelihood function was optimised with respect to \(\theta\) by exhaustive search over the range -5 to 5.

### Inter-Tumour Distance Measures

The data was first analysed using the standard method of scaling each variable by its standard deviation. A matrix of the measured variables, \(\mathbf{V}\), is defined as: \[\mathbf{V}=\begin{pmatrix}k^{trans}(1)&v_{p}(1)&v_{e}(1)&E^{\prime}_{frac}(1)\\ \vdots&\vdots&\vdots&\vdots\\ k^{trans}(N)&v_{p}(N)&v_{e}(N)&E^{\prime}_{frac}(N)\end{pmatrix} \tag{2}\] where N is the number of tumours in the dataset (\(N=102\)). A matrix \(\mathbf{R}\) was similarly defined for the repeat measurements. A distance, \(D_{stan}\), between tumours was then defined as: \[D^{j,k}_{stan}=\sum_{i=1}^{4}\frac{(\mathbf{V}_{j,i}-\mathbf{R}_{k,i})^{2}}{ \sigma^{2}_{\mathbf{V}_{*,i}}} \tag{3}\] where \(D^{j,k}_{stan}\) is the distance between the \(j\)-th and the \(k\)-th tumours in the data set and \(\sigma_{\mathbf{V}_{*,i}}\) is the standard deviation of the \(i\)-th variable (\(i\)-th column of \(\mathbf{V}\)) as measured. Data transforms, \(f(\cdot)\), were estimated and applied to each raw variable to transform the data to the homoscedastic space. The S.D. of the _noise_ for the transformed data was then estimated by subtraction of the repeat measurements from the respective initial measurements. A chi-squared variable for 4 degrees of freedom for the measured difference between tumours \(j\) and \(k\) was defined as the sum of the squares of the difference between changes in each derived DCE-MRI variable \(i\) divided by its reproducibility variance: \[\chi^{2}_{j,k}=\sum_{i=1}^{4}\frac{(f_{i}(\mathbf{V}_{j,i})-f_{i}(\mathbf{R}_{ k,i}))^{2}}{\sigma^{2}_{\mathbf{n}_{i}}} \tag{4}\] where \(f_{i}(\cdot)\) is the transformation function for the \(i\)-th variable and \(\sigma_{\mathbf{n}_{i}}\) is the standard deviation of the \(i\)-th _noise_ signal, \(\mathbf{n}_{i}\), a column vector defined as \(\mathbf{n}_{i}=f_{i}(\mathbf{V}_{*,i})-f_{i}(\mathbf{R}_{*,i})\), where \(\mathbf{V}_{*,i}\) and \(\mathbf{R}_{*,i}\) are the \(i\)-th columns of the matrices \(\mathbf{V}\) and \(\mathbf{R}\) respectively. Although equations 3 and 4 look similar it is important to note that in equation 4 the denominator is an estimate of the level of _homogeneous noise_. The \(\chi^{2}\) statistic has Poisson-like noise characteristics in that the variance of the estimate scales with the value.
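To make the transform estimation and fusion steps concrete, a minimal sketch is given below (Python/numpy; the function names are ours, and the Box-Cox-style Gaussian log-likelihood with a Jacobian term is our assumption for the objective, which may differ in detail from that used by Notley _et al_ ([21])). It performs the exhaustive search over \(\theta\) and builds the chi-squared matrix of equation 4:

```python
import numpy as np

def power_loglik(theta, v, r):
    # Gaussian log-likelihood of the repeat differences f(v) - f(r) under
    # f(x) = x**theta, with change-of-variables (Jacobian) terms for both
    # measurements; a Box-Cox-style surrogate for the objective in [21].
    d = v**theta - r**theta
    jac = (2.0 * d.size * np.log(abs(theta))
           + (theta - 1.0) * (np.log(v).sum() + np.log(r).sum()))
    return -0.5 * d.size * np.log(d.var()) + jac

def fit_theta(v, r, grid=np.linspace(-5.0, 5.0, 2001)):
    # Exhaustive search over theta in [-5, 5]; theta = 0 is degenerate.
    grid = grid[np.abs(grid) > 1e-3]
    return max(grid, key=lambda th: power_loglik(th, v, r))

def chi2_matrix(V, R, thetas):
    # Equation 4: V and R are (N, 4) arrays of first and repeat measurements;
    # thetas holds the fitted exponent for each column (variable).
    fV, fR = V**thetas, R**thetas
    sigma_n = (fV - fR).std(axis=0)              # S.D. of each noise signal n_i
    diff = (fV[:, None, :] - fR[None, :, :]) / sigma_n
    return (diff**2).sum(axis=-1)                # chi2[j, k] for tumours j and k
```

The quantity \(D_{tran}\) introduced below is then `np.sqrt(chi2_matrix(V, R, thetas) / 4)`.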
As discussed above, for statistical efficiency the variable needs to be transformed to a space where the distribution of the distance metric does not change with the value of the metric, i.e. the homoscedastic space. In this case the required transform is the square root (an approximation to the Anscombe transform). Thus, the quantity \(D^{j,k}_{tran}~{}=~{}\sqrt{\chi^{2}_{j,k}/4}\) is expected to have a mean value of 1 for data \(j\) and \(k\) which differ only due to the presence of the modelled level of measurement error (\(\sigma_{\textbf{n}}\)). The distance measures \(D_{stan}\) and \(D_{tran}\) were calculated for differences between tumours of each subject, and also for differences between tumours from different subjects. Differences were always taken to the measurement from the alternative baseline study so that the estimated reproducibility was correctly incorporated. Bootstrap resampling (1000 resamples) was used to estimate the mean and standard deviation of these distances. For each bootstrap resampled dataset, the scaling parameters were re-estimated. The statistical distance between group means was then calculated as: \[D_{stat}=\sqrt{\frac{(\mu_{inter}-\mu_{intra})^{2}}{\sigma_{inter}^{2}+\sigma_{ intra}^{2}}} \tag{5}\] where \(\mu_{inter}\) and \(\mu_{intra}\) are the inter and intra group means respectively; \(\sigma_{inter}\) and \(\sigma_{intra}\) are the respective group standard deviations. For a large number of samples, this statistical distance may be interpreted as a z-score, and significance levels (p-values) were calculated from this using the standard integration of the error function. Errors on the statistical distance and z-scores were estimated using error propagation ([28]). With such a small dataset, consisting of only 29 subjects, bias may be introduced, especially in the inter-subject distances, due to combinatorial effects of some subjects having more tumours than others. To reduce this effect we defined a Maximum Number of Tumours Per Subject (MNTSP). Distances were computed for a MNTSP ranging from 2 to 6. This approach was also used to gain insight into the performance gains made with smaller data sets.

### Reliability Analysis

The reliability of DCE-MRI parameters to stratify tumour variation was investigated by formulating the problem as a classification task based on the multi-dimensional distance measure (see above). Monte Carlo datasets were generated to match the distributions found in the measured signal and noise characteristics. The average correct classification of tumours was assessed at varying levels of categorisation and noise. For simulation purposes, metastases are assumed to be genetic clones, within each subject, with phenotypic biological variation due to variations in the microenvironment. For each tumour type/class, a clonal 'centre', in the transformed variable space, is randomly chosen, from a normal distribution, based on the measured mean and standard deviation of each variable. A ground truth dataset of individual tumours, with biological variation, was generated as random samples, with normal distribution, around each clonal centre. A full repeated measures dataset was then generated by inclusion of additive, normally distributed, measurement noise to the ground truth dataset. The corresponding heteroscedastic dataset was then generated by applying the inverse of the transform functions estimated from the real dataset.
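A minimal sketch of this Monte Carlo procedure is given below (Python/numpy; the split between the spread of the clonal centres and the within-clone biological scatter is not fully specified above, so both appear as separate assumed parameters):

```python
import numpy as np

def reliability(n_classes, mu, sigma_centre, sigma_bio, sigma_noise,
                n_tumours=102, n_trials=2000, seed=0):
    # mu, sigma_centre: per-variable mean and S.D. of the transformed data,
    # used to draw the clonal centres; sigma_bio: assumed within-clone
    # biological scatter; sigma_noise: measurement noise S.D. in the
    # homoscedastic space.  Returns the average fraction correctly classified.
    rng = np.random.default_rng(seed)
    hits = []
    for _ in range(n_trials):
        centres = rng.normal(mu, sigma_centre, size=(n_classes, len(mu)))
        labels = rng.integers(n_classes, size=n_tumours)
        truth = rng.normal(centres[labels], sigma_bio)    # ground truth tumours
        meas = truth + rng.normal(0.0, sigma_noise, truth.shape)
        # classify by the closest clonal centre under the chi-squared distance
        d2 = (((meas[:, None, :] - centres[None, :, :]) / sigma_noise)**2).sum(-1)
        hits.append((d2.argmin(axis=1) == labels).mean())
    return float(np.mean(hits))
```

The heteroscedastic comparison is obtained by pushing the ground truth through the inverse transforms \(f_{i}^{-1}\) before adding noise and classifying with \(D_{stan}\); halving `sigma_noise` reproduces the doubled-SNR condition described next.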
Surrogate repeated measures datasets were generated with the number of tumour types/classes in the range 2 to 30 for both homoscedastic and heteroscedastic datasets; two thousand surrogate datasets were generated for each. Each tumour of the noisy datasets was classified based on the closest clonal centre measured using the statistical distance described above. The average percentage of correct classifications across the two thousand surrogate datasets was computed. The analysis was repeated with the measurement noise halved (the signal-to-noise ratio doubled).

## Results

Figure 1 shows histograms of the raw variables, and Bland-Altman plots (figure 2) constructed from the repeat data show a parameter dependent measurement repeatability. As these repeat measurements were obtained from different scan sessions we can assume these estimates include all important aspects of the variation intrinsic to the process of measurement and consequently the accuracy with which we can quantify biological change. Table 1 shows the results of the computation of statistical measures on both the distance measures \(D_{stan}\) and \(D_{tran}\). The mean and standard deviation of the mean, calculated using bootstrap resampling, are shown for both intra- and inter-subject groups. The z-score and corresponding p-value for the distance between the two groups (equation 5) is also shown.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} MNTSP & Size (intra/inter) & \(\mu_{intra}\) & \(\sigma_{intra}\) & \(\mu_{inter}\) & \(\sigma_{inter}\) & Stat. Dist & p-value \\ \hline \(D_{stan}\) & & & & & & & \\ \hline 2 & 58/59 & 1.54 & 0.738 & 2.96 & 0.488 & 1.6\(\pm\)0.03 & 0.0548 \\ 3 & 75/76 & 1.46 & 0.417 & 2.83 & 0.384 & 2.4\(\pm\)0.03 & 0.0082 \\ 4 & 85/86 & 1.34 & 0.312 & 2.69 & 0.342 & 3.0\(\pm\)0.03 & 0.0013 \\ 5 & 92/93 & 1.27 & 0.272 & 2.71 & 0.319 & 3.4\(\pm\)0.03 & 0.0003 \\ 6 & 96/97 & 1.17 & 0.225 & 2.64 & 0.294 & 3.9\(\pm\)0.03 & \(<\)0.0001 \\ \hline \(D_{tran}\) & & & & & & & \\ \hline 2 & 58/59 & 1.41 & 0.193 & 2.27 & 0.173 & 3.3\(\pm\)0.03 & 0.0004 \\ 3 & 75/76 & 1.52 & 0.156 & 2.21 & 0.144 & 3.4\(\pm\)0.03 & 0.0003 \\ 4 & 85/86 & 1.49 & 0.131 & 2.10 & 0.141 & 3.1\(\pm\)0.03 & 0.0010 \\ 5 & 92/93 & 1.44 & 0.118 & 2.09 & 0.136 & 3.6\(\pm\)0.03 & 0.0001 \\ 6 & 96/97 & 1.43 & 0.112 & 2.11 & 0.127 & 4.0\(\pm\)0.03 & \(<\)0.0001 \\ \end{tabular} \end{table} Table 1: _Statistical measures made on the distance measures \(D_{stan}\) and \(D_{tran}\)._

Figure 2: _Bland-Altman plots, showing reproducibility for median values of \(k^{trans}\), \(v_{p}\), \(v_{e}\) and \(E_{frac}\). A clear parameter dependent accuracy is seen for all variables._

### Standard Distance Measure

From table 1, it can be seen that for a small data set with a MNTSP of 2 the standard method fails to find a significant difference between the groups. As the MNTSP is increased the statistical distance between the groups increases and becomes statistically significant.

### Homoscedastic Distance Measure

The data transform method was applied to each measured variable and the optimal transform values shown in table 2 were found. Figure 3 shows the histograms of the transformed variables and figure 4 shows the corresponding Bland-Altman plots. These figures show data distributions more closely conforming to a Gaussian and no evidence of any dependency of repeatability on parameter values.
The statistical distance, \(D_{tran}\), constructed for differences between tumours of each subject, and also for differences between tumours from different subjects (Figure 5), gave mean values of 1.46 \(\pm\) 0.14 and 2.16 \(\pm\) 0.18 respectively. Both of these values are statistically significantly different to the null hypothesis, that differences are entirely due to measurement noise. This provides evidence at the population level not only for inter-subject tumour heterogeneity but also for inter-tumoural heterogeneity within single subjects. Table 2 shows the standard deviation of each of the transformed variables. The distributions of the transformed and scaled measurements from all 29 subjects were found to have standard deviations between 1.7 and 2.7. As the expected value for pure noise is 1.0, this indicates that the signal pertaining to biological variation for each individual measurement is quite weak, and insufficient to allow effective separation of this tumour data. The results of the computation of statistical measures on the \(D_{tran}\) distance metric are shown in the lower part of table 1. In this case, application of the transforms results in significant increases in statistical efficiency (Stat. Dist.) over that found using \(D_{stan}\).

\begin{table} \begin{tabular}{c c c} \hline \hline & Transform Parameter \(\theta\) & \\ Variable & \((f(x)=x^{\theta})\) & Standard Deviation \\ \hline \(K^{trans}\) & 0.2 & 2.7 \\ \(v_{e}\) & 0.3 & 2.4 \\ \(v_{p}\) & 0.1 & 1.7 \\ \(100(1-E_{frac})\) & 0.6 & 1.9 \\ \hline \hline \end{tabular} \end{table} Table 2: _Transform parameters for each variable._

### Reliability Analysis

Figure 6 shows the results of the reliability analysis. For the raw data with parameter dependent noise the average classification accuracy for two tumour classes is approximately 77%. The classification accuracy decreases in a monotonic fashion as the classification density is increased. With 30 tumour classes the average accuracy is around 40%. Results are also shown for the doubling of the SNR. Transformation of the variables to the homoscedastic space significantly improves the classification accuracy in both cases.

## Discussion

In this work we have presented a statistically motivated approach to comparing multi-dimensional intra-subject whole tumour measurements to inter-subject measurements from DCE-MRI. Hepatic metastatic colorectal cancer was used as a model to assess the ability of multi-parametric DCE-MRI data to support stratification of phenotypic whole tumour variation.

Figure 3: _Transformed variables scaled to reproducibility. Unlike the original parameter distributions (Figure 1), the spread of each variable (e.g. variance) is a measure of the associated information content. Gaussian random variables containing no signal (only noise) are expected to have a Gaussian distribution with unit variance, see Table 2._

The liver is a preferential site for metastases in colorectal cancer; multiple lesions are common and metastatic disease is a common target in therapeutic trials of novel agents. Genetic heterogeneity has been described between primary tumours and metastases ([23]) and there is evidence that phenotypic and genetic heterogeneity also exists between metastatic lesions within the same patient. Goasguen and colleagues ([24]) reported significant variation in treatment response in 64% of tumour fragments derived from different metastases within a single patient. This was also associated with considerable inter-metastatic heterogeneity in levels of gene expression.
This is supported by retrospective analysis of the CAIRO and CAIRO II trials ([25]) which demonstrated mixed response to therapy in 36% of patients with multiple metastases, associated with a decreased median survival of 23.7 months compared with 36 months in patients with homogeneous response.

Figure 4: _Bland-Altman plots, showing reproducibility for transformed variables derived from \(k^{trans}\), \(v_{p}\), \(v_{e}\) and \(100(1-E_{frac})\), scaled on the x axis to units of measured reproducibility \(\sigma\). For a successful transformation the residual distributions (distribution of scatter above and below zero) should be independent of the variable. The high density value in the \(100(1-E_{frac})\) plot is due to the quantisation of this variable at 100%, which causes identical values that cannot be separated by a transformation._

Figure 5: _The distributions of statistical distances, \(D_{tran}\), for within-subject (left) and between-subject (right) tumours. Computed using a chi-squared statistic based on the transformed and scaled DCE-MRI summary variables of median \(K^{trans}\), \(v_{e}\), \(v_{p}\) and \(100(1-E_{frac})\)._

Figure 6: _Results of the Monte-Carlo reliability analysis showing the fraction of correctly classified tumours with both the \(D_{stan}\) (heteroscedastic) and \(D_{tran}\) (homoscedastic) measures. Results are shown for both the measured repeatability noise levels and for a doubling of the signal-to-noise ratio._

The growing evidence that significant biological variation exists within and between metastatic deposits implies that heterogeneity of tumour response to different therapies might be observed if discriminatory biomarkers can be developed. Repeated or multiple tissue biopsies are clearly impractical, giving rise to an increasing need for alternate non-invasive approaches. Imaging biomarkers (IB) provide a potential solution offering unique advantages over soluble or tissue-based biomarkers. Ideally, IB could be used to identify biological/genetic variations to support enrichment of clinical trial data and provide predictive information to guide therapy. However, there remain substantial technical problems associated with the use of IBs in this context. Identification of biological variability within tumours requires the calculation of reliable and robust IBs from each voxel in the tumour. In practice such pixel-by-pixel mapping is complicated by significant uncertainties (errors) related to physiological and biological variation within the tissue. For example, the accuracy of measurements of blood volume varies systematically with the measured value ([26]) and the error models associated with many IB demonstrate similar but more complex behaviour ([27]). A repeated measurements data set composed of 4 MR derived IB in hepatic colorectal metastases was collected from 29 subjects (102 tumours). The measurements have differing scales and dynamic ranges, but further to this, Bland-Altman plots show that the variables have parameter dependent noise characteristics. Consequently, use of the original variables will lead to large differences being identified inappropriately, due to the differences in noise characteristics rather than biology. This invalidates the simple use of combined data in their raw state. The overall variations seen in each DCE-MRI variable are most likely dependent on factors such as field strength, flip angle, tumour volumes etc.
In this work we use a repeated measures dataset where the MRI parameters such as flip angles, field strength etc. are constant across the dataset; tumour size and AIF were controlled as much as possible. Due to these controls it is valid to seek an empirical transformation that considers the net error distributions, based on the observed error characteristics. We assume that the error distributions observed from the repeated measures dataset contain all sources of error but that any error characteristics dependent on experimental factors are approximately constant and independent. Whilst a more sophisticated model could be constructed, based directly on all separate forms of perturbation, more data would be needed in order to estimate its parameters and any additional improvements in statistical efficiency may be negligible if the dominant trend has already been captured. A maximum-likelihood optimisation of a power law transform was used to transform variables to a space where the noise distribution is independent of the parameter value itself. Thus, the transformation of the variables has allowed them to be robustly combined in a single distance metric that operates over a Euclidean space. Any _significant_ changes in this distance may now be more accurately attributed to measurable changes. The intra-subject results show a high level of similarity between tumours with some evidence of inter-tumoural heterogeneity within single subjects. This has implications when using multiple tumours from individual subjects to boost summary statistics that assume independence. Thus, in this work, results limited to 2 metastases per subject are the most meaningful. With this limit enforced, the method is able to detect significant differences between the inter- and intra-subject groups, showing more heterogeneity between subjects than within. With a view to assessing the ability of the derived IB to quantify whole tumour heterogeneity, surrogate datasets with known ground truth were generated by Monte Carlo simulation. These datasets were used to investigate the reliability of the derived IB in terms of correctly classifying individual tumours to their known class at varying levels of classification resolution. The results of this analysis show that, for a reasonable level of classification resolution of around 20 tumour types, with the raw variables fused in the standard fashion, the derived IB are only capable of correct classification with an average rate of 50%. Combining the information from the variables as described in this work improves the classification results at all levels of classification density and is in agreement with our hypothesis: (1) homogeneous noise more closely matches the assumptions made in statistical tests based on analysis of variance, so summary statistics, such as mean and variance, generated from the analysis are now a truer reflection of the properties found in the data; (2) furthermore, by conforming better to these assumptions, the statistical efficiency of the tests is improved ([10]). The Monte Carlo simulation shows that a doubling of the SNR ([28]) for each variable significantly improves the classification rate to around 83% for 20 tumour types, and to above 85% for 10 tumour types or fewer (a 1 in 10 error rate or better). In terms of individuals this would give a level of confidence that may be of practical value in clinical use, especially with regard to subjects with multiple metastatic tumours.
In conclusion, we have demonstrated the use of appropriate data transforms and combination of DCE-MRI derived parameters that ensures the credible interpretation of statistical differences. The characteristics of the transformed variables allow principled combination of data from multiple IB to characterise individual tumour deposits, producing a significant improvement in discrimination when compared to conventional approaches. With current levels of SNR in derived IB for hepatic metastatic tumours, robust stratification/classification of tumours is not reliable. However, the work of _Krokos, 2017_ has shown that with improved models and fitting procedures a doubling of the SNR is possible. Further work in this area will investigate the validity of using standard corrections to account for the destabilising effects of haematocrit variation on DCE-MRI parameters (see Appendix A).

## Appendix A: Estimation of Contrast Agent Concentration from Signal Intensity

For a sequence that spoils the transverse magnetisation and produces \(T_{1}\) contrast, the signal intensity is given by \[S=S_{0}\,\frac{\sin(\alpha)\left(1-e^{-TR/T_{1}}\right)}{1-\cos(\alpha)\,e^{-TR/T_{1}}} \tag{6}\] where \(TR\) is the repetition time and \(\alpha\) is the flip angle. Gd induces a shift in the bulk magnetic susceptibility (BMS) and hence in the resonance frequency of the water protons. This phenomenon is caused by the local variations in the magnetic field due to an inhomogeneous Gd distribution within the vessel and in particular at the boundaries of the tissues. The \(T_{2}\) and \(T_{2}^{*}\) relaxation times are shortened, and the corresponding sequences benefit from this effect. However, for a disrupted blood brain barrier, when the contrast agent leaks into the extra-vascular extra-cellular space from the vessels, where there is a much larger water concentration, the BMS effect is reduced and \(T_{1}\) effects are dominant. In that case, the relationship between the relaxation rate \(R_{1}=1/T_{1}\) and the contrast agent concentration in blood \(C_{b}\), for standard contrast agent doses, is given by \[R_{1}=R_{10}+r_{1}C_{b} \tag{7}\] where \(r_{1}\) is the spin-lattice relaxivity constant (i.e. the ability of the contrast agent to enhance the detected signal via the increase it causes in the proton relaxation rate). For Gd it is assumed to be equal to the in vitro value of \(4.5s^{-1}mM^{-1}\). \(R_{10}\) is the relaxation rate in the absence of the contrast agent (\(1/T_{10}\)). Pre- and post-contrast measurements allow these equations to be solved to give the contrast agent blood concentration, \(C_{b}\). A detailed explanation of this process is beyond the scope of this article; for a more detailed description of the method used please see ([29]). The concentration of the contrast agent in blood can then be converted to the concentration in plasma by taking into account the haematocrit (since the contrast agent is distributed in plasma) ([30]), giving \[C_{p}=\frac{C_{b}}{1-Hct} \tag{8}\]

## Acknowledgements

The authors would like to acknowledge Dr Mark Saunders in the recruitment and referral of patients included in this study. The data used in this work was funded by an investigator-led research grant from F. Hoffmann-La Roche Ltd, the Manchester Experimental Cancer Medicine Centre.

## Funding

This work was funded by CRUK (Grant C8742/A18097). The funding source had no part in the collection, analysis or the interpretation of data, in the writing or the decision to publish this manuscript.
2310.07450
Broadband Terahertz Generation in a Corrugated Waveguide with matched Phase and Group Velocities
We show that it is possible to design corrugated waveguides where phase and group velocities coincide at an inflection point of the dispersion relation, allowing an extended regime of interaction with a charged particle beam. This provides a basis for designing travelling slow-wave structures with a broadband interaction between relativistic charged particle beams and propagating terahertz waves allowing an energy exchange between beam and wave, amplifying terahertz radiation. We employ a Fourier-Mathieu expansion, which gives approximate analytic solutions to Maxwell's equations in a corrugated waveguide with periodically undulating cross-section. Being analytic, this enables quick design of corrugated waveguides, determined from desirable dispersion relations. We design a three dimensional waveguide with the desired dispersion and confirm the analytical predictions of the wave profile, using numerical simulations. Madey's theorem is used to analyse the strength of the wave-beam interaction, showing that there is a broad frequency interaction region.
Sergey S. Siaber, Jonathan Gratus, Rebecca Seviour, Steven P. Jamison, Taylor Boyd
2023-10-11T12:51:42Z
http://arxiv.org/abs/2310.07450v1
# Broadband Terahertz Generation in a Corrugated Waveguide with matched Phase and Group Velocities

###### Abstract

We show that it is possible to design corrugated waveguides where phase and group velocities coincide at an inflection point of the dispersion relation, allowing an extended regime of interaction with a charged particle beam. This provides a basis for designing travelling slow-wave structures with a broadband interaction between relativistic charged particle beams and propagating terahertz waves allowing an energy exchange between beam and wave, amplifying terahertz radiation. We employ a Fourier-Mathieu expansion, which gives approximate analytic solutions to Maxwell's equations in a corrugated waveguide with periodically undulating cross-section. Being analytic, this enables quick design of corrugated waveguides, determined from desirable dispersion relations. We design a three dimensional waveguide with the desired dispersion and confirm the analytical predictions of the wave profile, using numerical simulations. Madey's theorem is used to analyse the strength of the wave-beam interaction, showing that there is a broad frequency interaction region.

1Dept Physics, Lancaster University, Lancaster, UK 2Cockcroft Institute of accelerator science, Daresbury, Warrington, WA4 4AD, UK 3Ion Beam Centre, University of Huddersfield, Huddersfield, UK *[email protected]

## 1 Introduction

Driven by the growing demand of applications, from material science to telecommunications, from biology to biomedicine, recent years have seen a rapid rise in the development of coherent terahertz (THz) sources. Technologies used to generate THz radiation include laser-driven emitters, solid state oscillators, gas and quantum cascade lasers. Laser driven emitters, the most widely used sources of pulsed THz radiation, are based on frequency down-conversion from the optical region. THz pulse energies exceeding 1 mJ and 10's of MW peak power have been demonstrated in periodically poled nonlinear sources driven by high energy near-IR ultrafast lasers [1], while more mainstream laser systems are capable of 10's of \(\mu\)J energies and efficiencies in the region of 0.1% [2]. In solid state oscillators the transit time of carriers through semiconductor junctions limits the frequency and power that can be generated, generating around 100 mW at 100 GHz, but the power falls off as \(f^{-2}\)[3]. Optically pumped gas lasers are the oldest technologies for THz generation, generating between 0.3 and 5 THz at around 100 mW [4]. Quantum cascade lasers are a relatively new approach to generating THz radiation, generating between 1 and 4 THz at mW power levels [5]. We consider an electron beam driven approach to THz generation. Electron beam approaches are either non-relativistic/moderately relativistic vacuum electronic devices (VED) or ultra-relativistic accelerator based radiators, where a modulated electron beam generates THz via transition, Cherenkov, Smith-Purcell, or undulator radiation. We present a moderately relativistic VED approach that uses a novel corrugated metallic waveguide as a slow-wave structure to generate THz radiation via coherent spontaneous emission. More broadly, metallic waveguides with corrugations or modulations on the metallic boundary are of wide interest for their ability to tune the EM propagation characteristics through the corrugation structure.
Corrugations in the form of rectangular grooves orthogonal to the propagation axis with sub-wavelength separation form slow-wave structures that give rise to dispersion relations similar to those of dielectric-lined waveguides, with the groove geometry and spacing determining the effective dielectric properties. In such an arrangement the dispersion relation can be obtained through a continuing sequence of mode-matching along the waveguide with step-wise changes in cross-section. The mode-matching over the repeating structure leads to a global eigenvalue problem from which the dispersion relation is obtained. In this paper, we use a waveguide which has smooth sinusoidal undulations in the waveguide cross-section; its form is shown in figure 1.

Figure 1: _Rectangular Corrugated Waveguide, with periodically undulating cross-section. It is designed to vary in such a way that phase and group velocities are very close in an extended frequency range._

This waveguide has symmetric undulations in cross-section height, given by the function \(\mathcal{L}_{x}(z)\), unlike the sine waveguide considered in [6, 7], where the cross-section shape and dimensions are constant but its centre oscillates. For this structure we can find explicit approximate solutions for the EM fields. The geometry of the waveguide structure is parametrised by four parameters. Two of these are transverse dimensions: one, denoted as \(\mathcal{L}_{x}(z)\), is varied along the waveguide and oscillates around the average value, marked as \(L_{0}\), and the other is fixed (denoted as \(L_{y}\)). The other two are the spatial period of corrugation, \(L_{z}\), and the depth of the corrugations \(q\). A theoretical technique that predicts the propagating EM mode in analytic form enables us to quickly scan this parameter space for desired dispersion and EM field properties. We applied this approach to find the parameters where group and phase velocities coincide, and do so at a point of inflection of the dispersion relation. Such a point corresponds to particle-wave phase matching at a broad range of frequencies. We call this point the _coincident inflection point_ (CIP). The research presented in this manuscript originated from the ideas of EM wave propagation in corrugated wire media [8, 9, 10]. The concept of engineering the dispersion of VEDs using a corrugated waveguide is not new. For example, some gyrotron amplifier designs utilise corrugated waveguides [11] to engineer dispersion, matching phase velocity with that of an electron beam, using a longitudinal sinusoidal modulation in a cylindrical waveguide wall together with a 3-fold helical twist in the modulation. This dispersion relation can be obtained from a perturbative coupling between uniform cross-section waveguide modes, with an anti-crossing between mode dispersions arising from the coupling [12, 13]. While gyrotron structures achieve the desired dispersion, matching wave-particle velocities, they also require a complex electron beam structure and helical propagation in high magnetic fields; the electron beam must have annular and helical propagation matching to couple to the EM mode [14]. More recently researchers have considered uniform rectangular cross-section waveguides to engineer the dispersion in VEDs, such as the piecewise sine waveguide [6] for high-power terahertz (THz) travelling wave tubes (TWTs). This topology results in multiple modes existing
simultaneously at the same point in the dispersion curve (see figure 4 in Zhang [6]), which can result in mode competition and noise on the generated RF. Other researchers have considered uniform cross-section dielectric-lined waveguides, which have found application in supporting electron acceleration driven by terahertz-frequency laser pulses [15, 16, 17, 18]. The dielectric layers allow for phase-matching of the drive-laser (THz frequency) \(LSM_{10}\) modes with sub-relativistic electron beams. However, unlike in the helical gyrotron structures, the phase-velocity matching intrinsically comes with a group-velocity mismatch, and the temporal walk-off between the EM pulse and electron beam is a limiting factor in the application of these dielectric lined waveguides to particle acceleration. In this article, we present the design of a rectangular corrugated waveguide, with periodically undulating cross-section, that enables efficient THz generation through a number of dispersion relation characteristics. Achieving these characteristics in this waveguide is enabled by our analytical technique. We seek, and find, a structure that has a dispersion that provides both group- and phase-velocity matching like a gyrotron, yet has the potential to support axial beams like uniform cross-section waveguides, without mode competition. Our approach provides a perturbative solution to the waveguide modes, from which we show that the dispersion relation is a solution to Mathieu's Floquet equation with two free parameters: a cut-off frequency \(\omega_{\rm c}\), determined by the relative dimensions \(L_{0}/L_{z}\) and \(L_{y}/L_{z}\) of the waveguide, and \(q\), which determines the relative height of the undulations. Solution of the associated eigenvalue problem provides the dispersion relation, and the Floquet coefficients describing the longitudinal spatial-harmonics of the propagating mode. Solutions with group and phase velocity simultaneously being equal, and subluminal, are shown to exist. This solution can be used for the coupling system as well as the beam/wave waveguide interaction region. Unlike previous approaches, in this paper we determine the waveguide profile from the desired dispersion relation rather than setting it a priori, i.e. we determine a family of waveguide geometries that satisfy the dispersion relation. Solutions are of the form of a slowly varying sinusoidal-like longitudinal modulation in the waveguide cross-section, and an EM mode with a longitudinal electric field on axis. In addition to matching the phase velocity and electron beam velocity, to maximise energy transfer between the electron beam and EM wave we require the spatial components of the electric field of the EM wave to be parallel with the beam propagation. The paper is structured as follows. In section 2 we give the explicit approximate EM modes for the corrugated waveguide, based on Mathieu's functions. We then demonstrate, in section 3, a method of finding the appropriate parameters \(\hat{\omega}_{\rm c}\) and \(q\), which we apply to our goal of finding the CIP. In section 4, we use numerical CST Microwave Studio simulations to confirm the accuracy of the analytic approximation and to find how the CIP moves slightly when going from this approximation to numerical simulations. In section 5 we show the force that a charged particle would experience, and apply Madey's theorem [19, 20, 21, 22] to estimate the THz energy emitted as a result of particle-wave interaction in the waveguide. We conclude by discussing future directions.
In addition, in the Supplemental Document we show how to construct the other EM modes, analyse further Mathieu's equation and how it leads to interaction zones, perform error analysis on the approximate solutions, show that there are no subluminal modes in the first interaction zone, and expand on the subject of simulating numerical field patterns and comparing them to analytical field patterns.

## 2 Explicit Solution for a corrugated waveguide profile

We consider a rectangular corrugated waveguide shown in figure 1. We assume it has two flat walls set a distance \(L_{y}\) apart. The other two walls have a symmetric undulating profile a distance \(\mathcal{L}_{x}(z)/2\) from the centre, i.e. the two walls are a distance \(\mathcal{L}_{x}(z)\) apart. The cross section dimensions are given by the constant width \(L_{y}\) and the variable height \(\mathcal{L}_{x}(z)\) which depends on the position \(z\) along the waveguide. We choose the \((x,y)\) coordinates to be \((0,0)\) in the centre of the waveguide so that the faces of the waveguide are at \(y=\pm L_{y}/2\) and \(x=\pm\mathcal{L}_{x}(z)/2\). The undulations in \(\mathcal{L}_{x}(z)\) are periodic, with a period \(L_{z}\), oscillating around the value \(L_{0}\). The exact form of the undulating height \(\mathcal{L}_{x}(z)\) of the corrugations will be given below. We solve Maxwell's vacuum equations in the frequency domain (with factor \(e^{-i\omega t}\)) \[\nabla\times\tilde{\mathbf{B}}+i\omega\,c^{-2}\tilde{\mathbf{E}}=\mathbf{0}, \nabla\cdot\tilde{\mathbf{B}}=0, \tag{1}\] \[\nabla\times\tilde{\mathbf{E}}-i\omega\tilde{\mathbf{B}}=\mathbf{\mathcal{E}} _{\text{Max}}, \nabla\cdot\tilde{\mathbf{E}}=0, \tag{2}\] together with the boundary conditions \[\tilde{\mathbf{E}}_{\parallel}|_{\text{Bdd}}=\mathbf{\mathcal{E}}_{\text{bdd}}, \tilde{\mathbf{B}}_{\perp}|_{\text{Bdd}}=\mathbf{0}, \tag{3}\] where the errors \(\mathbf{\mathcal{E}}_{\text{Max}}\) and \(\mathbf{\mathcal{E}}_{\text{bdd}}\), for our waveguide, are small. Our solution is thus an approximation, so it does not solve Maxwell's equations exactly. We consider \(\text{TM}_{p_{x}p_{y}}\) modes, with positive odd integers \(p_{x}\) and \(p_{y}\), as these are the modes with a longitudinal component of \(\tilde{\mathbf{E}}\) in the centre of the waveguide that we require. We start by specifying the ansatz for the magnetic field \(\tilde{\mathbf{B}}\), \[\tilde{\mathbf{B}}=B_{0}\,c^{-2}\,(-i\omega)\,\phi(\eta\,z)\big{(}\kappa_{y}\cos( \kappa_{x}\,x)\,\sin(\kappa_{y}\,y)\,\mathbf{e}_{x}-\kappa_{x}\,\sin(\kappa_{x}\, x)\,\cos(\kappa_{y}\,y)\,\mathbf{e}_{y}\big{)} \tag{4}\] where \(\big{\{}\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z}\big{\}}\) are the unit vectors along the axes, \[\kappa_{x}(z)=\frac{\pi\,p_{x}}{\mathcal{L}_{x}(z)},\quad\kappa_{y}=\frac{\pi \,p_{y}}{L_{y}},\,\text{ and }\quad\eta=\frac{\pi}{L_{z}}. \tag{5}\] Trivially, \(\tilde{\mathbf{B}}\) satisfies the divergence equation in (1). Substituting \(\tilde{\mathbf{B}}\) into the curl equations and combining these into one equation yields the 2nd order ODE for \(\phi(\eta\,z)\) \[\phi^{\prime\prime}(\eta\,z)+\eta^{-2}\,\big{(}c^{-2}\omega^{2}-\kappa_{x}^{2 }-\kappa_{y}^{2}\big{)}\,\phi(\eta\,z)=0. \tag{6}\]
From the curl equation in (1), the electric field \(\tilde{\mathbf{E}}\) is \[\begin{split}\tilde{\mathbf{E}}=B_{0}\,\Big{(}\big{(}\kappa_{x}^{ \prime}\,\sin&(\kappa_{x}\,x)+\kappa_{x}^{\prime}\kappa_{x}\,x\, \cos(\kappa_{x}\,x)\,\big{)}\phi(\eta\,z)+\eta\kappa_{x}\,\sin(\kappa_{x}\,x) \,\phi^{\prime}(\eta\,z)\Big{)}\cos(\kappa_{y}\,y)\,\mathbf{e}_{x}\\ &+B_{0}\,\kappa_{y}\,\Big{(}-\kappa_{x}^{\prime}\,x\,\sin(\kappa_ {x}\,x)\,\phi(\eta\,z)+\eta\cos(\kappa_{x}\,x)\,\phi^{\prime}(\eta\,z)\Big{)} \sin(\kappa_{y}\,y)\,\mathbf{e}_{y}\\ &-B_{0}\,(\kappa_{x}^{2}+\kappa_{y}^{2})\,\cos(\kappa_{x}\,x)\, \cos(\kappa_{y}\,y)\phi(\eta\,z)\mathbf{e}_{z}\end{split} \tag{7}\] We see that \(\tilde{\mathbf{B}},\tilde{\mathbf{E}}\), given by eqs. (4) and (7), automatically satisfy the divergence equation in (2) since \(\nabla\cdot\tilde{\mathbf{E}}=\nabla\cdot\big{(}c^{2}(i\omega^{-1})\nabla\times \tilde{\mathbf{B}}\big{)}=0\). In the Supplemental Document, section SD2, we give the formula for the error \(\mathbf{\mathcal{E}}_{\text{Max}}\). We also calculate the error \(\mathbf{\mathcal{E}}_{\text{bdd}}\) on the boundary, and we show that these errors are small if the waveguide profile has a shallow gradient. We note that in the centre of the waveguide, where \(x=0\), then \(\mathbf{\mathcal{E}}_{\text{Max}}|_{x=0}=0\). The general solution to Maxwell's equations for our waveguide is given by the sum of TM and TE modes. These are briefly discussed in the supplementary document, section SD4. We presume that most of the electrons are travelling along the centre of the waveguide, where the solution is more accurate. Equation (6) can be rewritten in the form of the Mathieu equation (here we define the scaled variable \(\zeta:=\pi z/L_{z}=\eta\,z\)) \[\phi^{\prime\prime}(\zeta)+\big{(}a-2q\cos(2\zeta)\big{)}\phi(\zeta)=0. \tag{8}\] Indeed, by stipulating an identity between corresponding terms in (8) and (6), and substituting the explicit forms of \(\eta\), \(\kappa_{y}\) and \(\kappa_{x}\), we get \[\begin{split} a-2q\cdot\cos(2\zeta)&\equiv\eta^{-2} \,\big{(}c^{-2}\omega^{2}-\kappa_{x}^{2}-\kappa_{y}^{2}\big{)}\\ &=L_{z}^{2}\,\big{(}c^{-2}\pi^{-2}\omega^{2}-p_{x}^{2}\mathcal{L} _{x}(z)^{-2}-p_{y}^{2}L_{y}^{-2}\big{)}\\ &=L_{z}^{2}\,\big{(}c^{-2}\pi^{-2}\omega^{2}-p_{x}^{2}L_{0}^{-2}- p_{y}^{2}L_{y}^{-2}\big{)}-L_{z}^{2}\,\big{(}p_{x}^{2}\mathcal{L}_{x}(z)^{-2}-p_{x}^{2}L_{ 0}^{-2}\big{)},\end{split} \tag{9}\] where we choose the separation into the dimensionless Mathieu parameters \(a\) and \(q\) as \[a=L_{z}^{2}\,(c^{-2}\pi^{-2}\omega^{2}-p_{x}^{2}L_{0}^{-2}-p_{y}^{2}L_{y}^{-2}), \tag{10}\] \[2q\,\cos(2\zeta)=L_{z}^{2}\,(p_{x}^{2}\mathcal{L}_{x}(z)^{-2}-p_{x}^{2}L_{0}^{- 2}). \tag{11}\] In the following we choose a constant \(q\), which defines the waveguide profile \[\mathcal{L}_{x}(\zeta)=\left(L_{0}^{-2}+2L_{z}^{-2}\,p_{x}^{-2}\,q\,\,\cos(2 \zeta)\right)^{-1/2} \tag{12}\] Under the assumption that \(q\,L_{0}^{2}\,L_{z}^{-2}\) is small, the waveguide profile \(\mathcal{L}_{x}\) simply takes the form of a sinusoidally undulating function \[\mathcal{L}_{x}(\zeta)\approx L_{0}-L_{0}^{3}\,L_{z}^{-2}\,q\,\,\cos(2\zeta) \tag{13}\] In this form, it is evident that \(L_{0}\) is the average height of the waveguide and the parameter \(q\) determines the height of the corrugations. For the example waveguides to be discussed here we will choose \(q=0.1\) and \(L_{0}/L_{z}\approx 1\). Solutions of equation (8) are the Mathieu special functions.
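For reference, the profile of equation (12) and its small-\(q\) approximation (13) are easy to evaluate directly; a minimal sketch (Python/numpy; the example dimensions are those quoted for the simulated structure later in the text) is:

```python
import numpy as np

def profile(z, L0, Lz, q, px=1):
    # Corrugation height L_x(z) from equation (12), with zeta = pi z / Lz.
    zeta = np.pi * z / Lz
    return (L0**-2 + 2.0 * q * np.cos(2.0 * zeta) / (px**2 * Lz**2))**-0.5

L0, Lz, q = 0.5e-3, 0.475e-3, 0.1        # metres; values used in section 4
z = np.linspace(0.0, Lz, 201)            # one corrugation period
sinusoid = L0 - L0**3 * q * np.cos(2.0 * np.pi * z / Lz) / Lz**2   # eq. (13)
print(np.max(np.abs(profile(z, L0, Lz, q) - sinusoid)))  # small when q L0^2/Lz^2 << 1
```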
These Mathieu solutions, as stated in Floquet's Theorem [23], can be presented in the form \[\phi_{a,q}(\zeta)=e^{i\hat{k}\,\zeta}\,\Psi_{a,q,\hat{k}}(\zeta), \tag{14}\] where \(\Psi_{a,q,\hat{k}}(\zeta)\) is periodic, \[\Psi_{a,q,\hat{k}}(\zeta+\pi)=\Psi_{a,q,\hat{k}}(\zeta), \tag{15}\] and where we have introduced the subscripts on \(\phi_{a,q}(\zeta)\) to make the dependencies explicit. \(\hat{k}\) is the Mathieu exponent, with a value that is determined by \(a\) and \(q\). In this form we see that \(\hat{k}\) may be considered as a dimensionless wavevector describing the wave propagation, related to the wavevector by \(k=\eta\hat{k}=\pi\hat{k}/L_{z}\). The relationship between \(k\) and \(\omega\), or equivalently between \(\hat{k}\) and \(a\), therefore defines the dispersion relationship for the travelling wave solutions to equations (6) and (8), while \(\Psi_{a,q,\hat{k}}(\zeta)\) provides a structure function for the field variation within a single period or cell of the undulating waveguide. The shape of the corrugated waveguide is determined by four parameters: the length of one period of the corrugation \(L_{z}\), the width of the waveguide \(L_{y}\), the separation of the corrugated surfaces at the mid point of the corrugation \(L_{0}\), and \(q\), a parameter related to the depth of the corrugation. Once these four parameters are chosen this defines a dispersion relation between the angular frequency \(\omega\) and the wavevector \(k\) for the corresponding EM wave. In practice, rather than specifying a frequency \(\omega\), or equivalently \(a\), and then evaluating the corresponding \(\hat{k}\), we perform the calculation in reverse. Specifying \(\hat{k}\) and the structure geometry, the corresponding value of \(a\) can be determined from an eigenvalue evaluation [24]. An example of the dispersion relation obtained for \(q=0.1\) is shown in Figure 2. To highlight the generality we have displayed the dispersion relation with the following dimensionless parameters, \[\hat{\omega}=\frac{L_{z}}{\pi\,c}\,\omega\qquad\text{and}\qquad\hat{\omega}_{ \text{c}}=\left(\frac{L_{z}^{2}}{L_{0}^{2}}+\frac{L_{z}^{2}}{L_{y}^{2}}\right) ^{1/2}, \tag{16}\] so that from equation (10) the Mathieu parameter \(a\) becomes \[a_{\hat{k},q}=\hat{\omega}^{2}-\hat{\omega}_{\rm c}^{2} \tag{17}\] Thus, the number of parameters for the dispersion relation in the waveguide reduces to just two, \(\hat{\omega}_{\rm c}\) and \(q\). For small values of \(q\), such as we are considering here, \(\hat{\omega}_{\rm c}\) may be regarded as the normalised cut-off frequency. Given these two parameters we can quickly generate the scaled dispersion relation for \(\hat{\omega}\) and \(\hat{k}\). One of the main advantages of this approach is that we can quickly scan the parameter space in order to find a desirable dispersion relationship. A wide range of behaviour can be found through the parameterisation of equations (16-17) and a numerically straightforward eigenvalue evaluation of \(a_{\hat{k},q}\).

## 3 Particle-wave synchronism

For an extended wave-particle interaction it is necessary to find a propagating EM mode in which the charged particle and the electromagnetic phase wave remain in phase, with the particle velocity (\(c\beta_{e}\)) and phase-velocity (\(c\hat{\omega}/\hat{k}\)) being equal.
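This matching can be explored directly from the eigenvalue formulation above. Substituting \(\phi=e^{i\hat{k}\zeta}\sum_{n}c_{n}e^{2in\zeta}\) into equation (8) gives \(a\,c_{n}=(\hat{k}+2n)^{2}c_{n}+q\,(c_{n-1}+c_{n+1})\), a tridiagonal eigenvalue problem. A minimal sketch of its evaluation (Python/numpy; the matrix truncation and the selection of the lowest band are our own choices) is:

```python
import numpy as np

def mathieu_a(k_hat, q, n_modes=25):
    # a c_n = (k_hat + 2n)^2 c_n + q (c_{n-1} + c_{n+1}): truncated Hill
    # matrix for equation (8); the lowest eigenvalue gives the band plotted
    # in figure 2, whose zones are the spatial harmonics k_hat + 2n.
    n = np.arange(-n_modes, n_modes + 1)
    H = (np.diag((k_hat + 2.0 * n)**2)
         + q * np.eye(n.size, k=1) + q * np.eye(n.size, k=-1))
    return np.linalg.eigvalsh(H)[0]

def omega_hat(k_hat, q, omega_c):
    # dimensionless dispersion relation from equation (17): a = w^2 - w_c^2
    return np.sqrt(mathieu_a(k_hat, q) + omega_c**2)

def velocities(k_hat, q, omega_c, h=1e-4):
    # phase and group velocity in units of c, the latter by central differences
    vp = omega_hat(k_hat, q, omega_c) / k_hat
    vg = (omega_hat(k_hat + h, q, omega_c)
          - omega_hat(k_hat - h, q, omega_c)) / (2.0 * h)
    return vp, vg

# third zone near the CIP of figure 2; both values are expected close to 0.53
print(velocities(2.857, 0.1, 1.255))
```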
We aim to find more stringent solutions where, for certain \(\hat{k},\hat{\omega}(\hat{k}),\hat{\omega}_{\rm c}\), \(q\) values, the phase-velocity and the group velocity both match the particle velocity \[\frac{\hat{\omega}}{\hat{k}}=\frac{d\hat{\omega}}{d\hat{k}}=\beta_{e} \tag{18}\] where \(\hat{\omega}(\hat{k})\) depends on the values of the parameters \(\hat{\omega}_{\rm c}\) and \(q\).

Figure 2: _Corrugated waveguide dispersion relation (blue line). The light line is shown in pink and the three black lines represent three particle beam velocities that would interact with a monochromatic EM wave in: the first zone (the corresponding hypothetical particle beam velocity would have been \(1.8c\)), the second zone (similarly, the hypothetical \(\beta_{0}\) would have been \(1.29\)), and the third zone where the matching velocity is subluminal (particle beam \(\beta_{0}=0.53\))._

In order for the interaction to occur in a range of frequencies (as opposed to a single frequency), we seek to simultaneously satisfy a third constraint: zero group velocity dispersion (GVD), formulated as having an inflection point on the dispersion curve when phase and group velocities are matched, i.e. \[\frac{\hat{\omega}}{\hat{k}}=\frac{d\hat{\omega}}{d\hat{k}}\quad\text{and}\quad \frac{d^{2}\hat{\omega}}{d\hat{k}^{2}}=0 \tag{19}\] Such an EM mode maintains in-phase interaction, is free of EM pulse walk-off, and minimises the pulse dispersion within the waveguide. We refer to frequencies and wavenumbers that satisfy these conditions as the coincident inflection point (CIP). We regard the CIP as an important design and operating point that enables enhanced broadband interaction with an electron beam and, provided initial synchronicity, continuous energy transfer from beam to wave or from wave to beam. In the dispersion diagram of figure 2 we show the \(\hat{\omega}\)-\(\hat{k}\) relationship for particles that would be phase-synchronised at a normalised frequency of \(\hat{\omega}=1.5\) (this corresponds to \(f\simeq 474\,\text{GHz}\) in the example structure described in detail later), and for normalised cut-off frequency \(\hat{\omega}_{\text{c}}=1.255\) and corrugation parameter \(q=0.1\). The first zone corresponds to a particle traversing a single cell in one oscillation period; the 2nd zone to two oscillations per cell traversal, and the 3rd zone to three oscillations per traversal. Synchronism in the first zone is generally not possible, as it requires a superluminal particle, as is also the case for the 2nd zone in this example. For the 3rd zone however, synchronism can be obtained for a particle velocity of \(\beta_{e}=0.53\) (corresponding to approximately \(92\,\text{keV}\) electrons). The intercept of the particle and wave dispersion lines (matched phase-velocity), with parallel tangents (matched group velocity), occurs at the point of inflection in the wave dispersion (zero GVD). It therefore represents a CIP for \(\beta_{e}=0.53\), and \(\hat{\omega}=1.5,\hat{\omega}_{\text{c}}=1.255,q=0.1\). More generally, for a given waveguide undulation scale parameter \(q\), we find one unique CIP. That is, for a given \(q\), we can find the corresponding unique values of \(\hat{\omega}\), \(\hat{k}\) and \(\hat{\omega}_{\text{c}}\) for the CIP. For a \(q\) range from \(0.0\) to \(0.3\), the corresponding CIP synchronous particle velocity extends from \(0.56c\) to \(0.47c\). The third zone of the dispersion relation for the ends of this range is shown in Fig. 3(a), together with the synchronous particle lines.
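One way to locate the CIP numerically, reusing `omega_hat` and `velocities` from the sketch above (the zone-three bracket and the \(\hat{\omega}_{\rm c}\) search interval are guesses that may need adjusting):

```python
from scipy.optimize import brentq

def gvd(k_hat, q, omega_c, h=1e-3):
    # d^2 w / d k^2 by central differences
    return (omega_hat(k_hat + h, q, omega_c) - 2.0 * omega_hat(k_hat, q, omega_c)
            + omega_hat(k_hat - h, q, omega_c)) / h**2

def inflection_mismatch(omega_c, q):
    # inflection point of the third zone, then the phase/group mismatch there
    k_inf = brentq(lambda k: gvd(k, q, omega_c), 2.05, 2.95)
    vp, vg = velocities(k_inf, q, omega_c)
    return vp - vg, k_inf, vp

def find_cip(q, wc_lo=1.0, wc_hi=1.5):
    # scan the cut-off frequency until phase and group velocities coincide
    # at the inflection point, i.e. both conditions of equation (19) hold
    wc = brentq(lambda w: inflection_mismatch(w, q)[0], wc_lo, wc_hi)
    _, k_inf, beta = inflection_mismatch(wc, q)
    return wc, k_inf, beta

print(find_cip(0.1))   # expected near omega_c = 1.255, beta = 0.53 from the text
```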
Correspondence between the particle velocities and the parameters \(q\) and \(\hat{\omega}_{\text{c}}\) for which the CIP occurs is shown in Fig. 3(b) and (c) respectively. We also observe that for \(q\to 0\), there is an absolute maximum velocity for the CIP, \(v=0.5754c\).

Figure 3: _End of range dispersion relations (black lines) and CIP particle lines (blue lines) are shown in subfigure (a). The CIP velocity dependence on \(q\) values is shown in subfigure (b). The relation between the CIP parameters \(q\) and \(\hat{\omega}_{\text{c}}\) is given in subfigure (c)._

## 4 Numerical simulation

Using numerical simulations is an alternative way of calculating the dispersion relation of the waveguide, as well as checking the validity of the analytical predictions. We employ the commercial numerical simulation software package CST, which solves discretised Maxwell integral equations on a tetrahedral mesh. We simulate the corrugated waveguiding structure described above, one or several periods long, depending on the particular aim of the simulation. We aim to numerically find the CIP theoretically predicted in the previous section, and to numerically confirm the behaviour of the EM modes suitable for the particle interaction. We start by simulating a ten period (10 \(L_{z}\)) long waveguide using an eigenmode solver. We set boundary conditions as perfect electric conductor (PEC) on the walls of the waveguide, and as quasiperiodic conditions at the entrance and exit of the waveguide, i.e. \[\boldsymbol{\hat{E}}_{\parallel}|_{\text{Bdd}}=0,\qquad\boldsymbol{\hat{E}} \left(z=0\right)=e^{i\theta}\boldsymbol{\hat{E}}\left(z=10L_{z}\right). \tag{20}\] Firstly, in the simulation results, we find a field pattern that corresponds to a TM mode: the \(\mathbf{E}\)-field is concentrated in the centre of the waveguide and collinear with the \(z\) axis, and the \(\mathbf{B}\)-field is confined in the cross-section of the waveguide. An example of such a field pattern, namely cross-sections of the \(\mathbf{E}\)-field in the \(x-z\) and \(y-z\) planes and the \(\mathbf{B}\)-field in the \(x-y\) plane, found in the CST simulations, is given in Figure 4. The geometrical parameters of the waveguide used in the simulations correspond to the numerical CIP (the parameter values are \(L_{z}=0.475\)mm, \(L_{y}=1\)mm, \(L_{0}=0.5\)mm and \(q=0.1\), and the synchronous particle velocity in this case is \(0.46c\)).

Figure 4: _Example of the simulated electric and magnetic field for the corrugated structure. We see that the \(\mathbf{E}\)-field is strongest at the centre of the waveguide; it is also collinear with the \(z\)-axis._

We proceed to compare the analytical and numerical predictions for the dispersion relations and the field profiles in the centre of the waveguide. The analytical dispersion (shown as a black line in figure 5(d)) is calculated as a Mathieu exponent, while the numerical dispersion (orange line) \(\omega(k(\theta))\) is found by varying the phase \(\theta\) in the quasiperiodic boundary conditions (20) and using the wavenumber-phase relation \(\hat{k}=2+\theta/(10\pi)\). The third zone of the dispersion relation, as predicted by the analytical calculation and the numerical simulations, is given in Figure 5(d). The line of a synchronous particle, travelling at \(0.46c\), is shown in blue. We note good agreement between the numerical and analytical dispersion, with the exception of a region close to the bandgap. To compare the field profiles in the centre of the waveguide obtained in numerical calculations with those predicted by the analytical theory, we need to set a definite phase of the EM mode in the waveguide.
To this end, we consider a 6-period long corrugated structure with PEC conditions at the entrance and exit of the waveguide, i.e. we find the standing modes of a resonator, rather than a waveguide. This enables us to find the shapes of the longitudinal electric field \(E_{z}(z)\) in the centre of the waveguide for several points in the third zone of the dispersion, with wavenumbers \(\hat{k}=2.17\), \(\hat{k}=2.5\) and \(\hat{k}=2.83\). The results, calculated for the analytical CIP parameters (\(L_{z}=0.475\)mm, \(L_{y}=1\)mm, \(L_{0}=1\)mm, and \(q=0.1\)), are shown in Figure 5(a)-(c). We observe very good agreement for \(\hat{k}=2.17\) and \(\hat{k}=2.5\), and acceptable agreement in the field shape even for the point \(\hat{k}=2.83\), where the discrepancy between the analytical and numerical dispersion curves becomes greater, as we are nearing the bandgap.

Figure 5: _Longitudinal field profiles \(E_{z}\)(\(x\!=\!0,y\!=\!0,z\)) along the centre of the waveguide are shown in subfigures (a)-(c). On the left, the longitudinal component of the electric field along the centre of the waveguide for the 6-period long structure at 3 different wavenumber values: numerical (orange) versus predicted analytical (black dashed). Three pairs of corresponding points on the numerical (orange) and analytical (black solid) dispersion curves are shown on the right (d). We observe good agreement between the field patterns for most of the zone, apart from the region of higher frequencies as we start approaching the bandgap._

Finally, to check how well the CIP conditions (19) are met, we calculate the phase and group velocities from the analytical and numerical dispersions, shown in Figure 6(a). In Fig. 6(b) we show how the numerical phase velocity (orange dashed line), the numerical group velocity (orange solid line), and the particle velocity (blue line) are matched at the CIP. The analytical results are calculated for the parameters that correspond to the numerical CIP (\(q=0.1\), \(\hat{\omega}_{\mathrm{c}}=1.06\)). It is evident that the analytical phase velocity (black dashed line) is close to the numerical results, whereas there is a considerable discrepancy in the group velocity. We observe that an exact CIP is a strong condition on the shape of the dispersion curve. To achieve an exact CIP for the numerical 3D dispersion we had to change the value of \(L_{0}\) from 0.41mm to 0.5mm compared to the analytical CIP (i.e. changing \(\hat{\omega}_{\mathrm{c}}\) from 1.255 to 1.06). This results in the CIP synchronous particle velocities and frequencies being different: \(\beta_{e}=0.46\) and 394 GHz for the numerical CIP, and \(\beta_{e}=0.53\) and 474 GHz for the analytical CIP. Both the numerical and analytical estimates rely on different assumptions and are approximations of the real system.

## 5 Waveguide Mediated Particle Wave Interaction

In this section, we employ two approaches to examine the particle-wave interaction as described in section 2 above. The first is based on a single particle model that remains in phase with the EM wave. This demonstrates the relationship between the dispersion relation and the velocity of the particle. The second approach is based on a perturbation technique, commonly referred to as Madey's theory [21]. This approach combines a perturbation expansion of the Lorentz equation and averaging over the initial phases to study the energy exchange between beam and wave.
As shown in the previous section, the \(E_{z}\) field shape obtained from the analytical model closely resembles the field shape produced in the numerical simulations, and in this section we will use the analytical field expressions to estimate the wave-particle interaction in the structure. To examine the interaction between charged particles and EM waves in our structure, we start by considering a single particle (charge \(Q\)) moving along a trajectory in the centre of the waveguide.

Figure 6: Dispersion relations, numerical (orange) and analytical (black), for the numerical CIP parameters are shown in subfigure (a). In subfigure (b), the numerical phase (dashed orange) and group (solid orange) velocities and the synchronous particle line (solid blue) are shown together with the analytical predictions (parameter values \(q=0.1\), \(\hat{\omega}_{\mathrm{c}}=1.06\)) of the phase and group velocities (dashed and solid black lines, respectively). The numerical CIP itself is marked as a blue point.

In general, the on-axis (\(x=0\), \(y=0\)) electric field \(E(t,z)\) within the waveguide at time \(t\) and position \(z\) is given by the inverse Fourier transform \[E(t,z)=\frac{1}{2\pi}\int\,\tilde{\mathbf{E}}(\omega,z)e^{-i\omega t}d\omega\] A particle that enters the waveguide at time \(t_{0}\) will be at position \(z_{p}\) at time \(t=t_{0}+z/c\beta_{e}\equiv t_{p}\). Therefore the field experienced by the particle as it travels through the waveguide is \[E(t_{p},z_{p})=\frac{1}{2\pi}\int\,\tilde{\mathbf{E}}(\omega,z_{p})e^{-i\omega(t_{0 }+z/c\beta_{e})}d\omega\] Substituting the on-axis field from equation (7), \[\tilde{\mathbf{E}}(\omega,z_{p})=B_{0}(\kappa_{x}^{2}+\kappa_{y}^{2})\int\,e^{ikz }\Psi_{a,q,\hat{k}}\big{(}\pi\,L_{z}^{-1}\,z\big{)},\] the field experienced becomes \[E(t_{p},z_{p})=\frac{1}{2\pi}B_{0}(\kappa_{x}^{2}+\kappa_{y}^{2})\int\,e^{ikz _{p}}\Psi_{a,q,\hat{k}}\big{(}\pi\,L_{z}^{-1}\,z_{p}\big{)}e^{-i\omega(t_{0}+z _{p}/c\beta_{e})}\rho(\omega)d\omega \tag{21}\] where we have introduced the spectral density of the waveguide field, \(\rho(\omega)\). For a monochromatic field with \(\rho(\omega)=\delta(\omega-\omega_{s})+\delta(\omega+\omega_{s})\) at the phase-matched frequency \(\omega_{s}\), we obtain the effective field to be \[\begin{split} E(t_{p},z_{p})&=\frac{1}{\pi}B_{0}( \kappa_{x}^{2}+\kappa_{y}^{2})\text{Re}\left[\Psi_{a,q,\hat{k}}\big{(}\frac{ \pi\,z_{p}}{L_{z}}\big{)}\right]\cos(\omega_{s}t_{0})\\ &=\pi B_{0}L_{z}^{-2}\left(\hat{\omega}_{\text{c}}^{2}+2\,q\,\cos (2\pi\,L_{z}^{-1}\,z_{p})\right)\text{Re}\left[\Psi_{a,q,\hat{k}}\Big{(}\frac{ \pi\,z_{p}}{L_{z}}\Big{)}\right]\cos(\omega_{s}t_{0})\end{split} \tag{22}\]

Figure 7: The force felt by an in-phase electron in the first four zones. The strength of the electric field is \(1\text{Vm}^{-1}\) at the peak part of the waveguide. The zone is determined by the speed of the particle. The first and second zones are inaccessible since they correspond to superluminal particles. Here \(q=0.1\), \(\hat{\omega}_{\text{c}}=1.255\) and \(\hat{\omega}=1.513\). First zone (black), \(\hat{k}=0.857\), \(\beta_{e}=1.766\), total force \(f_{\text{P}}=4.81\); observe that the force is \(>0\) for all \(\zeta\). Second zone (blue), \(\hat{k}=1.143\), \(\beta_{e}=1.324\), \(f_{\text{P}}=-0.516\). Third zone (red), \(\hat{k}=2.857\), \(\beta_{e}=0.530\), \(f_{\text{P}}=0.244\); this is at the CIP. Fourth zone (green), \(\hat{k}=3.142\), \(\beta_{e}=0.481\), \(f_{\text{P}}=-0.043\).
where we have used the velocity phase-matching condition \(k=\omega_{s}/c\beta_{e}\) and the expansion of \(\kappa_{x}\) from equations (5) and (12). The energy gain of a particle traveling on-axis over one structure period is therefore \[U =Q\int_{0}^{L_{z}}E(t_{p},z_{p})\,dz_{p}\] \[=\frac{Q\,B_{0}\pi\,\cos(\omega_{s}t_{0})}{L_{z}^{2}}\int_{0}^{L_ {z}}\left(\hat{\omega}_{\rm c}^{2}+2\,q\,\cos\left(\frac{2\pi\,z_{p}}{L_{z}} \right)\right){\rm Re}\left[\Psi_{a,q,\xi}\Big{(}\frac{\pi\,z_{p}}{L_{z}}\Big{)} \right]\,dz_{p} \tag{23}\] From Figure 7 we see that the energy depends on which zone the interaction occurs in, determined by the velocity of the particle. In all cases \(f_{\rm P}\) is highest in the first zone, since there is no counter force. However, we show in the Supplemental Document, section SD3, that this is impossible, as it requires superluminal particle velocities. The total interaction of the backward wave (second zone) is comparable to that of the forward wave (third zone); however, the backward wave would also need superluminal particle velocities. By comparison, the forward wave is subluminal and has a very good interaction between wave and particles, equal to 12% of the first-zone interaction. Over several periods the energy of the particle shows a net decrease, as seen in Figure 8, and a particle synchronised with the opposite phase would exhibit a net increase in energy. We extend this examination by considering the energy transfer between a charged particle beam and the EM wave using Madey's theorem [20, 21], which has been used in systems ranging from free electron lasers [21, 25], to conventional traveling wave tubes [21], to metamaterial vacuum electronic devices [22]. Madey's theorem relates the phase-averaged energy spread to the phase-averaged energy change experienced by a charged particle as it propagates through a system. The theory uses the first two terms of a perturbation expansion of the Lorentz force based on a single-charged-particle analysis where (ignoring space charge) a uniformly distributed charged particle beam is injected into the system. The theorem draws links between the spontaneous emission of photons by a single electron passing through the structure and the stimulated emission of photons. The first perturbation energy term of the Lorentz equation, \(\gamma_{1}\), is taken at entry and exit from the structure, and the difference \(\Delta\gamma_{1}\) is averaged over the phase of the EM field to yield equation (24).

Figure 8: Change in kinetic energy of a single electron, from 92 keV (i.e. a synchronous beam, \(\beta_{0}=0.53\), third zone in Figure 7), in an RF field of 1 Vm\({}^{-1}\), over four periods in the waveguide. Here the waveguide is at the analytical CIP for \(q=0.1\), and geometric parameters \(L_{z}=0.475\)mm, \(L_{y}=1\)mm, \(L_{0}=1\)mm. The blue line is the average energy loss.

This change in energy \(\Delta\gamma_{1}\) relates to the classical spontaneous power spectrum of the beam from Maxwell's equations [21], \[\left\langle\Delta\gamma_{1}^{2}\right\rangle=\left\langle\left(\int\!dz\frac{- QE_{z}\beta_{0}}{m_{0}c}\right)^{2}\right\rangle. \tag{24}\] The second-order term relates the energy change in the beam to a stimulated emission response, a consequence of a generalized framework in Hamiltonian mechanics [19], given by \[\left\langle\Delta\gamma_{2}\right\rangle=\frac{1}{2}\frac{d}{d\gamma}\left \langle\Delta\gamma_{1}^{2}\right\rangle.
\tag{25}\] In a simplistic model we can consider the electron beam as \(N\) electrons entering the system every second; this enables us to write the power change in the EM wave as \[\Delta P=-\frac{1}{2}\frac{d}{d\gamma}\left\langle\Delta\gamma_{1}^{2}\right\rangle m _{0}c^{2}N. \tag{26}\] Using (26), we calculate the power change in the 10-period-long waveguide (i.e. the structure is 10 \(L_{z}\) long). We assume that the electron beam is in the centre of the waveguide and comprises \(N=10^{4}\) electrons; the EM field amplitude is taken as 1 V/m. We perform the calculation for two beam energies: \(\beta_{0}=0.53\), which corresponds to the CIP as in Fig. 2, and \(\beta_{0}=0.55\), which corresponds to a single intersection point in the third zone. The third-zone dispersion relation and the beam line for \(\beta_{0}=0.53\), shown in Fig. 9(a), demonstrate a considerable beam-wave interaction frequency interval. This interaction corresponds to the beam generating an EM wave, as can be seen in Fig. 9(c), where, in the \(\hat{\omega}\) frequency interval from approximately 1.32 to 1.54, the positive sign of \(\Delta P\) corresponds to power transferred from the electrons to the EM wave.

Figure 9: In subfigures (a) and (b), beam lines (black) and third-zone waveguide dispersion relations (blue) are shown for the coincident inflection point and for a simple single intersection, respectively. The EM wave power change, calculated using Madey's theorem, is shown for the CIP (c) and for the single intersection (d).

For the geometric dimensions of the waveguide used in the CST simulation, the corresponding frequency generation interval is 77 GHz: from 419 to 496 GHz. The graph of \(\Delta P(\hat{\omega})\) terminates as the frequency \(\hat{\omega}\) approaches the band gaps. The beam line for the \(\beta_{0}=0.55\) beam, shown in Fig. 9(b), crosses the dispersion curve at a single point, without a considerable interaction frequency interval. The EM power \(\Delta P\) (shown in Fig. 9(d)) is generated over a shorter frequency interval. ## 6 Conclusion We have shown it is possible to design corrugated waveguides where the phase velocity and group velocity of a wave coincide at a point of inflection in the phase velocity, which we refer to as the coincident inflection point (CIP). Using Madey's theorem we have shown that the CIP creates an extended regime of interaction between the EM wave and a charged particle beam. Even for the case where the CIP is close to the band gap (as seen in figure 2), the benefit of having a CIP is that the frequency range of interaction extends over almost the whole band, as seen in figure 9. The CIP has been found using a novel approach, based on an explicit analytic approximate solution, (4) and (7), to Maxwell's equations, which involves Mathieu's equation. This allows us to estimate the geometrical parameters for waveguides with an engineered dispersion curve. We observe good agreement between the analytical dispersion curve and the CST numerical simulations. We show that a waveguiding structure and a synchronous particle beam at a CIP have the potential for efficient broadband THz generation. We note again that the waveguiding structures discussed in this manuscript have corrugations which mirror each other (figure 1), and are distinct from sine waveguides, in which the cross-section remains constant in shape and size and its centre undulates along the waveguide. In contrast, in our approach, the cross-section of the waveguide varies and its centre remains constant.
We present a waveguide-based traveling slow-wave structure that allows for an interaction between a charged particle beam with a velocity of 0.53c and a propagating EM wave with a longitudinal electric field. This work can be extended to more general waveguides, since the EM fields given by equations (4) and (7) are still a good approximate solution to Maxwell's equations even if the corrugation height is not given by equation (13) or equation (12). Replacing equation (13) with a more general periodic function may offer the possibility of extending the CIP to higher velocities. One goal is to extend the maximum velocity (figure 3) to \(c\), in order to exchange energy with ultra-relativistic particles. Furthermore, we can replace \(\mathcal{L}_{x}(z)\) with any other function as long as \(\mathcal{L}_{x}^{\prime}(z)\) and \(\mathcal{L}_{x}^{\prime\prime}(z)\) are small, and hence we can apply these solutions for the EM field in a system that couples the corrugated waveguide to an external structure. Corrugated waveguides could also be used for the acceleration of charged particles. This work will be extended to investigate the use of our corrugated structures for particle acceleration, which would only require a slight change in the synchronisation between wave and beam to flip this interaction from EM wave generation to particle acceleration [22]. ## 7 Backmatter Funding.SSS, JG, SPJ and TB are grateful for the support provided by STFC (the Cockcroft Institute ST/P002056/1 and ST/V001612/1). JG is particularly grateful to Peter Ratoff, director of the Cockcroft Institute (2014-2023), for supporting this research. RS gratefully acknowledges support from the AFRL Directed Energy Chief Scientist Office and the EOARD, grant FA8655-20-1-7002. Disclosures.The authors declare no conflicts of interest. Data availability.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document.See Supplemental Document for supporting content. Author contribution statement.JG suggested the original idea and provided the theoretical work. SJ and RS proposed applying the idea to THz generation. SJ proposed the idea of searching for the CIP. RS proposed the idea of using Madey's theorem, with SSS implementing it. The numerical simulations were principally undertaken by SSS, based on initial work by TB. SSS led the writing of the article, with help from all the authors.
2303.09545
WebSHAP: Towards Explaining Any Machine Learning Models Anywhere
As machine learning (ML) is increasingly integrated into our everyday Web experience, there is a call for transparent and explainable web-based ML. However, existing explainability techniques often require dedicated backend servers, which limit their usefulness as the Web community moves toward in-browser ML for lower latency and greater privacy. To address the pressing need for a client-side explainability solution, we present WebSHAP, the first in-browser tool that adapts the state-of-the-art model-agnostic explainability technique SHAP to the Web environment. Our open-source tool is developed with modern Web technologies such as WebGL that leverage client-side hardware capabilities and make it easy to integrate into existing Web ML applications. We demonstrate WebSHAP in a usage scenario of explaining ML-based loan approval decisions to loan applicants. Reflecting on our work, we discuss the opportunities and challenges for future research on transparent Web ML. WebSHAP is available at https://github.com/poloclub/webshap.
Zijie J. Wang, Duen Horng Chau
2023-03-16T17:56:02Z
http://arxiv.org/abs/2303.09545v1
# WebSHAP: Towards Explaining Any Machine Learning Models Anywhere ###### Abstract. As machine learning (ML) is increasingly integrated into our everyday Web experience, there is a call for transparent and explainable web-based ML. However, existing explainability techniques often require dedicated backend servers, which limit their usefulness as the Web community moves toward in-browser ML for lower latency and greater privacy. To address the pressing need for a client-side explainability solution, we present WebSHAP, the first in-browser tool that adapts the state-of-the-art model-agnostic explainability technique SHAP to the Web environment. Our open-source tool is developed with modern Web technologies such as WebGL that leverage client-side hardware capabilities and make it easy to integrate into existing Web ML applications. We demonstrate WebSHAP in a usage scenario of explaining ML-based loan approval decisions to loan applicants. Reflecting on our work, we discuss the opportunities and challenges for future research on transparent Web ML. WebSHAP is available at [https://github.com/poloclub/webshap](https://github.com/poloclub/webshap).
WebSHAP adapts the state-of-the-art explainability technique Kernel SHAP (§ 3.1) and harnesses modern Web technologies, such as WebGL, WebAssembly, and Web Workers (§ 3.2). To help researchers and developers easily adopt WebSHAP, we open-source our implementation and provide comprehensive documentation and tutorials (§ 3.3). ### Adapting Kernel SHAP SHAP is a state-of-the-art ML explainability framework popularized by Lundberg and Lee (2013). It uses the concept of Shapley values, originally applied in cooperative game theory for credit allocation (Lundberg and Lee, 2013), to calculate attribution scores for each feature for an individual ML prediction. Shapley values are the average contribution of a player in all possible game coalitions. In an ML context, players are input features, and game coalitions are permutations of input features compared to a baseline value. Computing Shapley values is exponentially expensive, as one needs to iterate through all \(2^{M}\) coalitions (permutations) of \(M\) players (input features). To more efficiently compute Shapley values, Lundberg and Lee (2013) introduce Kernel SHAP, a model-agnostic method to approximate the Shapley values of feature \(x\) by solving a least squares problem \(L\) through a linear regression \(g\) with a kernel weight \(\pi\). \[\begin{split}& L\left(f,g,\pi_{x}\right)=\sum_{z^{\prime}\in Z} \left[f\left(h_{x}\left(z^{\prime}\right)\right)-g\left(z^{\prime}\right)\right]^{2}\pi_{ x}\left(z^{\prime}\right)\\ & g\left(z^{\prime}\right)=\phi_{0}+\sum_{j=1}^{M}\phi_{j}z^{ \prime}_{j}\qquad\pi_{x}\left(z^{\prime}\right)=\frac{M-1}{\binom{M}{\left|z^{\prime}\right|}\left|z^{\prime}\right|\left(M-\left|z^{\prime} \right|\right)}\end{split} \tag{1}\] Here, \(f\) is the ML model that we want to explain, and \(f\left(h_{x}\left(z^{\prime}\right)\right)\) is the model's predictions on the sampled data \(h_{x}\left(z^{\prime}\right)\), where each row contains features masked as missing (\(z^{\prime}_{j}=0\)). Users can specify the values to represent missing features through \(h_{x}\), such as filling them with zeros, a subset of training data, or the median of training data. The explanation model \(g\left(z^{\prime}\right)\) is a linear function of binary variables \(z^{\prime}\in\left\{0,1\right\}^{M}\), where \(M\) is the number of input features. The kernel \(\pi_{x}\left(z^{\prime}\right)\) assigns a scalar weight to each sampled instance \(z^{\prime}\) based on the number of non-missing features \(\left|z^{\prime}\right|\). Finally, the least squares problem's estimated solutions \(\phi_{j}\) are the Shapley values. We use Kernel SHAP as the explainability technique in WebSHAP because it is state-of-the-art (Beng et al., 2017) and _model-agnostic_. This means with our tool, users can explain the predictions of _any_ ML model available on the Web, regardless of its architecture or implementation details. Additionally, Kernel SHAP is the most favored explainability technique among ML practitioners according to a recent survey (Shen et al., 2017). By using Kernel SHAP, we aim to make it easier for developers and researchers to adopt WebSHAP and to provide them with the best explanations for their Web ML models. ### Optimizing for the Web **Dataset Sampling.** Solving the weighted least squares problem in Equation 1 can be computationally challenging, as there are \(2^{M}\) feature permutations (\(\left|Z\right|=2^{M}\)).
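To make Equation 1 concrete, here is a minimal NumPy sketch of the Kernel SHAP estimator (our illustration, not WebSHAP's TypeScript implementation); it assumes a single background row for the masked values \(h_{x}\), uses uniform coalition sampling rather than WebSHAP's prioritized sampling, and omits the efficiency constraint that a full implementation would enforce.

```python
import numpy as np
from math import comb

def shap_kernel_weight(M, s):
    # pi_x(z') from Equation 1 for a coalition with s non-missing features
    return (M - 1) / (comb(M, s) * s * (M - s))

def kernel_shap(f, x, background, n_samples=2048, seed=0):
    """Estimate Shapley values for one instance x. `f` maps an (n, M)
    array to (n,) predictions; `background` is one row of values used
    for features masked as missing."""
    rng = np.random.default_rng(seed)
    M = len(x)
    Z = rng.integers(0, 2, size=(n_samples, M))   # coalitions z'
    Z = Z[(Z.sum(1) > 0) & (Z.sum(1) < M)]        # drop infinite-weight rows
    y = f(np.where(Z == 1, x, background))        # f(h_x(z'))
    w = np.sqrt([shap_kernel_weight(M, int(s)) for s in Z.sum(1)])
    A = np.hstack([np.ones((len(Z), 1)), Z])      # g(z') = phi_0 + z' . phi
    phi, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return phi[0], phi[1:]                        # base value, Shapley values
```

Note that the \(\binom{M}{|z'|}\) term in the denominator makes coalitions with very few or very many present features carry the largest weights, which is exactly the property the prioritized sampling strategy described next exploits.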
WebSHAP tackles this challenge by implementing a dataset sampling strategy as described in (Shen et al., 2017) and leveraging modern Web technologies. First, WebSHAP avoids sampling permutations for features with identical input and missing values, as their Shapley values will always be zero. When dealing with input data with many features (\(M>30\)), WebSHAP does not sample all feature permutations: it prioritizes permutations with a large or small \(\left|z^{\prime}\right|\), as these instances have a larger kernel weight \(\pi_{x}\left(z^{\prime}\right)\) and thus contribute more to the solutions \(\phi_{j}\). **Leveraging Modern Web Technologies.** WebSHAP employs the latest advancements in Web technologies and tooling to provide efficient and effective explanations of Web ML models. For example, when solving the weighted least squares problem, WebSHAP uses WebGL to accelerate matrix multiplications through _TensorFlow.js_ (Shen et al., 2017), where matrices are stored as WebGL textures and matrix multiplication is implemented in a WebGL shader. For instance, using the Firefox browser on a MacBook, WebSHAP only takes about 600ms to multiply two matrices with dimensions of \(2134\times 2134\) through WebGL, a significant improvement over the 18 seconds it would take without WebGL. Additionally, we provide examples that use Web Workers to run WebSHAP in background threads to ensure that the Shapley value computation does not block the UI thread in browsers. Finally, WebSHAP is model-agnostic and capable of explaining any ML models available on the Web, including models compiled from non-Web languages. One can even use WebSHAP to explain an ML model running in a WebAssembly sandbox environment (e.g., the ML models in § 2 and in Appendix A). ### Open-source and Easy to Use To help Web developers and researchers easily adopt WebSHAP, we open-source our implementation and design an API similar to Kernel SHAP's Python implementation (Shen et al., 2017). With WebSHAP, explaining a Web ML model's prediction is as simple as two lines of code: one passes a JavaScript prediction function, the missing feature values, and the data point. Users also have the option to easily configure the number of feature permutations to sample (\(\left|Z\right|\) in Equation 1). Developed with TypeScript, our tool offers maintainable and scalable code, allowing users to easily extend and adapt it for their existing applications. We provide detailed documentation and tutorials.2 We publish WebSHAP in the popular Web package repository npm Registry.3 Users can easily install our tool and use it in both browser and _Node.js_ (Beng et al., 2017) environments. Footnote 2: WebSHAP documentation: [https://poloclub.github.io/webshap/doc](https://poloclub.github.io/webshap/doc) Footnote 3: WebSHAP npm repository: [https://www.npmjs.com/package/webshap](https://www.npmjs.com/package/webshap) ## 4. Related Work **Model-agnostic Explanation Methods.** Researchers have proposed a wide array of model-agnostic explanation techniques (e.g., 13; 16; 17). Given a trained ML model and a data point, these techniques aim to explain how different features contribute to the model's prediction. Users can apply these techniques to any model class. A recent survey of ML practitioners (Shen et al., 2017) shows that Kernel SHAP (Shen et al., 2017), which approximates feature attributions using a game-theoretic approach, is the most favored technique.
Based on Kernel SHAP, researchers have proposed methods such as SAGE (Beng et al., 2017) for estimating global feature importance and shapr (Shen et al., 2017) for models with many dependent features. Advancing these related tools, our work is the first adaptation of Kernel SHAP for the Web. **Explainable ML on the Web.** The Web is a popular platform for explainable ML tools. To help _ML novices_ learn about the inner workings of modern ML technologies, researchers develop Web-based visualization tools to interactively explain how different ML models work, such as GAN Lab (Shen et al., 2017) and CNN Explainer (Shen et al., 2017). Researchers have also built web-based visual analytics tools to empower _ML experts_ to interpret their models (e.g., 22; 26; 27). However, these tools often require dedicated backend servers to run ML models. More recently, there is a growing number of explainability tools that can run entirely in the user's browser. For example, with pre-computation, Microscope (19) allows users to analyze neuron representations in their browsers. GAM Changer (24) is a web-based tool that helps users vet and fix inherently interpretable Generalized Additive Models by running model inference with WebAssembly. In contrast, WebSHAP does not need backend servers or pre-computation, providing complete in-browser model explanations for any model class. ## 5. Discussion and Future Work Reflecting on our development of WebSHAP, we highlight the advantages and limitations of transparent and explainable Web ML. **Advantages and Opportunities.** The key benefits of enabling ML explainability on the Web are _privacy_, _ubiquity_, and _interactivity_. WebSHAP empowers users to interpret Web ML models directly on their devices, keeping sensitive model inputs secure (e.g., financial and medical information). As the Web is ubiquitous, users can use WebSHAP on their computers, tablets, phones, and even IoT devices (e.g., smart refrigerators). Using the Web as a platform, WebSHAP makes it easier for developers to deploy explainable ML systems and enable user interactions. **Future research opportunities** include: * **Enhancing WebSHAP with new Web APIs** such as Service Worker for offline explainability, WebSocket for collaborative interpretations, and Web Crypto for verifiable explanations. * **Integrating WebSHAP directly into browsers** such as through the Web Inspector tools. It will allow users to easily view and interpret any ML models running on a Web page. * **Developing web-based interactive visualization tools** to help end-users easily digest model explanations. **Limitations and Challenges.** We first acknowledge the limitations of using a post-hoc explainability technique, as it can produce inaccurate and unstable explanations (18). Also, developing explainable ML models for the Web faces unique challenges, including limited computation resources in browsers, varying capacities among edge devices, and a lack of established Web ML APIs and libraries. With the ML model in § 2, we compare average SHAP computation times between WebSHAP and Kernel SHAP (Python) across different background data sizes \(N\) on a 64GB RAM MacBook (see right). WebSHAP is slower than Kernel SHAP, especially when \(N\) is large, and the main factor is the XGBoost inference time difference. ## 6. Conclusion We present WebSHAP, an in-browser, open-source explainability library for Web ML. Our tool adapts Kernel SHAP and leverages modern Web technologies for easy integration into existing Web ML applications.
To demonstrate its potential, we present a usage scenario of real-time explanation of a web-based loan approval prediction model. In pursuit of the "View Source" ethos of the Web, we aim for WebSHAP to be a stepping stone towards transparent, explainable, and trustworthy ML on the Web. ## Acknowledgments This work was supported in part by a J.P. Morgan PhD Fellowship, Apple Scholars in AI/ML PhD fellowship, and DARPA GARD.
2307.14785
Improving Aspect-Based Sentiment with End-to-End Semantic Role Labeling Model
This paper presents a series of approaches aimed at enhancing the performance of Aspect-Based Sentiment Analysis (ABSA) by utilizing extracted semantic information from a Semantic Role Labeling (SRL) model. We propose a novel end-to-end Semantic Role Labeling model that effectively captures most of the structured semantic information within the Transformer hidden state. We believe that this end-to-end model is well-suited for our newly proposed models that incorporate semantic information. We evaluate the proposed models in two languages, English and Czech, employing ELECTRA-small models. Our combined models improve ABSA performance in both languages. Moreover, we achieved new state-of-the-art results on the Czech ABSA.
Pavel Přibáň, Ondřej Pražák
2023-07-27T11:28:16Z
http://arxiv.org/abs/2307.14785v1
# Improving Aspect-Based Sentiment with End-to-End Semantic Role Labeling Model ###### Abstract This paper presents a series of approaches aimed at enhancing the performance of Aspect-Based Sentiment Analysis (ABSA) by utilizing extracted semantic information from a Semantic Role Labeling (SRL) model. We propose a novel end-to-end Semantic Role Labeling model that effectively captures most of the structured semantic information within the Transformer hidden state. We believe that this end-to-end model is well-suited for our newly proposed models that incorporate semantic information. We evaluate the proposed models in two languages, English and Czech, employing ELECTRA-small models. Our combined models improve ABSA performance in both languages. Moreover, we achieved new state-of-the-art results on the Czech ABSA. ## 1 Introduction In recent years, pre-trained BERT-like models based on the Transformer (Vaswani et al., 2017) architecture demonstrated their performance superiority across various natural language processing (NLP) tasks. In this paper, we study the possibility of a combination of two seemingly unrelated NLP tasks: Aspect-Based Sentiment Analysis (ABSA) and Semantic Role Labeling (SRL). We believe that the structured semantic information of a sentence extracted from an SRL model can enhance the performance of an ABSA model. We investigate our assumption on the ELECTRA (Clark et al., 2020) model architecture since it is a lighter and smaller alternative to popular and commonly used models such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019). Because the ELECTRA model is smaller in terms of the number of parameters, it requires less GPU memory and time to be fine-tuned. Sentiment analysis (SA) is an essential part of NLP. The most prevalent SA task is _Sentiment Classification_, where the objective is to classify a text fragment (e.g., a sentence or review) as _positive_ or _negative_, or possibly as _neutral_. In this type of task, we assume that there is only one opinion in the text. In reality, as illustrated in Figure 1, this assumption often does not hold true (Liu, 2012). Aspect-Based Sentiment Analysis (Liu, 2012; Pontiki et al., 2014) focuses on detecting aspects (e.g., food or service in the restaurant reviews domain) and determining their polarity, enabling more detailed analysis and understanding of the expressed sentiment. As shown by Pontiki et al. (2014), the ABSA task can be further divided into four subtasks: _Aspect term extraction_ (TE), _Aspect term polarity_ (TP), _Aspect category extraction_ (CE), and _Aspect category polarity_ (CP). We aim at the CE and CP subtasks,1 and we treat them as a single classification task, see Section 3.2. As depicted in Figure 1, the goal of the CE subtask is to detect a set of aspect categories within a given sentence, i.e., for a given text \(S=\{w_{1},w_{2},\ldots w_{n}\}\) assign a set \(M=\{a_{1},a_{2},\ldots,a_{m}\}\) of \(m\) aspect categories, where \(m\in[0,k]\), \(M\subset A\) and \(A\) is a set of \(k\) predefined aspect categories \(A=\{a_{1},a_{2},\ldots,a_{k}\}\). The goal of CP is to assign one of the predefined polarity labels \(p\) to each of the given (or predicted) aspect categories of the set \(M\) for the given text \(S\), where \(p\in P=\{positive,negative,neutral\}\). Footnote 1: See (Pontiki et al., 2014) for a detailed description of all the subtasks. The Semantic Role Labeling task (Gildea and Jurafsky, 2002) belongs among shallow semantic parsing techniques.

Figure 1: Example of CE and CP subtasks of ABSA.
The SRL goal is to identify and categorize semantic relationships, or _semantic roles_, of given _predicates_. Verbs, such as "believe" or "cook", are natural predicates, but certain nouns are also accepted as predicates. Put simply, semantic roles are abstractions of predicate arguments. For example, the semantic roles for "believe" can be _Agent_ (a believer) and _Theme_ (a statement), and for "cook" _Agent_ (a chef), _Patient_ (a food), and _Instrument_ (a device for cooking) - see examples in Figure 2. The theory of predicates and their roles is very well established in several linguistic resources such as PropBank Palmer et al. (2005) or FrameNet Baker et al. (1998). In this work, we introduce a novel end-to-end SRL model that offers enhanced compatibility with other NLP tasks. Unlike other BERT-based models Shi and Lin (2019); Papay et al. (2021), our proposed approach integrates the complete semantic information into the hidden state of the Transformer. This end-to-end SRL model is particularly well-suited for combination with the Aspect-Based Sentiment Analysis task, as it encapsulates the entire predicate-argument structure of the sentence within a single hidden state, in contrast to the approach of Shi and Lin (2019), which encodes each argument separately and requires gold arguments on input. Our model, on the other hand, only requires the input text. We assume that leveraging the syntax and semantic information extracted from SRL can significantly enhance the performance of the aspect category polarity subtask. This assumption is grounded in the notion that the SRL information has the potential to unveil valuable and pertinent relations between entities within a given sentence, which play a crucial role in accurate aspect category polarity predictions. This holds particularly true for longer and more complex sentences, where a broader contextual understanding becomes essential. For a concrete illustration, please refer to Appendix B. To combine the SRL and ABSA models effectively, we propose three different approaches. Through their integration, we demonstrate performance improvements on the ABSA task for both the English and Czech languages, employing ELECTRA-small models. Moreover, we achieved new state-of-the-art (SotA) results on the Czech ABSA task. We publicly release our source codes2. Footnote 2: [https://github.com/pauli31/srl-aspect-based-sentiment](https://github.com/pauli31/srl-aspect-based-sentiment) ## 2 Related Work The early studies Hu and Liu (2004); Ganu et al. (2009); Kiritchenko et al. (2014); Hercig et al. (2016) focusing on the English ABSA task relied on word n-grams, lexicons, and other feature extraction techniques in combination with supervised machine learning algorithms such as support vector machine classifiers. These approaches were surpassed by deep neural network (DNN) models Tang et al. (2016); Ma et al. (2017); Chen et al. (2017); Fan et al. (2018) that typically employed recurrent neural networks, e.g., Long Short-Term Memory (LSTM) Hochreiter and Schmidhuber (1997). Recently, BERT-like models were successfully applied to the ABSA task. Sun et al. (2019) solve the CE and CP subtasks at once by introducing auxiliary sentences and transforming the problem to a sentence-pair classification task. Xu et al. (2019) and Rietzler et al. (2020) improved results by pre-training the model on the task domain data. Liu et al. (2021) treated the ABSA task as a text generation task, outperforming the previous SotA results.
Zhang et al. (2019); Liang et al. (2022) employed graph convolutional networks. Another related work can be found in Li et al. (2020). In Sido et al. (2021); Priban and Steinberger (2021); Lehecka et al. (2020); Priban and Steinberger (2022), BERT-like models were used for sentiment classification and subjectivity classification; to the best of our knowledge, there is no application of BERT-like models for ABSA in the Czech language. Steinberger et al. (2014) introduced the first Czech ABSA dataset from the restaurant reviews domain. They used a Maximum Entropy classifier and Conditional Random Fields for their baselines. Hercig et al. (2016) extended this dataset and improved the baseline by adding semantic features. Lenc and Hercig (2016) applied a convolutional neural network for the CE task and an RNN for the CP task to the dataset from Hercig et al. (2016). The pioneering approaches to the SRL task (Gildea and Jurafsky, 2002) used standard feature engineering methods (Moschitti et al., 2008). Since SRL is closely bound to syntax, adding syntactic information is very helpful. In the CoNLL 2008 shared task (Surdeanu et al., 2008), a syntax-based SRL task was proposed.

Figure 2: Examples of SRL annotations.

In more recent years (with DNNs), attention was drawn back to standard span-based SRL, where SRL is formed as (linear) tagging. Many approaches are based on LSTMs He et al. (2017). Later, Tan et al. (2018), inspired by the Transformer, proposed a self-attention-based model. Several end-to-end models for all SRL subtasks were also introduced. He et al. (2018) abandon the BIO tagging scheme and instead predict predicate-argument span tuples by searching through the possible combinations. They use a multi-layer bi-LSTM to produce contextualized representations of predicates and argument spans. The most recent approaches use BERT-like pretrained models. Shi and Lin (2019) proposed a simple BERT approach for argument identification and classification; this means, in their setting, the gold predicates are known. Papay et al. (2021) propose regular-constrained conditional random field (CRF) decoding on top of the same model. There are many other complex deep models (Zhang et al., 2021; Wang et al., 2021). For our experiments, we need an end-to-end SRL model which encodes most of the information in the Transformer's hidden state. However, to the best of our knowledge, there is no such model. As a result, we introduce our end-to-end model later in this paper to fulfil this need. Various approaches have been made to enhance one task through the integration of another, usually using multi-task learning techniques. Hashimoto et al. (2016) proposed a joint model for learning the whole NLP stack (POS tagging, chunking, parsing, semantic relatedness, entailment). They train a single model for all tasks in a sequence (chunking after POS tagging, etc.). At each layer (for each task), they use regularization on the difference from the previous layer's weights. They show that the tasks help each other significantly. Li et al. (2021) use dependency neighbourhood prediction and part-of-speech tagging as auxiliary tasks for ABSA. They introduced the new dependency neighbourhood prediction task to utilize syntactic dependency information to improve the performance of the sentiment classification task. They train the auxiliary tasks together with the main sentiment classification task. The task classifies each token as either in the dependency neighbourhood or not.
The dependency neighbourhood for a given token in a sentence is defined as the tokens in the sentence that are linked to the given token through, at most, \(n\)-hop dependency relations. Zhang et al. (2020) pretrain a BERT model on the semantic role labeling task and show that the pretraining helps many natural language understanding tasks. These examples of multi-task learning demonstrate the potential benefits of incorporating additional tasks in NLP models. ## 3 Models To find an effective way to combine the models, we first fine-tune the individual models separately to find the optimal set of hyper-parameters for the individual tasks. Moreover, we need the fine-tuned SRL model as input for the combined models. For ABSA, we adopt the model proposed by Sun et al. (2019). We propose a new SRL end-to-end model, specifically designed for seamless integration with other tasks. ### Semantic Role Labeling Our goal is to train a universal encoder that effectively captures SRL information from a plain-text input. To accomplish this, we propose an end-to-end model with a single projection layer on top of the ELECTRA encoder (or any other pre-trained language model). This way, all the information useful for predicting role labels is encoded in the last hidden state of the encoder. Consequently, we can use this representation in other tasks. Although our end-to-end model exhibits lower performance than the commonly used BERT SRL model (Shi and Lin, 2019; Sido et al., 2021), we believe it is more suitable for this task. In our end-to-end model, we first encode the whole sentence and then iterate over all possible word pairs (the first word is a potential predicate and the second is a potential argument). For each potential predicate-argument pair, we first concatenate the representations of the predicate and the argument and then classify the argument role. If the potential predicate is not a real predicate word or the potential argument is not an argument of the predicate, the role of the pair is set to _Other_. If a word is represented by multiple subword tokens, only the first token is classified. This is common practice in tagging tasks, where the model learns to encode the semantics of a multi-token word into the first subword; each word then has a single output token for its classification. Our approach differs from that of Zhang et al. (2021) in terms of how the predicate-argument structure of the sentence is encoded within the transformer model. While Zhang et al. (2021) encode each argument separately and require gold arguments on input, our model only requires plain text as input. In other words, our model requires only text as input, but the model proposed by Zhang et al. (2021) operates on text-predicate pairs, producing representations solely for the input pair rather than the entire SRL output encompassing all predicates within the sentence. Figure 3 shows the schema of our end-to-end SRL model. For our approach, it is necessary to have the same format of input (i.e., plain text) for both tasks that are combined. This is the reason why we need our end-to-end SRL model. For multi-task learning, we need a general-purpose model, the same for both tasks. Task-specific models may yield better results on the SRL task, but they are oriented only towards the SRL task, which makes their integration with ABSA or their use in multi-task learning challenging, if not impossible.
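As a concrete reading of this pairwise design, here is a minimal PyTorch sketch (our illustration, not the authors' released code); batching, subword pooling, and loss masking are omitted, and the encoder producing the per-word states is assumed to be given.

```python
import torch
import torch.nn as nn

class PairwiseRoleHead(nn.Module):
    """Classify every (potential predicate, potential argument) word pair
    with a single projection layer; n_roles includes the "Other" label."""

    def __init__(self, hidden_size: int, n_roles: int):
        super().__init__()
        self.proj = nn.Linear(2 * hidden_size, n_roles)

    def forward(self, word_states: torch.Tensor) -> torch.Tensor:
        # word_states: (n_words, hidden) -- the encoder state of each
        # word's first subword token
        n, h = word_states.shape
        pred = word_states.unsqueeze(1).expand(n, n, h)  # row i: predicate i
        arg = word_states.unsqueeze(0).expand(n, n, h)   # column j: argument j
        pairs = torch.cat([pred, arg], dim=-1)           # (n, n, 2 * hidden)
        return self.proj(pairs)                          # (n, n, n_roles) logits
```

Training then reduces to cross-entropy over all \(n^{2}\) pairs, with non-predicate rows and non-argument pairs labeled _Other_.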
### Aspect-Based Sentiment As we mentioned in the introduction, we tackle the CE and CP subtasks of ABSA as one classification task. We adopt the same approach as Sun et al. (2019): we construct auxiliary sentences and convert the subtasks to a binary classification task. We use the NLI-B approach from Sun et al. (2019) to build the auxiliary sentences. For each sentence, we build multiple auxiliary pseudo-sentences that are generated for every combination of all polarity labels and aspect categories3. Each example has a binary label \(l\in\{0,1\}\); \(l=1\) if the auxiliary sentence corresponds to the original labels, \(l=0\) otherwise. We also add the artificial polarity class _none_ that is assigned the binary label \(l=1\) if there is no aspect category for a given sentence. The pseudo auxiliary sentence consists only of a polarity label and an aspect category in a given language. For example, the auxiliary sentences for all aspects of the sentence "_The burger was excellent but the waitress was unpleasant_" are shown in Figure 4. Footnote 3: For English we have four polarity labels plus the artificial label _none_ and five aspect categories, i.e. \(25\) possible auxiliary sentences. For Czech there are \(20\) possible sentences (\(3+1\) polarity labels and five aspect categories). Each auxiliary sentence is combined with the original sentence, separated with a [SEP] token, and forms one training example, e.g., [CLS]_positive - food_[SEP]_the burger was excellent but the waitress was unpleasant_[SEP]. We fine-tune the pretrained transformer model for the binary classification task on all generated training examples, as in Sun et al. (2019); a code sketch of this construction is given below. ### Combined Models We propose several models designed to use the SRL representation to enhance ABSA performance. The first type of model predicts aspect and sentiment using concatenated representations from both the SRL and ABSA encoders.

Figure 4: Example of auxiliary sentences.

Figure 3: End-to-end SRL model architecture.

The SRL encoder is pre-trained (pre-fine-tuned) on the SRL data, and its weights remain fixed during sentiment training. Since SRL is a token-level task, we need to reduce the sequential dimension before performing the concatenation step. To address this, we employ two approaches: simple average-over-time pooling (named _concat-avg_) and a convolution layer followed by max-over-time pooling (named _concat-conv_). Figure 5 shows the model architecture. The last model uses standard multi-task learning. We utilize a single Transformer encoder with two classification heads: one for the sentiment (a standard head for sequence classification) and the other for SRL (the head architecture is presented in the previous section with the end-to-end SRL model). The model is trained using alternating batches; that is, we use different training data for the two tasks and do not mix them within a batch. In a single batch, we provide only ABSA or SRL data. See Figure 6 for the model's architecture. ## 4 Experiments In our experiments, we aim to verify our idea that injected SRL information can improve the results of the ABSA task, particularly the CP subtask. ### Datasets & Models Fine-Tuning For Semantic Role Labeling, we use the OntoNotes 5.0 dataset (Weischedel et al., 2013) for English and CoNLL 2009 (Hajic et al., 2009) for Czech. As metrics, we report the whole-role F1 score for both datasets. Additionally, for English, we report the official CoNLL 2005 score as a comparative metric, as it is the standard metric used with OntoNotes.
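As an aside, the following minimal sketch illustrates the auxiliary-sentence construction of Section 3.2 referenced above (our illustration; the exact label strings are assumptions based on the "_positive - food_" example):

```python
from itertools import product

CATEGORIES = ["food", "service", "price", "ambience", "general"]
# English setting: four polarity labels plus the artificial "none" class
POLARITIES = ["positive", "negative", "neutral", "conflict", "none"]

def nli_b_examples(sentence, gold):
    """`gold` maps aspect category -> polarity label for the sentence,
    e.g. {"food": "positive", "service": "negative"}. Yields
    (auxiliary sentence, original sentence, binary label) triples,
    25 per sentence in the English setting."""
    for polarity, category in product(POLARITIES, CATEGORIES):
        aux = f"{polarity} - {category}"      # e.g. "positive - food"
        if category in gold:
            label = int(gold[category] == polarity)
        else:
            label = int(polarity == "none")   # aspect absent -> "none" is true
        yield aux, sentence, label
```

Each yielded pair is then packed as [CLS] auxiliary sentence [SEP] original sentence [SEP] and fed to the binary classifier.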
For Aspect-Based Sentiment, we use the widely used English dataset from Pontiki et al. (2014) that consists of 3,044 train and 800 test sentences from the restaurant domain. The English dataset contains four sentiment labels: _positive_, _negative_, _neutral_, and _conflict_. Further, we split4 the original training part of 3,044 sentences into development (10%) and training (90%) parts. Footnote 4: For both English and Czech we provide a script to obtain the same split distribution. For the Czech experiments, we employ the dataset from Hercig et al. (2016) with 2,149 sentences from the restaurant domain. Unlike the English dataset, there are only three polarity labels: _positive_, _negative_, and _neutral_. Because the dataset has no official split, we divided5 the data into training, development, and testing parts with the following ratio: \(72\%\) for training, \(8\%\) for the development evaluation, and \(20\%\) for testing. Both the Czech and English datasets contain five aspect categories: _food_, _service_, _price_, _ambience_, and _general_. For our experiments on English, we use the pre-trained _ELECTRA-small_ model introduced by Clark et al. (2020), which has 14M parameters. For Czech, we employ the pre-trained monolingual model _Small-E-Czech_ (Kocian et al., 2021) with the same size and architecture. Firstly, we train separate models for both tasks (ABSA and SRL) and select the optimal set of hyper-parameters on the development data. We then use the same hyper-parameters in the combined models. For the details of the hyper-parameters, see Appendix A. Footnote 5: For both English and Czech we provide a script to obtain the same split distribution. ### Results & Discussion We report the results of our end-to-end SRL model in Table 3. As we expected, our model performs worse than the model proposed by Shi and Lin (2019), but the results are reasonably high (considering that it does not have gold predicates on input).

Figure 5: Concat model architecture.

Figure 6: Multi-task model architecture.

Results for our ABSA experiments in Czech and English are shown in Tables 1 and 2, respectively. The _baseline_ refers to the model described in Section 3.2 without any injected SRL information. The SotA results are underlined and the best results from our experiments are bold. We include the results with the 95% confidence interval (experiments repeated 12 times). We use F1 Micro and accuracy for the CE and CP subtasks, respectively. Based on the results presented in Tables 1 and 2, we can observe that our proposed models (_concat-conv_ and _concat-avg_) with injected SRL information consistently enhance results for the CP subtask in both languages. These improvements are statistically significant. The performance of the _concat-conv_ and _concat-avg_ models does not exhibit a significant difference. In the CE subtask, we achieve the same results as the _baseline_ model. We think that the CE subtask is more distant from the SRL task than the CP subtask and, therefore, the injection of the semantic information does not help. In other words, the semantic structure of the sentence may not play a crucial role in aspect detection (which can be viewed as multi-label text classification). On the other hand, for the CP subtask, the combined models can leverage the semantic structure of the sentence to their advantage. For the Czech ABSA dataset we achieve new SotA results on both subtasks5.
As we expected, we did not outperform the current SotA results for the English dataset, as our ELECTRA model has considerably fewer parameters than the SotA models. For Czech, the _multi-task_ model exhibited a marginal improvement in the results and, generally, the model was significantly inferior to our other models.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Category Extraction} & \multicolumn{2}{c}{Category Polarity} \\ \cline{2-6} & F1 Micro & Precision & Recall & Acc \#3 & Acc \#2 \\ \hline baseline & 86.04\({}^{\pm 0.36}\) & 86.48\({}^{\pm 0.97}\) & 85.62\({}^{\pm 0.65}\) & 75.58\({}^{\pm 0.55}\) & 88.69\({}^{\pm 0.26}\) \\ concat-conv & **86.58\({}^{\pm 0.54}\)** & **86.90\({}^{\pm 0.51}\)** & **86.28\({}^{\pm 0.94}\)** & **79.20\({}^{\pm 0.48}\)** & **90.26\({}^{\pm 0.58}\)** \\ concat-avg & 86.34\({}^{\pm 0.57}\) & 86.57\({}^{\pm 0.84}\) & 86.12\({}^{\pm 1.08}\) & 78.33\({}^{\pm 0.64}\) & 90.06\({}^{\pm 0.79}\) \\ multi-task & 85.62\({}^{\pm 0.63}\) & 86.24\({}^{\pm 0.66}\) & 85.01\({}^{\pm 0.66}\) & 77.27\({}^{\pm 0.69}\) & 89.00\({}^{\pm 0.63}\) \\ baseline (Hercig et al., 2016)* & 71.70 & - & - & 69.70 & - \\ best (Hercig et al., 2016)* & 80.00 & - & - & 75.20 & - \\ CNN2 (Lenc and Hercig, 2016) & - & - & - & 69.00\({}^{\pm 2.00}\) & - \\ \hline \hline \end{tabular} \end{table} Table 1: Czech results for the category extraction (CE) subtask as F1 Micro score, Precision and Recall. Results for the category polarity (CP) subtask as accuracy for three polarity labels (Acc #3) and binary polarity labels (Acc #2). Results marked with the * symbol were obtained by 10-fold cross-validation.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Category Extraction} & \multicolumn{3}{c}{Category Polarity} \\ \cline{2-7} & F1 Micro & Precision & Recall & Acc \#4 & Acc \#3 & Acc \#2 \\ \hline baseline & 89.50\({}^{\pm 0.45}\) & 90.95\({}^{\pm 0.70}\) & 88.09\({}^{\pm 0.48}\) & 83.03\({}^{\pm 0.43}\) & 86.91\({}^{\pm 0.55}\) & 92.74\({}^{\pm 0.53}\) \\ concat-conv & **89.74\({}^{\pm 0.55}\)** & **91.24\({}^{\pm 0.54}\)** & **88.28\({}^{\pm 0.77}\)** & **84.19\({}^{\pm 0.49}\)** & **88.08\({}^{\pm 0.41}\)** & **93.76\({}^{\pm 0.46}\)** \\ concat-avg & 89.58\({}^{\pm 0.43}\) & 91.15\({}^{\pm 0.60}\) & 88.08\({}^{\pm 0.66}\) & 84.13\({}^{\pm 0.51}\) & 87.95\({}^{\pm 0.46}\) & 93.49\({}^{\pm 0.44}\) \\ multi-task & 89.36\({}^{\pm 0.15}\) & 90.72\({}^{\pm 0.52}\) & 88.05\({}^{\pm 0.44}\) & 82.83\({}^{\pm 1.10}\) & 87.05\({}^{\pm 1.21}\) & 92.74\({}^{\pm 0.79}\) \\ XRCE (Brun et al., 2014) & 82.29 & 83.23 & 81.37 & 78.10 & - & - \\ NRC (Kiritchenko et al., 2014) & 88.58 & 91.04 & 86.24 & 82.90 & - & - \\ BERT single (Sun et al., 2019) & 90.89 & 92.78 & 89.07 & 83.70 & 86.90 & 93.30 \\ NLI-B (Sun et al., 2019) & 92.18 & 93.57 & 90.83 & 84.60 & 88.70 & 95.10 \\ QACG-B (Wu and Ong, 2021) & 92.64 & 94.38\({}^{\pm 0.31}\) & 90.97\({}^{\pm 0.28}\) & 86.80\({}^{\pm 0.80}\) & 90.10\({}^{\pm 0.30}\) & 95.60\({}^{\pm 0.40}\) \\ BART generation (Liu et al., 2021) & 92.80 & 95.18 & 90.54 & - & 90.55\({}^{\pm 0.32}\) & - \\ \hline \hline \end{tabular} \end{table} Table 2: English results for the category extraction (CE) subtask as F1 Micro score, Precision and Recall. Results for the category polarity (CP) subtask as accuracy for four polarity labels (Acc #4), three polarity labels (Acc #3) and binary polarity labels (Acc #2).
\begin{table} \begin{tabular}{l c c c} \hline \hline Model & EN & EN-conll05 & CS \\ \hline (Shi and Lin, 2019) & 88.89 & 85.20 & 83.09 \\ end-to-end (ours) & 84.54 & 81.51 & 79.74 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of results of the standard model and our end-to-end SRL model (reported in F1 scores, the official metrics, for the datasets used). improvement in the results and generally, the model was significantly inferior to our other models. We decided to use the smaller ELECTRA-based models because of their much smaller computation requirements. However, in future work, we plan comparison with larger models like BERT or RoBERTa to obtain the overall performance overview of our approach. ## 5 Conclusion In this work, we introduce a novel end-to-end SRL model that we use to improve the aspect category polarity task. Our contribution lies in proposing several methods to integrate SRL and ABSA models, which ultimately lead to improved performance. The experimental results validate our initial assumption that leveraging semantic information extracted from an SRL model can significantly enhance the aspect category polarity task. Importantly, the approaches we propose are versatile and can be applied to combine Transformer-based models for other related tasks as well, extending the scope of their applicability. Moreover, we believe that our approaches hold even greater potential in addressing other ABSA subtasks, namely term extraction and term polarity classification. These subtasks could benefit from the integration of SRL and ABSA models in a similar manner. Further, we would like to validate our approach on larger models, for example, BERT or RoBERTa. ## Acknowledgements This work has been partly supported by grant No. SGS-2022-016 Advanced methods of data processing and analysis. Computational resources were provided by the e-INFRA CZ project (ID:90140), supported by the Ministry of Education, Youth and Sports of the Czech Republic.
2303.06676
Local Search For SMT On Linear and Multilinear Real Arithmetic
Satisfiability Modulo Theories (SMT) has significant applications in various domains. In this paper, we focus on quantifier-free Satisfiability Modulo Real Arithmetic, referred to as SMT(RA), including both linear and non-linear real arithmetic theories. As for the non-linear real arithmetic theory, we focus on one of its important fragments where the atomic constraints are multi-linear. We propose the first local search algorithm for SMT(RA), called LocalSMT(RA), based on two novel ideas. First, an interval-based operator is proposed to cooperate with the traditional local search operator by considering the interval information. Moreover, we propose a tie-breaking mechanism to further evaluate the operations when the operations are indistinguishable according to the score function. Experiments are conducted to evaluate LocalSMT(RA) on benchmarks from SMT-LIB. The results show that LocalSMT(RA) is competitive with the state-of-the-art SMT solvers, and performs particularly well on multi-linear instances.
Bohan Li, Shaowei Cai
2023-03-12T14:33:53Z
http://arxiv.org/abs/2303.06676v2
# Local Search For SMT On Linear and Multilinear Real Arithmetic ###### Abstract Satisfiability Modulo Theories (SMT) has significant applications in various domains. In this paper, we focus on Satisfiability Modulo Real Arithmetic, referred to as SMT(RA), including both linear and non-linear real arithmetic theories. As for the non-linear real arithmetic theory, we focus on one of its important fragments where the atomic constraints are multilinear. We propose the first local search algorithm for SMT(RA), called LS-RA, based on two novel ideas. First, an interval-based operator is proposed to cooperate with the traditional local search operator by considering the interval information. Moreover, we propose a tie-breaking mechanism to further evaluate the operations when they are indistinguishable according to the score function. Experiments are conducted to evaluate LS-RA on benchmarks from SMT-LIB. The results show that LS-RA is competitive with the state-of-the-art SMT solvers, and performs particularly well on multilinear instances. Keywords: SMT, Local Search, Linear Real Arithmetic, Multilinear Arithmetic.

## 1 Introduction

Satisfiability Modulo Theories (SMT) is the problem of checking the satisfiability of a first-order logic formula with respect to certain background theories. It has been applied in various areas, including program verification and termination analysis [24, 8], symbolic execution [1] and test-case generation [28], etc. In this paper, we focus on the theory of real arithmetic, consisting of atomic constraints in the form of polynomial equalities or inequalities over real variables. The theory can be divided into two categories, namely _linear real arithmetic_ (LRA) and _non-linear real arithmetic_ (NRA), based on whether the arithmetic atomic constraints are linear or not. As for NRA, this paper concerns one of its important fragments where the atomic constraints are multilinear. The SMT problem with the background theory of LRA or NRA is to determine the satisfiability of the Boolean combination of the respective atomic constraints, referred to as SMT(LRA) and SMT(NRA). In general, we refer to the SMT problem on the theory of real arithmetic as SMT(RA). The mainstream approach for solving SMT(RA) is the _lazy_ approach [30, 4], also known as DPLL(T) [27], which relies on the interaction of a SAT solver with a theory solver. Most state-of-the-art SMT solvers supporting the theory of real arithmetic are mainly based on the _lazy_ approach, including Z3 [18], Yices2 [19], SMT-RAT [17], CVC5 [3], OpenSMT [9] and MathSAT5 [13]. In the DPLL(T) framework, the SMT formula is abstracted into a Boolean formula by replacing arithmetic atomic constraints with fresh Boolean variables. A SAT solver is employed to reason about the Boolean structure, while a theory solver is invoked to receive the set of theory constraints determined by the SAT solver, and solve the conjunction of these theory constraints, including consistency checking of the assignments and theory-based deduction. The efforts in the _lazy_ approach are mainly devoted to designing effective decision procedures, serving as theory solvers to deal with the conjunction of theory constraints. The core reasoning module for LRA integrated in DPLL(T) is a variant of the _simplex_ algorithm dedicated to SMT solving, proposed in [20]. Another approach for solving LRA constraint systems is _Fourier-Motzkin_ variable elimination [7], which often shows worse performance than the _simplex_ algorithm.
As for non-linear real arithmetic, _cylindrical algebraic decomposition_ (CAD) [14] is the most widely used decision procedure; CAD has been adapted and embedded as a theory solver in the SMT-RAT solver [17], with improvements since [26]. Moreover, an elegant variation of the CAD method is instantiated in the model-constructing satisfiability calculus framework of Z3 [22]. Other well-known methods use Gröbner bases [23] or the realization of sign conditions [5]. Incomplete methods include a theory solver [16] based on virtual substitution [33], and techniques based on interval constraint propagation [32] proposed in [21, 29]. Despite the fact that local search has been successfully employed to solve SAT [25, 2, 12, 11, 6] and, recently, SMT on integer arithmetic [10], we are not aware of any local search algorithm for SMT on real arithmetic. In this paper, for the first time, we design a local search algorithm for SMT(RA), namely LS-RA, based on the following novel strategies. First, we propose the _interval-based_ operator to enhance the conventional local search operator by taking interval information into account. Specifically, we observe that assigning a real-valued variable to any value in a given interval would make the same amount of currently falsified clauses become satisfied. Hence, the _interval-based_ operator evaluates multiple values inside the interval as the potential value of the operation, rather than only assigning the variable to a fixed value (e.g. the threshold value to satisfy a constraint). Moreover, we observe that there frequently exist multiple operations with the same best score when performing local search, and thus a tie-breaking mechanism is proposed to further distinguish these operations. Experiments are conducted to evaluate LS-RA on 2 benchmarks, namely the SMT(LRA) and SMT(NRA) benchmarks from SMT-LIB. Note that unsatisfiable instances are excluded, and we only consider multilinear instances from the SMT(NRA) benchmark. We compare LS-RA with the top 4 SMT solvers according to SMT-COMP 2022, excluding the portfolio and derived solvers. Specifically, as for SMT(LRA), we compare LS-RA with OpenSMT, Yices2, CVC5 and Z3, while for SMT(NRA), the competitors are Z3, CVC5, Yices2 and SMT-RAT. Experimental results show that LS-RA is competitive and complementary with state-of-the-art SMT solvers, especially on multilinear instances. Moreover, the ablation experiment confirms the effectiveness of our proposed novel strategies. Note that multilinear instances are comparatively difficult for existing solvers. For example, Z3, perhaps the best solver for satisfiable SMT(NRA) instances, can solve 90.5% of QF_NRA instances, while it can only solve 77.5% of the multilinear instances from SMT-LIB. However, multilinear instances are suitable for local search, since without higher-order terms, the operation can be easily calculated. In section 2, preliminary knowledge is introduced.
In section 3, we propose a novel _interval-based operator_ to enrich the traditional operator by considering the interval information. In section 4, a _tie-breaking mechanism_ is proposed to distinguish multiple operations with the same best score. Based on the two novel strategies, our local search algorithm for SMT(RA) is proposed in section 5. Experimental results are presented in section 6. Conclusion and future work are given in section 7.

## 2 Preliminary

### Basic Definitions

A \(monomial\) is an expression of the form \(x_{1}^{e_{1}}...x_{m}^{e_{m}}\) where \(m>0\), \(x_{i}\) are variables and \(e_{i}\) are exponents, \(e_{i}>0\) for all \(i\in\{1...m\}\), and \(x_{i}\neq x_{j}\) for all \(i,j\in\{1...m\},i\neq j\). A monomial is linear if \(m=1\) and \(e_{1}=1\). A \(polynomial\) is a linear combination of monomials, that is, an arithmetic expression of the form \(\sum_{i}a_{i}m_{i}\) where \(a_{i}\) are coefficients and \(m_{i}\) are monomials. If all monomials in a polynomial are linear, so that the polynomial can be written as \(\sum_{i}a_{i}x_{i}\), then it is _linear_; otherwise it is _non-linear_. A special case of a non-linear polynomial is a \(multilinear\) polynomial, where the highest exponent of every variable is 1, indicating that each monomial is of the form \(x_{1}...x_{m}\).

Definition 1: The atomic constraints of the theory of real arithmetic are polynomial inequalities and equalities, in the form of \(\sum_{i}a_{i}m_{i}\bowtie k\), where \(\bowtie\in\{=,\leq,<,\geq,>\}\), \(m_{i}\) are monomials consisting of real-valued variables, and \(k\) and \(a_{i}\) are rational constants.

The formulas of the SMT problem on the theory of real arithmetic, denoted as SMT(RA), are Boolean combinations of atomic constraints and propositional variables, where the sets of real-valued variables and propositional variables are denoted as \(X\) and \(P\). The SMT problems on the theory of linear real arithmetic (LRA) and non-linear real arithmetic (NRA) are denoted as SMT(LRA) and SMT(NRA), respectively. As for NRA, this paper concerns one important fragment where the polynomials in atomic constraints are multilinear, denoted as _MRA_ in this paper.

Example 1: Let \(X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\) and \(P=\{p_{1},p_{2}\}\) denote the sets of real-valued and propositional variables, respectively. A typical SMT(LRA) formula \(F_{LRA}\) and SMT(MRA) formula \(F_{MRA}\) are shown as follows: \(F_{LRA}\): \((p_{1}\vee(x_{1}+2x_{2}\leq 2)\ )\wedge(p_{2}\vee(3x_{3}+4x_{4}=2)\vee(-x_{2}-x_{3}<3)\ )\) \(F_{MRA}\): \((p_{1}\vee(x_{1}x_{2}\leq 2)\ )\wedge(p_{2}\vee(3x_{3}x_{4}+4x_{4}=2)\vee(-x_{2}-x_{3}<3)\ )\)

In the theory of real arithmetic, a positive, infinitesimal real number is denoted as \(\delta\). A literal is an atomic constraint or a propositional variable, or their negation. A \(clause\) is the disjunction of a set of literals, and a formula in _conjunctive normal form_ (CNF) is the conjunction of a set of clauses. For an SMT(RA) formula \(F\), an assignment \(\alpha\) is a mapping \(X\to R\) and \(P\rightarrow\{false,true\}\), and \(\alpha(x)\) denotes the value of a variable \(x\) under \(\alpha\). A _complete assignment_ is a mapping which assigns to each variable a value. A literal is true if it evaluates to true under the given assignment, and false otherwise. A clause is \(satisfied\) if it has at least one true literal, and \(falsified\) if all literals in the clause are false.
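To make the definitions concrete, here is a minimal sketch (our own illustration, not the solver's implementation) of one way to encode and evaluate multilinear atomic constraints under an assignment; the tuple-based layout and names such as `literal_true` are assumptions of this sketch.

```python
# Hypothetical layout: a monomial is a tuple of variable names (multilinear, so
# every exponent is 1); an atomic constraint sum_i a_i*m_i ⋈ k is (coeffs, monos, op, k).
import math
import operator

OPS = {"=": operator.eq, "<=": operator.le, "<": operator.lt,
       ">=": operator.ge, ">": operator.gt}

def eval_poly(coeffs, monos, alpha):
    """Value of sum_i a_i * m_i under the assignment alpha (dict: var -> value)."""
    return sum(a * math.prod(alpha[v] for v in mono)
               for a, mono in zip(coeffs, monos))

def literal_true(coeffs, monos, op, k, alpha):
    """Truth value of the atomic constraint under alpha."""
    return OPS[op](eval_poly(coeffs, monos, alpha), k)

# The second literal of F_MRA's second clause: 3*x3*x4 + 4*x4 = 2
alpha = {"x3": 0.0, "x4": 0.5}
print(literal_true([3, 4], [("x3", "x4"), ("x4",)], "=", 2, alpha))  # True
```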
A complete assignment is a _solution_ to an SMT(RA) formula iff it satisfies all the clauses.

### Local Search

When local search is performed on the SMT problem, the search space is comprised of all complete assignments, each of which represents a candidate solution. Typically, a local search algorithm begins with a complete assignment and repeatedly updates it by modifying the values of variables in order to find a _solution_. Given a formula \(F\), the _cost_ of an assignment \(\alpha\), denoted as \(cost(\alpha)\), is the number of falsified clauses under \(\alpha\). In dynamic local search algorithms which use clause weighting techniques [31, 12], \(cost(\alpha)\) denotes the total weight of all falsified clauses under an assignment \(\alpha\). A key component of a local search algorithm is the _operator_, defining how to modify the current solution. When an operator is instantiated by specifying the variable to operate on and the value to assign, an _operation_ is obtained. The operation assigning variable \(x\) to value \(v\) is denoted as \(op(x,v)\). An operator for SMT on linear integer arithmetic proposed in [10] is defined as follows.

Definition 2: The critical move operator, denoted as \(cm(x,\ell)\), assigns an integer variable \(x\) to the threshold value making literal \(\ell\) true, where \(\ell\) is a falsified literal containing \(x\).

Local search algorithms usually choose an operation among candidate operations according to some scoring function. Given a formula and an assignment \(\alpha\), the most commonly used scoring function of an operation \(op\) is defined as \[score(op)=cost(\alpha)-cost(\alpha^{\prime})\] where \(\alpha^{\prime}\) is the resulting assignment obtained by applying \(op\) to \(\alpha\). An operation \(op\) is said to be \(decreasing\) if \(score(op)>0\). Another property used to evaluate an operation is the _make value_.

Definition 3: Given an operation \(op\), the make value of \(op\), denoted as \(make(op)\), is the number of falsified clauses that would become satisfied after performing \(op\).

## 3 Interval-based Operation

_Critical move_ satisfies falsified clauses by modifying one variable in a false literal to make it true. This operator can still be used in the context of SMT(RA), and it is also used in our algorithm. However, an issue of accuracy arises when applying the critical move operator in the context of real arithmetic: we need to calculate the threshold value for a literal to become true, but when solving a strict inequality, there is no threshold value. Instead, the value depends on what accuracy we intend to maintain. In this section, we propose an operator for SMT(RA), which considers the interval information and is more flexible than critical move.

### Satisfying Domain

An important fact about a linear or multilinear inequality over real-valued variables is that, when all variables but one in the inequality are fixed, there is a domain for the remaining variable such that assigning the variable to any value in the domain makes the inequality hold. Thus, given a falsified literal \(\ell\) in the form of an atomic constraint and a variable \(x\) in it, \(\ell\) can be satisfied by assigning \(x\) to any value in the corresponding domain, called the _Satisfying Domain_. For example, consider a literal \(\ell:(x-y>4)\) where the current assignment is \(\alpha=\{x=0,y=0\}\); then obviously assigning \(x\) to any value in \((4,+\infty)\) satisfies the inequality, and thus the _Satisfying Domain_ is \((4,+\infty)\).
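The computation behind a satisfying domain is a one-variable solve. The following sketch (ours, with hypothetical names) derives \(SD_{l}(x,\ell)\) for a literal of the form \(a\cdot x+r\bowtie k\), where \(r\) is the value of the remaining terms under the current assignment; for a multilinear literal, \(a\) is the coefficient multiplied by the current values of the other variables in \(x\)'s monomials. Open bounds are returned as flags rather than via the \(\delta\) notation.

```python
# Sketch of SD_l(x, l) for a literal a*x + r ⋈ k with a != 0; the bound
# representation (lo, hi, lo_open, hi_open) is an assumption of this sketch.
import math

def satisfying_domain(a, r, op, k):
    """Return the satisfying domain for x as (lo, hi, lo_open, hi_open)."""
    t = (k - r) / a                       # threshold value for x
    if op == "=":
        return (t, t, False, False)
    # flip the inequality direction when dividing by a negative coefficient
    if a < 0:
        op = {"<": ">", "<=": ">=", ">": "<", ">=": "<="}[op]
    if op in ("<", "<="):
        return (-math.inf, t, True, op == "<")
    return (t, math.inf, op == ">", True)

# Literal (x - y > 4) with alpha = {x: 0, y: 0}: a = 1, r = -y = 0
print(satisfying_domain(1, 0, ">", 4))    # (4.0, inf, True, True), i.e. (4, +inf)
```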
We further extend the definition of _Satisfying Domain_ to the clause level, defined as follows.

Definition 4: Given an assignment \(\alpha\), for a false literal \(\ell\) and a variable \(x\) appearing in \(\ell\), \(x\)'s **satisfying domain to literal \(\ell\)** is \(SD_{l}(x,\ell)=\{v\,|\,\ell\) becomes true if assigning \(x\) to \(v\}\); for a falsified clause \(c\) and a variable \(x\) in \(c\), \(x\)'s **satisfying domain to clause \(c\)** is \(SD_{c}(x,c)=\bigcup_{\ell\in c}SD_{l}(x,\ell)\).

\(SD_{c}(x,c)\) may contain \((-\infty,u]\), whose upper bound is defined as \(UB(x,c)=u\), or \([l,\infty)\), whose lower bound is defined as \(LB(x,c)=l\), or both kinds of intervals. For simplicity, the intervals \((-\infty,u^{\prime})\) and \((l^{\prime},\infty)\) are denoted as \((-\infty,u^{\prime}-\delta]\) and \([l^{\prime}+\delta,\infty)\), respectively.

Example 2: Given a clause \(c=\ell_{1}\vee\ell_{2}\vee\ell_{3}=(a-b>4)\vee(2a-b\geq 7)\vee(2a-c\leq-5)\) where the current assignment is \(\alpha=\{a=0,b=0,c=0\}\), for variable \(a\), the satisfying domains to the three literals are \(SD_{l}(a,\ell_{1})=[4+\delta,\infty)\), \(SD_{l}(a,\ell_{2})=[3.5,\infty)\) and \(SD_{l}(a,\ell_{3})=(-\infty,-2.5]\), respectively. The _Satisfying Domain_ to clause \(c\) is \(SD_{c}(a,c)=(-\infty,-2.5]\cup[3.5,\infty)\), and the corresponding upper bound and lower bound are \(UB(a,c)=-2.5\) and \(LB(a,c)=3.5\), respectively.

### Equi-make Intervals

Based on the variables' _satisfying domains_ to clauses, we observe that operations assigning the variable to any value in a given interval would satisfy the same amount of falsified clauses, that is, they have the same _make value_. This leads to a concept called the _equi-make interval_.

Definition 5: Given an SMT(RA) formula \(F\) and an assignment \(\alpha\) to its variables, for a variable \(x\), an **equi-make interval** is a maximal interval \(I\) such that all operations \(op(x,v)\) with \(v\in I\) have the same make value.

We can divide \((-\infty,+\infty)\) into several equi-make intervals w.r.t. a variable.

Example 3: Consider a formula \(F:c_{1}\wedge c_{2}\) where both clauses are falsified under the current assignment, and variable \(a\) appears in both clauses. Suppose \(SD_{c}(a,c_{1})=[3,+\infty)\) and \(SD_{c}(a,c_{2})=[5,+\infty)\); then we can divide \((-\infty,+\infty)\) into three intervals: \((-\infty,3)\), \([3,5)\) and \([5,+\infty)\). Operations assigning \(a\) to any value in \((-\infty,3)\) result in a make value of \(0\), those assigning \(a\) to a value in \([3,5)\) result in a make value of \(1\), while those corresponding to \([5,+\infty)\) result in a make value of \(2\).

Thus, we can enrich the traditional _critical move_ operator by considering the interval information. The intuition is to find the equi-make intervals, and then consider multiple values in such an interval as the options for the future value of operations, rather than only the threshold value. We focus on the variables appearing in at least one falsified clause. Here we describe a procedure to partition \((-\infty,+\infty)\) into equi-make intervals for such variables. * First, we go through the falsified clauses. For each falsified clause \(c\), we calculate for each real-valued variable \(x\) in \(c\) the corresponding _satisfying domain_ to \(c\), \(SD_{c}(x,c)\), as well as the upper bound \(UB(x,c)\) and lower bound \(LB(x,c)\) if they exist.
* Then, for each real-valued variable \(x\) appearing in falsified clauses, all its \(UBs\) are sorted in ascending order, while its \(LBs\) are sorted in descending order. After sorting, these bounds are labeled as \(UB^{1}(x),\ldots,UB^{n}(x)\) and \(LB^{1}(x),\ldots,LB^{m}(x)\), where \(UB^{n}(x)\) and \(LB^{m}(x)\) denote the maximum \(UB\) and the minimum \(LB\) for \(x\), respectively. For convenience in description, we denote \(UB^{0}(x)=-\infty\) and \(LB^{0}(x)=\infty\). These bounds are listed in order: \(UB^{0}(x)<UB^{1}(x)<\ldots<UB^{n}(x)<LB^{m}(x)<\ldots<LB^{1}(x)<LB^{0}(x)\). * Finally, for each variable \(x\), we obtain an interval partition \[IP(x)=\bigcup_{0<i\leq n}\{(UB^{i-1}(x),UB^{i}(x)]\}\cup(UB^{n}(x),LB^{m}(x))\cup\bigcup_{0<j\leq m}\{[LB^{j}(x),LB^{j-1}(x))\}\] Formally, given a real variable \(x\) and an interval \(I\) from \(IP(x)\), \(\forall v_{1},v_{2}\in I\), \(make(op(x,v_{1}))=make(op(x,v_{2}))\). As a slight abuse of notation, for an interval \(I\) from \(IP(x)\), we define its _make value_ as the make value of any operation \(op(x,v)\) with \(v\in I\). Note that all intervals in \(IP(x)\) have positive make values except \((UB^{n}(x),LB^{m}(x))\), whose make value is \(0\).

Example 4: Given a formula \(F:c_{1}\wedge c_{2}=(a-b>4\lor 2a-b\geq 7\lor 2a-c\leq-5)\wedge(a-c\geq 2\lor a-d\leq-1)\) where the current assignment is \(\alpha=\{a=0,b=0,c=0,d=0\}\), for variable \(a\), \(SD_{c}(a,c_{1})=(-\infty,-2.5]\cup[3.5,\infty)\) and \(SD_{c}(a,c_{2})=(-\infty,-1]\cup[2,\infty)\). Then, \(UB^{0}(a)=-\infty\), \(UB^{1}(a)=UB(a,c_{1})=-2.5\), \(UB^{2}(a)=UB(a,c_{2})=-1\), \(LB^{2}(a)=LB(a,c_{2})=2\), \(LB^{1}(a)=LB(a,c_{1})=3.5\) and \(LB^{0}(a)=\infty\). Therefore, the interval partition for \(a\) is \(IP(a)=I_{1}\cup I_{2}\cup I_{3}\cup I_{4}\cup I_{5}=(-\infty,-2.5]\cup(-2.5,-1]\cup(-1,2)\cup[2,3.5)\cup[3.5,+\infty)\), as shown in Fig. 1. For these intervals w.r.t. \(a\), the make values are \(2\), \(1\), \(0\), \(1\), \(2\), respectively.

Figure 1: Interval example

### Candidate Values for Operations

Since assigning a variable \(x\) to any value in an equi-make interval would satisfy the same amount of falsified clauses, after choosing an equi-make interval, we can consider more values in the interval as options for the future value of the operation, rather than only the threshold. In this work, we only consider the intervals with a positive make value, and thus the interval \((UB^{n},LB^{m})\) is omitted. Thus, the interval for consideration is of the form \((UB^{i-1}(x),UB^{i}(x)]\) or \([LB^{i}(x),LB^{i-1}(x))\). For such an interval, we consider the following values for the operation: * Assign \(x\) to the threshold \(UB^{i}(x)\) or \(LB^{i}(x)\). * Assign \(x\) to the median of the interval, that is, \((UB^{i-1}(x)+UB^{i}(x))/2\) or \((LB^{i}(x)+LB^{i-1}(x))/2\). * If there exist integers in the open interval \((UB^{i-1}(x),UB^{i}(x))\) or \((LB^{i}(x),LB^{i-1}(x))\), assign \(x\) to the largest or smallest integer in the respective open interval; otherwise, supposing that the open interval can be written as \((\frac{a}{b},\frac{c}{d})\), assign \(x\) to \(\frac{a+c}{b+d}\). The first option is the same as critical move, and thus _critical move_ can be regarded as a special case of our interval-based operator. The second option is a typical choice inside the interval. The third option aims to find a rational value in the open interval with a small denominator.
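The three options can be sketched as follows for an interval with bounds \(lo\) and \(hi\) from \(IP(x)\); exact rationals keep the mediant \(\frac{a+c}{b+d}\) meaningful. This is our own illustration (in particular, dispatching the largest/smallest-integer choice on the position of the threshold is an assumption of the sketch), not the paper's code.

```python
# Candidate values inside one interval of IP(x); Fractions keep denominators exact.
from fractions import Fraction
from math import floor, ceil

def candidate_values(lo: Fraction, hi: Fraction, threshold: Fraction):
    cands = [threshold]                     # option 1: the critical-move value
    cands.append((lo + hi) / 2)             # option 2: the median of the interval
    lo_int, hi_int = floor(lo) + 1, ceil(hi) - 1
    if lo_int <= hi_int:                    # option 3a: an integer strictly inside
        # largest integer for a UB-type interval (threshold at the upper end),
        # smallest integer for an LB-type interval (threshold at the lower end)
        cands.append(Fraction(hi_int if threshold == hi else lo_int))
    else:                                   # option 3b: the mediant of lo=a/b, hi=c/d
        cands.append(Fraction(lo.numerator + hi.numerator,
                              lo.denominator + hi.denominator))
    return cands

# Interval [2, 7/2) from Example 4, with threshold LB^2(a) = 2:
print(candidate_values(Fraction(2), Fraction(7, 2), Fraction(2)))
# [Fraction(2, 1), Fraction(11, 4), Fraction(3, 1)]
```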
## 4 A Tie-breaking Mechanism

We notice that there often exist different operations with the same best \(score\) during local search, and thus tie-breaking is also important to guide the search. To confirm our observation, we conduct a pre-experiment on 100 randomly selected instances. On each instance, we execute a simple local search algorithm which selects an operation with the best \(score\) for 10000 iterations, and we count the number of steps where \(k\) operations with the same best \(score\) are found, denoted as \(step(k)\). The average \(step(k)\) is presented in Fig. 2. The steps where more than one operation has the same best \(score\) take up 61.2% of the total steps. Thus, a tie-breaking heuristic is required to further distinguish these operations with the same best \(score\).

Figure 2: Average \(step(k)\) distribution

First, we consider that assigning a real-valued variable to values with large denominators can lead the algorithm to a more complex search space, resulting in more complicated computation and possible errors. Thus, we prefer operations that assign a variable to a value with a small denominator. Moreover, we consider that assigning a variable to a value with a large absolute value can lead to assignments with extraordinarily large values, deviating the algorithm from finding a possible solution. Thus, we prefer operations assigning a variable to a value with a small absolute value. Based on the above observations and intuition, we propose the selection rule for picking operations, described as follows. **Selection Rules**: Select the operation with the greatest \(score\), breaking ties by preferring the one assigning the corresponding variable to the value with the smallest denominator. Further ties are broken by picking the operation assigning the variable to the value with the smallest absolute value.

## 5 LS-RA Algorithm

Our local search algorithm adopts a two-mode framework, which switches between Real mode and Boolean mode. This framework has been used in the local search algorithm LS-LIA for integer arithmetic theories [10].

### Local Search Framework

As depicted in Fig. 3, after the initialization, the algorithm switches between Real mode and Boolean mode. In each mode, an operation on a variable of the corresponding data type is selected to modify the current solution. The two modes switch to each other when the number of non-improving steps (denoted as \(non\_improve\_steps\)) of the current mode reaches a threshold. The threshold is set to \(L\times P_{b}\) for the Boolean mode and \(L\times P_{r}\) for the Real mode, where \(P_{b}\) and \(P_{r}\) denote the proportion of Boolean and real-variable literals to all literals in falsified clauses, and \(L\) is a parameter.

Figure 3: An SMT Local Search Framework

### Local Search in Real Mode

As described in Algorithm 1, if the current assignment \(\alpha\) satisfies the given formula \(F\), then the solution is found (Line 2). The algorithm tries to find a decreasing _interval-based_ operation according to the **Selection Rules** (Lines 3-4). If no such decreasing operation exists, the algorithm has fallen into a local optimum. In that case, we first update the clause weights according to the probabilistic version of the PAWS scheme [31, 12] (Line 6), and then randomly sample \(K\) interval-based operations into the set \(Set_{op}\) (Line 7), where \(K\) is a relatively small parameter. The best operation is picked according to the **Selection Rules** among \(Set_{op}\) (Line 8).
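A compact way to realize the Selection Rules is a lexicographic key, as in the sketch below; the `(var, value, score)` triple layout is a hypothetical illustration, not LS-RA's actual data structure.

```python
# Pick the best operation by (score desc, denominator asc, |value| asc).
from fractions import Fraction

def pick_operation(ops):
    """ops: iterable of (var, value, score) triples with Fraction values."""
    return max(ops, key=lambda op: (op[2],                # greatest score first
                                    -op[1].denominator,   # then smallest denominator
                                    -abs(op[1])))         # then smallest |value|

ops = [("x", Fraction(7, 3), 2), ("y", Fraction(3), 2), ("y", Fraction(-5), 2)]
print(pick_operation(ops))  # ('y', Fraction(3, 1), 2): equal scores, ties broken
```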
Note that since an _interval-based_ operation can satisfy at least one clause, picking the best one among a few randomly sampled _interval-based_ operations can be regarded as a diversification operation. The probabilistic version of the PAWS scheme works as follows. With probability \(1-sp\), the weight of each falsified clause is increased by one, and with probability \(sp\), for each satisfied clause whose weight is greater than 1, the weight is decreased by one. As for the Boolean mode, focusing on the subformula consisting of Boolean variables, LS-RA works in the same way as the Boolean mode of LS-LIA. By putting the Boolean mode and the Real mode together, we obtain our local search algorithm for SMT(RA), denoted as LS-RA.

## 6 Experiments

We conducted experiments to evaluate LS-RA on 2 benchmarks from SMT-LIB, and compare it with state-of-the-art SMT solvers. Moreover, ablation experiments are conducted to analyze the effectiveness of the proposed strategies.

### Experiment Preliminaries

**Implementation:** LS-RA is implemented in C++ and compiled by g++ with the '-O3' option. There are 3 parameters in LS-RA: \(L\) for switching phases, \(K\) for the number of sampled operations and \(sp\) (the smoothing probability) for the PAWS scheme. The parameters are tuned according to suggestions from the literature and our preliminary experiments on 20% of sampled instances, and are set as follows: \(L=20\), \(K=3\), \(sp=0.0003\) for all benchmarks. **Competitors:** In the context of SMT(LRA), we compare LS-RA with the top 4 state-of-the-art SMT solvers according to SMT-COMP 2022 5, namely OpenSMT (version 2.4.2) 6, Yices2 (version 2.6.2) 7, CVC5 (version 1.0.0) 8, and Z3 (version 4.8.17) 9. In the context of SMT(NRA), the top 4 competitors are CVC5 (version 1.0.0), Yices2 (version 2.6.2), Z3 (version 4.8.17) and SMT-RAT-MCSAT (version 22.06) 10. The binaries of all competitors are downloaded from their websites. Note that portfolio and derived solvers are excluded. Footnote 5: [https://smt-comp.github.io/2022](https://smt-comp.github.io/2022) Footnote 6: [https://github.com/usi-verification-and-security/opensmt](https://github.com/usi-verification-and-security/opensmt) Footnote 7: [https://yices.csl.sri.com](https://yices.csl.sri.com) Footnote 8: [https://cvc5.github.io/](https://cvc5.github.io/) Footnote 9: [https://github.com/Z3Prover/z3/](https://github.com/Z3Prover/z3/) Footnote 10: [https://github.com/ths-rwth/smtrat](https://github.com/ths-rwth/smtrat) **Benchmarks:** Experiments are carried out on 2 benchmark sets from SMT-LIB. * SMTLIB-LRA: This benchmark set consists of SMT(LRA) instances from SMT-LIB11. As LS-RA is an incomplete solver, UNSAT instances are excluded, resulting in a benchmark consisting of 1044 unknown and satisfiable instances. Footnote 11: [https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_LRA](https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_LRA) * SMTLIB-MRA: This benchmark set consists of SMT(NRA) instances whose atomic constraints are multilinear from SMT-LIB12. UNSAT instances are also excluded, resulting in a benchmark consisting of 979 unknown and satisfiable instances. Footnote 12: [https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_NRA](https://clc-gitlab.cs.uiowa.edu:2443/SMT-LIB-benchmarks/QF_NRA) **Experiment Setup:** All experiments are carried out on a server with AMD EPYC 7763 64-Core 2.45GHz and 2048G RAM under the system Ubuntu 20.04.4.
Each solver is executed once per instance with a cutoff time of 1200 seconds (as in SMT-COMP) on both benchmarks, as they contain sufficient instances. We compare the number of instances where an algorithm finds a model ("#solved"), as well as the run time. The bold values in the tables mark the solver with the greatest "#solved".

### Results on SMTLIB-LRA Benchmark

As shown in Table 1, LS-RA can solve 900 out of 1044 instances, which is competitive but still cannot rival its competitors. We also present the run time comparison between LS-RA and each competitor on instances from SMTLIB-LRA in Fig 4, which shows that LS-RA is complementary to the competitors. One explanation for the fact that LS-RA cannot rival its CDCL(T) competitors is that the _simplex-based_ theory solver performs so efficiently that there is little room for improvement. Moreover, 54.5% of the instances in SMTLIB-LRA contain Boolean variables, while the Boolean mode of LS-RA is not good at exploiting the relations among Boolean variables, similar to LS-LIA.

### Results on SMTLIB-MRA Benchmark

LS-RA can solve more multilinear instances than all competitors on this benchmark (solving 891 out of 979 instances), as shown in Table 2. The time comparison between LS-RA and its competitors is shown in Fig 5, confirming that LS-RA is more efficient than all competitors. LS-RA works particularly well on instances of the "zankl" and "UltimateAutomizer" types, which are industrial instances generated in the context of software verification. On these types, LS-RA can solve all instances, outperforming all the competitors. Moreover, LS-RA can exclusively solve 13 instances of the "20170501-Heizmann" type, which implements a constraint-based synthesis of invariants [15]. The explanation for the superiority of LS-RA on SMTLIB-MRA is as follows. In contrast to LRA, the theory solver for NRA constraints requires complex calculation, which reduces the performance of these competitors, while LS-RA can trivially determine the operations in multilinear constraints, and thus can efficiently explore the search space.
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \#inst & CVC5 & Yices & Z3 & OpenSMT & LS-RA \\ \hline 2017-Heizmann & 8 & 4 & 3 & 4 & 4 & **7** \\ 2019-ezsmt & 84 & 61 & 61 & 53 & **62** & 35 \\ check & 1 & 1 & 1 & 1 & 1 & 1 \\ DTP-Scheduling & 91 & 91 & 91 & 91 & 91 & 91 \\ LassoRanker & 271 & 232 & **265** & 256 & 262 & 240 \\ latendresse & 16 & 9 & **12** & 1 & 10 & 0 \\ meti-tarski & 338 & 338 & 338 & 338 & 338 & 338 \\ miplib & 22 & 14 & **15** & **15** & **15** & 4 \\ sal & 11 & 11 & 11 & 11 & 11 & 11 \\ sc & 108 & 108 & 108 & 108 & 108 & 108 \\ TM & 24 & **24** & **24** & **24** & **24** & 11 \\ tropical-matrix & 10 & 1 & **6** & 4 & **6** & 0 \\ tta\_startup & 24 & 24 & 24 & 24 & 24 & 24 \\ uart & 36 & **36** & **36** & **36** & **36** & 30 \\ Total & 1044 & 954 & **995** & 966 & 992 & 900 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on instances from SMTLIB-LRA

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \#inst & CVC5 & Yices & Z3 & SMT-RAT & LS-RA \\ \hline 20170501-Heizmann & 51 & 1 & 0 & 4 & 0 & **17** \\ 20180501-Economics-Mulligan & 28 & 28 & 28 & 28 & 28 & 28 \\ 2019-ezsmt & 32 & 31 & **32** & **32** & 21 & 28 \\ 20220314-Uncu & 12 & 12 & 12 & 12 & 12 & 12 \\ LassoRanker & 347 & **312** & 124 & 199 & 0 & 297 \\ meti-tarski & 423 & 423 & 423 & 423 & 423 & 423 \\ UltimateAutomizer & 48 & 34 & 39 & 46 & 18 & **48** \\ zankl & 38 & 24 & 25 & 28 & 30 & **38** \\ Total & 979 & 865 & 683 & 772 & 532 & **891** \\ \hline \hline \end{tabular} \end{table} Table 2: Results on instances from SMTLIB-MRA

Figure 4: Run time comparison on instances from SMTLIB-LRA

Figure 5: Run time comparison on instances from SMTLIB-MRA

### Effectiveness of Proposed Strategies

In order to analyze the effectiveness of the strategies in LS-RA, we modify LS-RA to obtain 3 alternative versions as follows. * To analyze the effectiveness of the _interval-based_ operator, we modify LS-RA by replacing the interval-based operation with the traditional \(cm\) operator, leading to the version \(v\_cm\). * To analyze the effectiveness of the _tie-breaking mechanism_, we modify LS-RA by evaluating operations based on \(score\) without considering the **Selection Rules**, leading to the version \(v\_score\). * We also implement a plain version which adopts neither the _interval-based_ operator nor the _tie-breaking mechanism_, denoted as \(v\_plain\). LS-RA is compared with these modified versions on both benchmarks. The runtime distribution of LS-RA and its modified versions is presented in Fig. 6, which confirms the effectiveness of our proposed strategies.

## 7 Conclusion and Future Work

In this paper, we propose the first local search algorithm for SMT on real arithmetic, based on the following novel ideas. First, the _interval-based_ operation is proposed to enrich the traditional operation by considering the interval information. Moreover, a _tie-breaking_ mechanism is proposed to distinguish operations with the same best \(score\). Experiments on SMT-LIB show that our solver is competitive and complementary with state-of-the-art SMT solvers, especially on multilinear instances. The future direction is to extend LS-RA to support all SMT(NRA) instances and to deeply combine our local search algorithm with state-of-the-art DPLL(T) SMT solvers, resulting in a hybrid solver that can make the most of their respective advantages.

Figure 6: Run time distribution comparison
2302.07899
Revisiting oldest stars as cosmological probes: new constraints on the Hubble constant
Despite the tremendous advance of observational cosmology, the value of the Hubble constant ($H_0$) is still controversial (the so called ``Hubble tension'') because of the inconsistency between local/late-time measurements and those derived from the cosmic microwave background. As the age of the Universe is very sensitive to $H_0$, we explored whether the present-day oldest stars could place independent constraints on the Hubble constant. To this purpose, we selected from the literature the oldest objects (globular clusters, stars, white dwarfs, ultra-faint and dwarf spheroidal galaxies) with accurate age estimates. Adopting a conservative prior on their formation redshifts ($11 \leq z_{\rm f} \leq 30$) and assuming $\Omega_{\rm M} = 0.3 \pm 0.02$, we developed a method based on Bayesian statistics to estimate the Hubble constant. We selected the oldest objects ($>13.3$ Gyr) and estimated $H_0$ both for each of them individually and for the average ages of homogeneous subsamples. Statistical and systematic uncertainties were properly taken into account. The constraints based on individual ages indicate that $H_0<70.6$ km/s/Mpc when selecting the most accurate estimates. If the ages are averaged and analyzed independently for each subsample, the most stringent constraints imply $H_0<73.0$ with a probability of 90.3% and errors around 2.5 km/s/Mpc. We also constructed an ``accuracy matrix'' to assess how the constraints on $H_0$ become more stringent with further improvements in the accuracy of stellar ages and $\Omega_{\rm M}$. The results show the high potential of the oldest stars as independent and competitive cosmological probes not only limited to the Hubble constant.
Andrea Cimatti, Michele Moresco
2023-02-15T19:00:03Z
http://arxiv.org/abs/2302.07899v5
# Revisiting oldest stars as cosmological probes: new constraints on the Hubble constant ###### Abstract Despite the tremendous advance of observational cosmology, the value of the Hubble constant (\(H_{0}\)) is still controversial (the so called "Hubble tension") because of the inconsistency between local/late-time measurements and those derived from the cosmic microwave background. As the age of the Universe is very sensitive to \(H_{0}\), we explored whether the present-day oldest stars could place independent constraints on the Hubble constant. To this purpose, we selected from the literature the oldest objects (globular clusters, stars, white dwarfs, ultra-faint and dwarf spheroidal galaxies) with accurate age estimates. Adopting a conservative prior on their formation redshifts (\(11\leq z_{\rm f}\leq 30\)) and assuming \(\Omega_{\rm M}=0.3\pm 0.02\), we developed a method based on Bayesian statistics to estimate the Hubble constant. We selected the oldest objects (\(>13.3\) Gyr), and estimated \(H_{0}\) both for each of them individually and for the average ages of homogeneous subsamples. Statistical and systematic uncertainties were properly taken into account. The constraints based on individual ages indicate that \(H_{0}<70.6\) km/s/Mpc when selecting the most accurate estimates. If the ages are averaged and analyzed independently for each subsample, the most stringent constraints imply \(H_{0}<73.0\) with a probability of 93.2% and errors around 2.5 km/s/Mpc. We also constructed an "accuracy matrix" to assess how the constraints on \(H_{0}\) become more stringent with further improvements in the accuracy of stellar ages and \(\Omega_{\rm M}\). The results show the high potential of the oldest stars as independent and competitive cosmological probes. cosmology; observational cosmology; cosmological parameters; Hubble constant; stellar ages

Andrea Cimatti, Michele Moresco

## 1 Introduction

Our understanding of the Universe improved dramatically during the last century. However, in spite of the high precision achieved in observational cosmology, several fundamental questions remain open. For instance, the nature and origin of dark matter and dark energy are still unknown despite their major contribution to the total cosmic budget of matter and energy (Planck Collaboration et al., 2020). Another key question regards the present-day expansion rate (the Hubble constant, \(H_{0}\)), for which independent methods give inconsistent results, e.g. \(67.4\pm 0.5\) km/s/Mpc (Planck Collaboration et al., 2020, hereafter Planck2020) and \(73.04\pm 1.04\) km/s/Mpc (Riess et al., 2022, hereafter SH0ES), leading to the so called "Hubble tension" (Verde et al., 2019; Abdalla et al., 2022; Kamionkowski and Riess, 2022). Today, it is unknown whether the \(H_{0}\) discrepancy is a signal of "new physics" or the result of unaccounted systematic effects. Thus, before adventuring into the uncharted territory of new physics, it is essential to combine as many cosmological probes as possible in order to mitigate the unavoidable systematic uncertainties inherent to each of them (for a review, see Moresco et al., 2022). In this regard, stellar ages play a key role simply because the age of the Universe today cannot be younger than the age of the present-day oldest stars.
Historically, the ages of the oldest globular clusters appeared inconsistent with the mostly younger ages of the Universe allowed by the cosmological models in the 1980s and early 1990s (Krauss and Chaboyer, 2003, and references therein). This age crisis was rapidly solved with the discovery of the accelerated expansion, which implied an older Universe. More recently, stellar ages have been reconsidered as promising probes independent of the cosmological models (Bond et al., 2013; Jimenez et al., 2019; Valcin et al., 2020, 2021; Boylan-Kolchin and Weisz, 2021; Vagnozzi et al., 2022). As a matter of fact, age dating is based either on stellar physics and evolution (isochrone fitting) or on the abundance of radioactive elements (nucleochronometry) (Soderblom, 2010). The downside is that stellar ages are still affected by substantial systematic uncertainties (e.g., Chaboyer et al., 1995; Soderblom, 2010; Valcin et al., 2021). In particular, isochrone fitting relies on the assumption of a given theoretical stellar model and requires accurate estimates of metal abundance, absolute distance and dust reddening along the line of sight. Thus, although the age precision can be very high for a given set of assumptions (i.e. statistical errors can be very small), high accuracy is usually prevented by systematic errors. In nucleochronometry, an additional difficulty is the accurate derivation of the abundances of elements (e.g. U, Th) characterized by very weak, and often blended, absorption lines (e.g. Christlieb, 2016). The main aim of this paper is to revive and investigate the potential of the oldest stars as independent clocks to place new constraints on the Hubble constant.

## 2 Method

In a generic cosmological model, the Hubble constant \(H_{0}\) can be derived as: \[H_{0}=\frac{A}{t}\int_{0}^{z_{f}}\frac{dz}{(1+z)E(z)} \tag{1}\] where \(E(z)=H(z)/H_{0}\), \(t\) is the age of an object formed at redshift \(z_{f}\) and \(A=977.8\) allows to convert from Gyr (units of \(t\)) to km/s/Mpc (units of \(H_{0}\)). For \(z_{f}=\infty\), the age \(t\) converges to the age of the Universe \(t_{U}\). In a flat \(\Lambda\)CDM universe, Eq. 1 reduces to: \[H_{0}=\frac{A}{t}\int_{0}^{z_{f}}\frac{dz}{(1+z)\left[\Omega_{\rm M}(1+z)^{3}+(1-\Omega_{\rm M})\right]^{1/2}}\;. \tag{2}\] Based on Eq. 2, it is therefore possible to estimate \(H_{0}\) provided that \(\Omega_{\rm M}\), \(z_{f}\) and stellar ages are known. The sensitivity of this method is described in App. A.

## 3 The oldest stars in the present-day universe

The age of the Universe (\(t_{\rm U}\) at \(z=0\)) is very sensitive to \(H_{0}\). For instance, for \(\Omega_{\rm M}=0.3\) and \(\Omega_{\Lambda}=0.7\), the age of the Universe is \(t_{\rm U}\sim 14.1\) Gyr for \(H_{0}=67\) km/s/Mpc and \(t_{\rm U}\sim 12.9\) Gyr for \(H_{0}=73\) km/s/Mpc. From this example, it is clear that only the _oldest_ stars play a discriminant role in the context of the Hubble tension. With this motivation, we searched the literature for the oldest stars in the Milky Way and in the Local Group with ages estimated based on different methods and with a careful evaluation of systematic errors. * _Galactic globular clusters (GC)._ For our purpose, we focused on the most recent results of O'Malley et al. (2017), Brown et al. (2018), Oliveira et al. (2020), and Valcin et al. (2020) with state-of-the-art age dating and a careful assessment of the statistical and systematic uncertainties. The oldest GCs have ages \(\gtrsim 13.5\) Gyr with total errors (i.e.
combined statistical and systematic) from \(\sim 0.5\) Gyr to \(\gtrsim 1\) Gyr. * _Galactic individual stars._ Very old individual stars are reported in the literature. For instance, Schlaufman et al. (2018) estimated an age of 13.5 Gyr (with a systematic error \(\gtrsim 1\) Gyr) for an ultra metal-poor star belonging to a binary system. For HD 140283, an extremely metal-poor star in the solar neighborhood, an age of \(14.5\pm 0.8\) Gyr (including systematic uncertainties) was derived by Bond et al. (2013), although recent results suggest younger ages (Plotnikova et al., 2022). Recent works (based on Gaia parallaxes, sometimes with asteroseismology measurements and without adopting priors on the age of the Universe) found evidence of stars with ages \(\gtrsim 13.5\) Gyr (e.g., Montalban et al., 2021; Xiang and Rix, 2022; Plotnikova et al., 2022). * _White dwarfs (WD)._ If the distance, magnitude, color, and atmospheric type of a WD are known, its age can be derived based on the well-understood WD cooling curves and initial-final mass relations calibrated using star clusters. Fouesneau et al. (2019) exploited the Gaia parallaxes and reported ages as old as \(13.9\pm 0.8\) Gyr. The potential of WDs as chronometers has been recently highlighted by Moss et al. (2022). * _Nucleochronometry._ The relative abundances of nuclides with half-lives of several Gyr (e.g. U, Th, Eu) can be exploited as chronometers (Christlieb, 2016; Shah et al., 2023). However, its application requires reliable theoretical modeling of the rapid neutron capture (r-process) nucleosynthesis and spectroscopy with very high resolution and signal-to-noise ratio. To date, this method has been applied only to a few stars whose ages turned out to be as old as \(\approx 14\) Gyr, but with large errors of \(2-4\) Gyr. However, Wu et al. (2022) suggested that the uncertainties could be reduced down to \(\sim 0.3\) Gyr through the synchronization of different chronometers. * _Ultra-faint galaxies (UFDs) and dwarf spheroidals (dSph)._ UFDs in the Local Group have old stellar populations and may be the fossil remnants of systems formed in the reionization era. Brown et al. (2014) found that the oldest stars have ages in the range of \(13.7-14.1\) Gyr, with systematic uncertainties of \(\sim 1\) Gyr. Moreover, based on the reconstruction of their star formation histories, some dSph systems of the Local Group formed the bulk of their stars at \(z>14\), therefore implying ages \(>\)13.5 Gyr in the standard \(\Lambda\)CDM cosmology (e.g., Weisz et al., 2014; Simon et al., 2022). The above results are based on a variety of astrophysical objects, methods and independent studies, and show unambiguously that the most ancient stars in the present-day Universe are significantly older than 13 Gyr, but with uncertainties (dominated by systematic errors) from \(\sim 0.5\) to \(\gtrsim 1\) Gyr. Can such old ages place meaningful cosmological constraints? We recall the obvious fact that the age of an object at \(z=0\) provides only a lower limit to the current age of the Universe, as it remains unknown how much time it took for that object to form after the Big Bang: \[t_{\rm U}=\Delta t_{\rm f}+t_{\rm age} \tag{3}\] where \(t_{\rm U}\) is the age of the Universe, and \(\Delta t_{\rm f}\) is the time interval between the Big Bang and the formation of an object observed at \(z=0\) with an age \(t_{\rm age}\). Thus, if we measure \(t_{\rm age}\) for an object at \(z=0\), the main unknown remains only \(\Delta t_{\rm f}\).
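As a quick numerical check of Eq. 2, the following short sketch (ours, using scipy) evaluates \(H_{0}\) for a present-day age of 13.5 Gyr and a few formation redshifts; for \(\Omega_{\rm M}=0.3\) it gives roughly 68-69 km/s/Mpc, illustrating how weakly the result depends on \(z_{f}\) once \(z_{f}\gtrsim 11\).

```python
# Back-of-the-envelope evaluation of Eq. 2, assuming flat LambdaCDM.
import numpy as np
from scipy.integrate import quad

A = 977.8  # converts 1/Gyr to km/s/Mpc

def H0_from_age(t_gyr, z_f, omega_m=0.3):
    integrand = lambda z: 1.0 / ((1 + z) * np.sqrt(omega_m * (1 + z) ** 3
                                                   + (1 - omega_m)))
    integral, _ = quad(integrand, 0.0, z_f)
    return A * integral / t_gyr

for z_f in (11, 20, 30):
    print(f"z_f = {z_f:2d}: H0 = {H0_from_age(13.5, z_f):.1f} km/s/Mpc")
```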
In our work, we exploited the _oldest_ stars at \(z=0\) to maximize \(t_{\rm age}\) and minimize the relevance of \(\Delta t_{\rm f}\) with respect to the current age of the Universe. To this purpose, we decided to anchor \(\Delta t_{\rm f}\) to the redshifts (\(z\sim 11-13\)) of the most distant galaxies known based on spectroscopic identification (Curtis-Lake et al., 2022), although photometric candidates exist up to \(z\approx 18\) (Naidu et al., 2022). The uppermost redshift limit can be set by theoretical models that indicate \(20<z<30\) as the range for the formation of the very first stars (Galli and Palla, 2013). Thus, for our analysis (Sect. 2), we adopted \(11<z_{f}<30\) as a baseline. This corresponds to \(\Delta t_{\rm f}\approx 0.1-0.4\) Gyr after the Big Bang (\(H_{0}=70\) km/s/Mpc, \(\Omega_{\rm M}=0.3\), \(\Omega_{\Lambda}=0.7\)). We remark that this choice is the most conservative possible: should the oldest stars have formed at \(z<11\), their ages would imply an even older Universe and, in turn, a lower value of \(H_{0}\).

## 4 Constraining the Hubble Constant

For our analysis, we developed a code based on a Bayesian framework, with a log-likelihood defined as: \[\mathcal{L}({\rm age},\mathbf{p})=-0.5\sum_{i}\frac{({\rm age}_{i}-{\rm age}_{m}(\mathbf{p}))^{2}}{\sigma^{2}({\rm age}_{i})} \tag{4}\] where \({\rm age}_{i}\) and \(\sigma({\rm age}_{i})\) are the age and its error, \({\rm age}_{m}\) is the theoretical age from the model in Eq. 2, and \(\mathbf{p}\) are the parameters of the model. We adopted a flat \(\Lambda\)CDM cosmological model where the free parameters are (\(H_{0}\), \(\Omega_{\rm M}\), and \(z_{\rm f}\)). We sampled the posterior with a Monte Carlo Markov Chain approach using the affine-invariant ensemble sampler implemented in the public code emcee (Foreman-Mackey et al., 2013). While we decided to adopt flat priors on \(H_{0}=[50,100]\) and \(z_{\rm f}=[11,30]\), we chose to include a Gaussian prior on \(\Omega_{\rm M}\) because, as can be inferred from Eq. 2, there is a significant intrinsic degeneracy between the derived value of \(H_{0}\) and \(\Omega_{\rm M}\) that can hardly be broken from age data alone. Most importantly, in the framework of the current Hubble tension (see, e.g., Verde et al., 2019), we kept our analysis free from CMB-dependent priors by adopting \(\Omega_{\rm M}=0.3\pm 0.02\), obtained from the combination of several low-redshift results (Jimenez et al., 2019), and consistent with the latest BOSS+eBOSS clustering analysis (\(\Omega_{\rm M}=0.29^{+0.012}_{-0.014}\)) of Semenaite et al. (2022). In our analysis, we adopted 250 walkers with 1000 iterations each, discarding the first 300 points of the chain to exclude burn-in effects. In App. A, we estimated that our prior choice impacts the error on \(H_{0}\) with a maximum additional systematic error of \(\sigma_{syst,\ prior}(H_{0})=2.37\) km/s/Mpc, which is, however, highly conservative due to the stringent constraints on the priors coming from the current observational results.

## 5 From the oldest Ages to \(H_{0}\)

The method described in Sect. 4 was then applied to the observed data. In particular, we followed two different approaches.

### Individual ages

As a first step, we analyzed the individual age estimates of each object presented in Sect. 3, considering an age threshold \(>13.3\) Gyr to select the oldest objects.
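For concreteness, here is a minimal sketch of the Sect. 4 inference for a single measured age, assuming the flat and Gaussian priors quoted there; the function names and initialization below are our own assumptions, not the authors' released code.

```python
# Minimal emcee sketch of the Sect. 4 setup for one age measurement (13.6 +/- 0.5 Gyr).
import numpy as np
import emcee
from scipy.integrate import quad

A = 977.8

def age_model(H0, om, zf):
    """Inverted Eq. 2: model age in Gyr for an object formed at z_f."""
    integral, _ = quad(lambda z: 1 / ((1 + z) * np.sqrt(om * (1 + z) ** 3 + 1 - om)),
                       0, zf)
    return A * integral / H0

def log_prob(p, age, err):
    H0, om, zf = p
    if not (50 < H0 < 100 and 11 < zf < 30):    # flat priors on H0 and z_f
        return -np.inf
    lp = -0.5 * ((om - 0.30) / 0.02) ** 2       # Gaussian prior on Omega_M
    return lp - 0.5 * ((age - age_model(H0, om, zf)) / err) ** 2  # Eq. 4

ndim, nwalkers = 3, 250
p0 = np.column_stack([np.random.uniform(60, 80, nwalkers),
                      np.random.normal(0.30, 0.02, nwalkers),
                      np.random.uniform(12, 29, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(13.6, 0.5))
sampler.run_mcmc(p0, 1000)
H0_chain = sampler.get_chain(discard=300, flat=True)[:, 0]  # posterior on H0
```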
This value has been chosen to select at least one object from each sample, in order to preserve the variety of age dating results obtained with different methods and samples, thereby mitigating possible biases. In the context of the Hubble tension, this is a conservative choice because an older age threshold would have provided lower \(H_{0}\) values. Based on this approach, 38 objects older than 13.3 Gyr were selected, and \(H_{0}\) was estimated for each of them. Fig. 1 shows that \(H_{0}\lesssim 72\) km/s/Mpc for the majority of our data, with values typically in the range \(63<H_{0}\) [km/Mpc/s] \(<72\). By inspecting the posteriors, we find that the highest values of \(H_{0}\) are due to the largest uncertainties on the ages and the consequent asymmetric PDF (Fig. 4). The cases with \(\sigma_{H_{0}}/H_{0}>\)30% have a mean age error \(\sigma_{\rm age}=4\) Gyr, noticeably larger than the average of the entire sample (\(\sigma_{\rm age}=1.4\) Gyr). Instead, for the cases with \(\sigma_{H_{0}}/H_{0}<\)30%, \(<\)20% and \(<\)10%, the highest values of \(H_{0}\) are 74.4, 71.9 and 70.6 km/s/Mpc, respectively (see the histogram in Fig. 1). We also tested how \(H_{0}\) can be constrained with the individual oldest globular clusters that have the smallest age errors. For NGC 6362 (\(13.6\pm 0.5\) Gyr) (Oliveira et al., 2020) and NGC 6779 (\(14.9^{+0.5}_{-0.9}\) Gyr) (Valcin et al., 2020), we obtain \(68.5^{+2.9}_{-3.2}\) and \(63.1^{+3.9}_{-4.5}\) km/s/Mpc, respectively. Taken at face value, this exercise highlights the importance of the oldest objects in the context of the Hubble tension. However, two cases are clearly insufficient to place meaningful constraints. For this reason, we also follow another approach based on the average ages (next subsection).

### Average ages

In order to minimize the potential bias induced by the larger age errors and to obtain more stringent constraints on \(H_{0}\), we refined our analysis by averaging the age estimates (always keeping the oldest objects with ages \(>13.3\) Gyr). However, since each sample is characterized by its own systematic uncertainties, we could not average all data into a single age estimate.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \# of & Mean Age & \(H_{0}\) & \(P(H_{0}\geq H_{0,\rm Planck})\) & \(P(H_{0}\leq H_{0,\rm SH0ES})\) \\ Method & objects & [Gyr] & [km/s/Mpc] & & \\ \hline GC (Valcin et al., 2020) & 14 & 14.1\(\pm\)0.5 & \(66.28^{+2.93}_{-2.78}\) & 0.35 & 0.99 \\ GC (O’Malley et al., 2017) & 2 & 13.5\(\pm\)1 & \(69.74^{+5.8}_{-4.98}\) & 0.67 & 0.73 \\ GC (Oliveira et al., 2020) & 2 & 13.6\(\pm\)0.4 & \(68.65^{+2.55}_{-2.45}\) & 0.69 & 0.96 \\ GC (Brown et al., 2018) & 1 & 13.4\(\pm\)1.2 & \(70.51^{+7.16}_{-6.03}\) & 0.68 & 0.65 \\ UFD (Brown et al., 2014) & 1 & 13.9\(\pm\)1 & \(67.63^{+5.58}_{-4.7}\) & 0.52 & 0.85 \\ NC (Christlieb, 2016) & 4 & 14.2\(\pm\)1.5 & \(67.44^{+8.45}_{-6.95}\) & 0.50 & 0.77 \\ WD (Fouesneau et al., 2019) & 1 & 13.9\(\pm\)0.9 & \(67.46^{+4.65}_{-4.11}\) & 0.51 & 0.90 \\ Individual star (Schlaufman et al., 2018) & 1 & 13.5\(\pm\)1 & \(69.75^{+5.75}_{-5.02}\) & 0.67 & 0.73 \\ Individual star (Bond et al., 2013) & 1 & 14.5\(\pm\)0.8 & \(64.75^{+4.03}_{-3.61}\) & 0.24 & 0.99 \\ Very Metal Poor Stars (Plotnikova et al., 2022) & 11 & 14.7\(\pm\)0.6 & \(63.36^{+2.99}_{-2.73}\) & 0.08 & 0.999 \\ \hline \end{tabular} \end{table} Table 1: Constraints on the Hubble constant \(H_{0}\) based on the average ages of the objects older than 13.3 Gyr present in each of the 10 independent samples.
For each sample, we report the number of objects available, their mean age including statistical and systematic errors, and the estimated \(H_{0}\). The last two columns report the probability that the estimated \(H_{0}\) is, respectively, higher than the one obtained from Planck2020 (Planck Collaboration et al., 2020) and lower than the value from SH0ES (Riess et al., 2022).

Figure 1: _Left panel._ Constraints on \(H_{0}\) derived from the 38 oldest stellar ages (\(>13.3\) Gyr). The error bars include the statistical and systematic errors provided in each work. _Right panel._ The distribution of \(H_{0}\), color-coded according to its percentage error (red considering a percentage error on \(H_{0}>\)30%, orange for \(>\)20%, yellow for \(>\)10%).

Therefore, for each of the 10 different samples reported in Sect. 3, we estimated a mean age with an inverse-variance weighted average, adding a posteriori in quadrature the systematic error of each method. We analyzed these data with the same procedure described in Sect. 2, and the results are reported in Tab. 1. We found that \(63.4<H_{0}\) [km/Mpc/s] \(<70.8\), with errors around 2.5 km/s/Mpc in the best case and around 7 km/s/Mpc in the worst. If the systematic errors due to the choice of our priors (see App. A) are also added, the total uncertainties slightly increase to 3.6 and 7.4 km/s/Mpc. The results are presented in the framework of the Hubble tension showing, for each subsample, the average probability (weighted with the sample size) of each \(H_{0}\) to be larger than the Planck value (Planck Collaboration et al., 2020) or smaller than the SH0ES one (Riess et al., 2022). The results indicate an average probability of 93.2% for the Hubble constant to be \(H_{0}<73.0\) km/s/Mpc, with a minimum value of 65% and a maximum value of 99.9%. Instead, the average probability to have \(H_{0}>67.4\) km/s/Mpc is 34.5%, with a minimum value of 8% and a maximum value of 69%. If the conservative systematic error due to the choice of priors (App. A) is also added, the above average probabilities change only by a few percent. In Fig. 2, we compare our estimates with other \(H_{0}\) constraints from the literature, including a collection of \(H_{0}\) measurements obtained with late-Universe probes. All our results based on the oldest stellar ages indicate a statistical preference for a value of \(H_{0}\) smaller than the SH0ES constraint and more compatible with the Planck2020 results, even if the current error bars are still quite large and dominated by systematics.

## 6 Accuracy matrix and prospects

The results presented in the previous section show the high potential of the oldest stars as cosmological probes. The constraints on \(H_{0}\) can become more stringent with higher accuracy of stellar ages and \(\Omega_{\rm M}\). We used the workflow presented in the previous sections to construct a matrix that shows how the accuracy of \(H_{0}\) depends on the errors on stellar ages and \(\Omega_{\rm M}\). The uncertainty on the age in Fig. 3 is the total one, i.e. including statistical and systematic errors. First, we notice that the uncertainty on the age dominates the error budget of \(H_{0}\), whereas the uncertainty on \(\Omega_{\rm M}\) has a subdominant effect. The minimum uncertainty \(\sigma_{H_{0}}\sim\)2.5 km/s/Mpc currently attainable (darker square) is larger by a factor of 2-4 than the most accurate estimates of \(H_{0}\) (Planck Collaboration et al., 2020; Riess et al., 2022).
## 7 Summary and Outlook

The oldest stars in the present-day Universe play a key role as independent cosmological probes. In this work, we collected a sample of stellar objects for which state-of-the-art age estimates were available in the literature to revisit their potential to constrain the Hubble constant. The sample includes different types of objects (globular clusters, individual stars, white dwarfs, ultra-faint and dwarf spheroidal galaxies) whose ages were estimated with independent methods taking into account statistical and systematic uncertainties. The main results of this work can be summarized as follows.

* We built a Bayesian framework to constrain the Hubble constant exploiting the age of the oldest stars. We adopted a flat \(\Lambda\)CDM model, assuming a flat prior on the formation redshifts (\(11<z_{\rm f}<30\)) and a Gaussian prior on \(\Omega_{\rm M}=0.30\pm 0.02\) based on late-Universe probes independent of the CMB constraints. This prior choice has been estimated to affect our error estimate at most with a systematic error of \(\sigma_{syst,\;prior}(H_{0})=2.37\) km/s/Mpc, which is, however, highly conservative because the observational constraints significantly limit the actual range of priors.

* We selected 38 objects with ages older than 13.3 Gyr and, for each object, we estimated the Hubble constant. The distribution of \(H_{0}\) is concentrated in the range of \(63<H_{0}\) [km/s/Mpc] \(<72\), with a preference for low values of \(H_{0}\) if the most accurate estimates are selected. Although the current age uncertainties of individual objects do not allow stringent constraints on \(H_{0}\), the results clearly show the key role of the oldest objects as independent cosmological probes.
* If the ages are averaged and analyzed independently for each subsample, we derived more stringent constraints that imply a high probability (93.2% on average) of \(H_{0}\) being lower than the SH0ES value, and indicate that the ages of the oldest stars are more compatible with the Planck2020 estimate.

* We constructed an "accuracy matrix" to assess how the constraints on \(H_{0}\) can be tightened by increasing the accuracy of stellar ages and \(\Omega_{\rm M}\). Should the systematic errors on stellar ages be reduced to \(\lesssim 0.3-0.4\) Gyr, the accuracy of \(H_{0}\) would increase to \(\sim\)1-2 km/s/Mpc and become fully competitive with the other cosmological probes shown in Fig. 2.

The results presented in this work show the high potential and a bright future for the oldest stars as cosmological probes. Several improvements can be expected thanks to massive spectroscopic surveys of extremely/very metal-poor stars (e.g. PRISTINE, WEAVE) combined with the parallax information provided by _Gaia_. Moreover, spectroscopy with extremely large telescopes will allow us to apply nucleochronometry to larger samples and possibly to reduce the age uncertainties to \(\sim\)0.3 Gyr (Wu et al., 2022). In parallel, improved stellar evolution and white dwarf cooling models will likely reduce the systematic uncertainties on age dating. These advances will enhance the constraining power of the oldest stars in cosmology and their full exploitation in synergy with the forthcoming results expected from _Euclid_ and other survey telescopes.

Figure 3: The expected errors on \(H_{0}\) for the _reference case_ (see Sect. 4; age=13.5 Gyr and \(11<z_{f}<30\)) as a function of the uncertainty on \(\Omega_{\rm M}\) (x-axis) and on the age of the oldest stellar objects (y-axis). Given the pair \((err(\Omega_{\rm M}),err({\rm age}))\), the derived error on \(H_{0}\) (in km/s/Mpc) is shown in each square. The darker shaded square indicates the range of errors currently spanned in this paper. The lighter shaded square shows the improvement expected from the higher accuracies that could be reasonably obtained in the near future.

M.M. and A.C. acknowledge the grants ASI n.I/023/12/0 and ASI n.2018-23-HH.0. A.C. acknowledges the support from grant PRIN MIUR 2017 - 20173ML3WW_001. M.M. acknowledges support from MIUR, PRIN 2017 (grant 20179ZF5KS). _Software:_ emcee (Foreman-Mackey et al., 2013), ChainConsumer (Hinton, 2016), Matplotlib (Hunter, 2007), Numpy (Harris et al., 2020), plot1d ([https://github.com/Pablo-Lemos/plot1d](https://github.com/Pablo-Lemos/plot1d)).
2303.01119
Mining the quantum vacuum: quantum tunnelling and particle creation
Particle production from the vacuum is a remarkable aspect of particle physics. Prime examples are the Schwinger process of particle production in strong electric fields and the Hawking process of particle production from black holes. These processes can be viewed as quantum tunnelling of particles from the vacuum. The tunnelling approach, and the closely related instanton or complex path approaches, are reviewed here with emphasis on paths in the complex coordinate plane. The method is applied to particle production from a black hole in a magnetic field, where ultra-high energy charged particles are produced.
Ian G. Moss, Piotr Z. Stasiak
2023-03-02T10:01:47Z
http://arxiv.org/abs/2303.01119v1
# Mining the quantum vacuum: quantum tunnelling and particle creation ###### Abstract Particle production from the vacuum is a remarkable aspect of particle physics. Prime examples are the Schwinger process of particle production in strong electric fields and the Hawking process of particle production from black holes. These processes can be viewed as quantum tunnelling of particles from the vacuum. The tunnelling approach, and the closely related instanton or complex path approaches, are reviewed here with emphasis on paths in the complex coordinate plane. The method is applied to particle production from a black hole in a magnetic field, where ultra-high energy charged particles are produced. ## 1 Introduction The quantum vacuum is alive with virtual particles that only emerge into reality in extreme conditions near black holes or in powerful external fields. This particle creation can be described using various techniques, but the one we focus on here is quantum tunnelling from the vacuum. Each methodology has its own strengths, but there are situations where the tunnelling approach is especially useful. One particular application where this is the case is the production of particles from a magnetic black hole. The tunnelling approach is influenced by an early description of particle production from black holes that appeared in the work of Hartle and Hawking [1]. They suggested that the amplitude for particle production could be related to a particle path from the future singularity to the black hole exterior, as in Fig. 1. There is no such classical path, but in the analysis, they used analytic continuation of the time coordinate to show that the probabilities \(P\) of particle production and absorption for a Schwarzschild black hole were related by \[P(\text{particle~{}emission})=e^{-\beta E}P(\text{particle~{}absorption}) \tag{1}\] where \(\beta\) is the inverse Hawking temperature. This relation is enough to guarantee that the black hole can be in equilibrium with a heat bath at the Hawking temperature. Hartle and Hawking also extended their relation to charged and rotating black holes. In the period since their pioneering work, analytic continuation has been used to deliver more detailed information about the particle production rate beyond the simple relation Eq. (1), for example with charged black holes [2, 3, 4]. The approach is often employed when a quantum field theory approach is problematic, for example for back reaction problems [5] and for problems with extremal horizons [6]. The combination of quantum tunnelling and particle pair creation actually preceded the theory of black hole pair production, first introduced in the context of alternating electric fields [7], and later developed into a fully consistent theory of the Schwinger process [8, 9, 10, 11]. We aim to show that these situations have features in common that make it reasonable to refer to them all as quantum tunnelling phenomena. Astrophysical applications of vacuum breakdown are somewhat restricted. A rotating black hole with a magnetic field of around \(10^{13}\)G could in principle induce the electric field strengths of \(1.3\times 10^{18}\)Vm\({}^{-1}\) needed for electron pair creation. Holes like this may arise from the collapse of a magnetar to form a black hole, for example [12]. However, such systems would be scenes of complex astrophysical phenomena, and secondary pair production processes from high energy synchrotron photons \(\gamma\to e^{+}e^{-}\) would likely be prevalent.
Nevertheless, the vacuum production process would generate currents near the horizon and it may be important to include these in fluid simulations. Simple estimates of the vacuum breakdown near a black hole can easily be found by taking the pair creation rates in flat space using the local electric field value in some suitably chosen reference frame [2]. Here we shall improve on this simple approach and include the effects of curvature on the particle production. The wave equations for a charged particle around a magnetic rotating hole are not separable, but the quantum tunnelling approach proves invaluable. It turns out that the flat space effect overestimates the particle production. We shall also be able to determine the dynamical parameters of the electrons that are produced and examine their trajectories in some detail. The first sections of this paper aim to clarify some of the aspects of particle production using instantons. In particular, we explore the difference between an instanton that describes vacuum breakdown and an instanton that describes Hawking radiation from an event horizon. We shall also make extensive use of Hamiltonian methods and contours in the complex coordinate plane, whose importance for particle production was extensively studied by Srinivasan and Padmanabhan [13]. This paper uses a small modification of SI units in which the distance unit is chosen so that the velocity of light \(c=1\). ## 2 The instanton approach to quantum tunnelling We start with a review of the instanton approach to quantum tunnelling through a potential barrier, in order to bring out some of the features that will be important later on. We will introduce Hamilton's principle function and see how this replaces the usual action, and we will emphasise the roles of branch cuts in the complex coordinate plane. In the simplest situation, a particle tunnels from a localised initial state. The particle is prepared at time \(t=0\) 'inside' the barrier, i.e. to the left of the maximum of the potential \(V\) shown in figure 2. The probability of finding the particle inside the barrier decays exponentially with a rate \(\Gamma\), which we identify as the vacuum decay rate. A simple analysis of the decay rate using the WKB approximation to the Schrodinger equation gives \[\Gamma\approx\frac{\omega}{2\pi}\exp\left\{-\frac{2}{\hbar}\int_{a}^{b}\left\{ 2m(V-E)\right\}^{1/2}dx\right\} \tag{2}\] where \(\omega^{2}=V^{\prime\prime}(0)/m\) and \(E=(n+\frac{1}{2})\hbar\omega\) for some integer \(n\). Banks and Bender [14] demonstrated (in a more general context) that the exponent in the decay rate could be obtained from a classical trajectory \(x_{b}(t_{I})\) with imaginary time \(t_{I}=it\). Figure 1: Particle production on a black hole spacetime [1]. The amplitude for particle production at the point \(C\) can be related to paths \(BAC\) and \(AD\) by analytic continuation. The trajectory, or instanton, runs from \(x=a\) to \(x=b\) in figure 2 and back to \(x=a\). Consider the classical action \[S[x]=\int\left\{\frac{m}{2}\left(\frac{dx}{dt}\right)^{2}-V\right\}dt. \tag{3}\] Switching to imaginary time, \[S[x]=i\int\left\{\frac{m}{2}\left(\frac{dx}{dt_{I}}\right)^{2}+V\right\}dt_{I}. \tag{4}\] Note that, along the instanton trajectory, \[\frac{m}{2}\left(\frac{dx_{b}}{dt_{I}}\right)^{2}-V=-E. \tag{5}\] It is now possible to relate the exponent in the tunnelling rate to the instanton solution.
First, we introduce Hamilton's principle function \(W\), \[W[x_{b}]=S[x_{b}]+E\int_{\mathcal{C}}dt, \tag{6}\] where the contour \(\mathcal{C}\) goes around the path in imaginary time. From Eq. (5), this can be simplified to \[W[x_{b}]=2i\int_{\mathcal{C}}\left(V-E\right)dt_{I}=2i\int_{a}^{b}\left\{2m(V-E)\right\}^{1/2}dx. \tag{7}\] Comparing with the WKB result (2) gives an important relation between the tunnelling rate and the principle function, \[\Gamma\approx\frac{\omega}{2\pi}\exp\left\{-\frac{W_{I}[x_{b}]}{\hbar}\right\}, \tag{8}\] where \(W_{I}=\operatorname{Im}W\). We could stop at this point, but if \(V(x)\) is an analytic function, then \(W[x_{b}]\) can also be expressed as a contour integral in the complex \(x\) plane, \[W[x_{b}]=\int_{C}\left\{2m(E-V)\right\}^{1/2}dx. \tag{9}\] In this form, we can distort the contour of integration as long as it goes exactly once around the branch cut in the integrand. We shall show later that branch cuts and singularities in the complex coordinate plane play an important role in distinguishing different types of quantum process. Figure 2: A simple scenario for quantum tunnelling. The decay rate is dominated by the WKB approximation with energy \(E\) given by an harmonic oscillator state to the left of the barrier. The exponential factor can be expressed as a contour integral around the contour \(C\). It is useful at this point to compare the result to the theory of vacuum decay [15]. Suppose we take Eq. (8) and expand in powers of \(E/V\). We find \[\Gamma\approx A\left(\frac{S_{I}[x_{b}]}{2\pi}\right)^{1/2}\exp\left\{-\frac{S_{I}[x_{b}]}{\hbar}\right\}, \tag{10}\] where \(S_{I}=\operatorname{Im}S\) and the factor \(A\) depends on the detailed shape of the potential. If we approach the same problem as a vacuum decay problem, we obtain the same result with the factor \(A\) determined by an operator determinant. Although the two approaches are similar, we note there are important differences. The result using the function \(W\) does not assume \(E/V\) is small and gives a simpler expression for the factor in front of the exponential when we have a finite number of degrees of freedom. In most of the applications considered below we have some ignorable coordinates. As an example, suppose in the quantum tunnelling problem there are two extra spatial dimensions \(y\) and \(z\), but the potential depends only on \(x\). The wave function factorises, and the WKB analysis of the tunnelling rate at fixed values of the momenta \(p_{y}\) and \(p_{z}\) is the same as the one dimensional case. The formula (8) is still valid provided we modify the definition of the principle function to remove the ignorable coordinates, \[W=S+Et-yp_{y}-zp_{z}. \tag{11}\] A similar correction should be applied and the modified principle function used whenever there are conserved momenta.
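As a quick numerical check on Eqs. (2) and (8), the following sketch (ours, with an invented cubic potential and units \(\hbar=m=1\)) evaluates the barrier integral directly.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy units with hbar = m = 1; an invented cubic potential with a well at x = 0
hbar, m = 1.0, 1.0
V = lambda x: 0.5 * x**2 - 0.1 * x**3   # V''(0) = 1, barrier top ~1.85 at x = 10/3
omega = 1.0                             # omega^2 = V''(0)/m
E = 0.5 * hbar * omega                  # ground-state energy in the well (n = 0)

# Turning points a < b where V(x) = E on either side of the barrier
a = brentq(lambda x: V(x) - E, 1.0, 2.0)
b = brentq(lambda x: V(x) - E, 4.0, 5.0)

# W_I = 2 int_a^b sqrt(2m(V-E)) dx, so that Gamma ~ (omega/2pi) exp(-W_I/hbar)
W_I = 2.0 * quad(lambda x: np.sqrt(max(2 * m * (V(x) - E), 0.0)), a, b)[0]
print(f"a = {a:.3f}, b = {b:.3f}, W_I = {W_I:.3f}")
print(f"Gamma ~ {omega / (2 * np.pi) * np.exp(-W_I / hbar):.3e}")
```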
## 3 The Schwinger process The Schwinger process is the pair creation of charged particles, usually electron-positron pairs, in an electric field. Schwinger's original discussion used heat-kernel methods and gave an early example of a non-perturbative result in quantum field theory. We shall review the tunnelling approach to the Schwinger process with the aim of obtaining some general rules for the tunnelling instanton. Consider a particle with mass \(m\) and charge \(e\). The particle world-line \(x^{\mu}(\tau)\) is parameterised by proper time \(\tau\). The action can be expressed in Hamiltonian form with momenta \(p_{\mu}\), \[S=\int\left(\dot{x}^{\mu}p_{\mu}-H\right)d\tau. \tag{12}\] Given the metric \(g_{\mu\nu}\), the vector potential \(A_{\mu}\) and charge \(e\), \[H=\frac{1}{2m}g^{\mu\nu}(p_{\mu}+eA_{\mu})(p_{\nu}+eA_{\nu})+\frac{m}{2}. \tag{13}\] Figure 3: The instanton for the Schwinger process and Vilenkin’s ‘ex nihilo’ version. We take flat spacetime with a constant electric field \(\mathcal{E}\) in the \(x\) direction, associated with a potential \(A_{t}=\mathcal{E}x\). The resulting Hamiltonian is \[H=-\frac{1}{2m}\left(p_{t}+e\mathcal{E}x\right)^{2}+\frac{1}{2m}p_{x}^{2}+\frac {1}{2m}p_{y}^{2}+\frac{1}{2m}p_{z}^{2}+\frac{m}{2}. \tag{14}\] Normalisation of the four-velocity \(\dot{x}^{\mu}\) imposes a constraint \(H=0\) on the Hamiltonian. Furthermore, ignorable coordinates \(t\), \(y\) and \(z\) imply that the energy \(E=-p_{t}\) and momenta \(p_{y}\), \(p_{z}\) are conserved. With these restrictions, the modified principle function (11) reduces to \[W=S+Et-yp_{y}-zp_{z}=\int p_{x}dx \tag{15}\] For convenience, we introduce a new parameter \(x_{0}\) related to the energy by \(E=e\mathcal{E}x_{0}\), then the constraint \(H=0\) implies \[p_{x}^{2}=\left(e\mathcal{E}\right)^{2}\left(x-x_{0}\right)^{2}-m^{2}-p_{\perp}^{2}, \tag{16}\] where \(p_{\perp}\) is the momentum perpendicular to the \(x\) direction. Note that, for real values of position \(x\), we take the positive square root for \(p_{x}\). In the tunnelling approach, we evaluate the tunnelling exponent Eq. (15) for a solution of the equations of motion that runs along a closed complex contour \(\mathcal{C}\) in the complex \(x\) plane. The tunnelling exponent using Eq. (16) is \[W=\int_{\mathcal{C}}\left(\left(e\mathcal{E}\right)^{2}\left(x-x_{0}\right)^{2}-m^{2}-p_{\perp}^{2}\right)^{1/2}dx. \tag{17}\] The integrand has a branch cut between \(x_{0}\pm\kappa\), where \(\kappa=(m^{2}+p_{\perp}^{2})^{1/2}/|e\mathcal{E}|\). In order to find a suitable integration contour we start from the general solution to the equations of motion in real time, \[x-x_{0}=\kappa\cosh\frac{\tau}{\kappa}, \tag{18}\] \[t-t_{0}=\kappa\sinh\frac{\tau}{\kappa}. \tag{19}\] Consider the complex contour \[\tau=-i\kappa\phi+\epsilon, \tag{20}\] where the real parameter \(\phi\) lies on a circle and an \(i\epsilon\) prescription has been used to avoid the branch cut in Eq. (17). In the plane with axes \(\mathrm{Re}(x)\) and \(\mathrm{Im}(t)\), the contour is a circle, as shown in Fig. 3. The direction has been chosen so that the principal value of the square root will result in a positive imaginary part for the integral. If we use the negative root in Eq. (17) then we take a counter-clockwise contour. An interesting interpretation of the instanton has been suggested by Vilenkin [16]. Combining the bottom half of the instanton with the real-time evolution of the particle worldlines for \(t>0\) produces the picture on the right. From the point of view of an observer in real time, the electron-positron pair suddenly appears as if we have 'creation from nothing'. Strange behaviour should be expected when we try to interpret a quantum phenomenon in purely classical terms. The first diagram in Fig. 4 shows how the instanton contour goes around the branch cut in the complex \(x\) plane. Note that any contour which circles the branch cut clockwise exactly once gives the same value of the tunnelling rate, so that the only ambiguity in the result lies in the winding number of the contour.
The second diagram in Fig. 4 shows the contour in the complex \(t\) plane. In this picture the tunnelling contour can be split into a particle line and an antiparticle line. Each line contributes half of the closed instanton path, and an instanton with winding number \(-n\) would represent the production of \(n\) particle-antiparticle pairs. Integrating (17) along the contour around the branch cut gives the tunnelling exponent, \[\frac{W_{I}}{\hbar}=\frac{\pi m^{2}}{|e{\cal E}|\hbar}+\frac{\pi p_{\perp}^{2}}{|e{\cal E}|\hbar}. \tag{21}\] The prefactor for the tunnelling rate in the barrier penetration case was \((V^{\prime\prime}/2\pi m)^{1/2}\), and we will divide this by the Compton wavelength \(\hbar/mc\) to get the correct dimensions. Putting in a phase space factor in addition gives an estimate for the particle production \(d\Gamma\) with transverse momentum \(p_{\perp}\), \[d\Gamma=\frac{|e{\cal E}|}{2\pi\hbar}\left(\frac{dp_{\perp}}{2\pi\hbar}\right)^{2}e^{-W_{I}/\hbar} \tag{22}\] After integrating the particle production rate \(d\Gamma\) over the transverse momenta, we obtain the correct formula for the particle production rate \(\Gamma\) per unit volume [9], \[\Gamma=\frac{1}{\pi}\left(\frac{e{\cal E}}{2\pi\hbar}\right)^{2}e^{-\pi m^{2}/\hbar|e{\cal E}|} \tag{23}\] An exponent \(\pi m^{2}/\hbar|e{\cal E}|\approx 1\) for electrons corresponds to an electric field strength of \(4.157\times 10^{18}\,\mathrm{V}\mathrm{m}^{-1}\). Pair production is heavily suppressed for smaller field strengths. On the other hand, ordinary perturbation theory can be used to describe pair production for larger field strengths. The result is therefore only useful over a limited range of field strengths. In conclusion, the Schwinger process is represented by a closed contour around a branch cut in the complex coordinate plane. Two halves of the contour with single winding number in the complex time plane represent production of a particle and an antiparticle.
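For orientation, the following sketch (ours; SI units with the factors of \(c\) restored explicitly, sample field values only) evaluates the exponent and rate of Eqs. (21) and (23) for electrons at \(p_{\perp}=0\).

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
m_e = 9.1093837015e-31   # kg
e = 1.602176634e-19      # C

def schwinger_exponent(E_field):
    # Exponent in Eq. (21) at p_perp = 0, with c restored:
    # W_I / hbar = pi m^2 c^3 / (e hbar E)
    return np.pi * m_e**2 * c**3 / (e * hbar * E_field)

def pair_rate(E_field):
    # Eq. (23) with c restored: Gamma = (eE)^2 / (4 pi^3 hbar^2 c) exp(-exponent),
    # in pairs per cubic metre per second
    return (e * E_field) ** 2 / (4 * np.pi**3 * hbar**2 * c) \
        * np.exp(-schwinger_exponent(E_field))

# The exponent equals one at the field strength quoted in the text
print(f"exponent = 1 at E = {np.pi * m_e**2 * c**3 / (e * hbar):.3e} V/m")
for E_field in (1e17, 1e18, 4.157e18):
    print(f"E = {E_field:.2e} V/m: exponent = {schwinger_exponent(E_field):.1f}, "
          f"rate = {pair_rate(E_field):.3e} m^-3 s^-1")
```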
### The thermal Schwinger process The production of particles in an electric field at finite temperature gives another application of the tunnelling approach [17, 18, 19]. Thermal tunnelling rates in quantum mechanics are related to the imaginary part of the free energy [20]. In the path integral approach, we find the free energy by imposing a periodicity \(\beta=\hbar/k_{B}T\) on the action in imaginary time. We do the same for calculating the particle creation rate. As before, the main focus here will be on the choice of contour for the instanton approximation. In the Schwinger process, the periodicity in imaginary time cuts off the top and bottom of the circular instanton as shown in figure 5. The contour would be continuous on the periodic manifold, but not differentiable. In order to obtain a differentiable contour we move the left and right segments together as on the right side of figure 5. Figure 4: Alternative views of the instanton for the Schwinger process. The complex space plane (left) and the complex time plane (right). This adjustment is essential for obtaining the correct value of the instanton action. The right segment is centred at \(x_{0}\) and the left segment at \(x_{1}\). The corresponding integrals are denoted by \(W_{R}\) and \(W_{L}\), and evaluated using the angle \(\phi=i\tau/\kappa\) as independent variable in Eqs. (17) and (18). The contributions are \[W_{R}=im\kappa\int_{-\phi_{0}}^{\phi_{0}}\sin^{2}\phi\,d\phi-imx_{0}\int_{-\phi_{0}}^{\phi_{0}}\cos\phi\,d\phi \tag{24}\] \[W_{L}=im\kappa\int_{-\phi_{0}}^{\pi+\phi_{0}}\sin^{2}\phi\,d\phi-imx_{1}\int_{-\phi_{0}}^{\pi+\phi_{0}}\cos\phi\,d\phi, \tag{25}\] where \(\sin\phi_{0}=\beta/2\kappa\). The final result is independent of \(x_{0}\) and \(x_{1}\) because of the identity \(x_{0}-x_{1}=2\kappa\cos\phi_{0}\). The total function \(W=W_{R}+W_{L}\) is \[W=2i\kappa m\left\{\phi_{0}+\frac{1}{2}\sin 2\phi_{0}\right\} \tag{26}\] The tunnelling exponent [17] is \[\frac{W_{I}}{\hbar}=\frac{2\kappa m}{\hbar}\left\{\arcsin\left(\frac{\beta}{2\kappa}\right)+\frac{\beta}{2\kappa}\left[1-\frac{\beta^{2}}{4\kappa^{2}}\right]^{1/2}\right\}, \tag{27}\] where \(\kappa=(m^{2}+p_{\perp}^{2})^{1/2}/|e\mathcal{E}|\). This reproduces the Schwinger result (21) in the zero temperature limit. In the high temperature limit, \(W_{I}/\hbar\to 2\beta m\), which represents the probability of finding a particle-antiparticle pair at high temperature. In future, whenever we see \(W_{I}/\hbar\to\beta E\) we will interpret this as a signal of thermal particle production at temperature \(k_{B}T=\hbar/\beta\). ## 4 The Fulling-Davies-Unruh effect The next example is the detection of thermal particles by an accelerating detector. We take the detector to be at rest in a two-dimensional accelerating frame with acceleration \(g\). We shall review the tunnelling description to see what features of tunnelling instantons are typical of thermal particle production. The accelerating frame is associated with a set of Rindler coordinates \((x,t)\), and metric \[ds^{2}=-g^{2}x^{2}dt^{2}+dx^{2}. \tag{28}\] The Hamiltonian (13) for particle motion \(x(\tau)\) and \(t(\tau)\) is \[H=-\frac{1}{2m}\frac{p_{t}^{2}}{g^{2}x^{2}}+\frac{1}{2m}p_{x}^{2}+\frac{m}{2}. \tag{29}\] Figure 5: The instanton for the thermal Schwinger process. Pasting the circular instanton on to the periodic manifold gives a non-differentiable path (left). Moving the pieces together gives the differentiable path (right). The energy \(E=-p_{t}\) is conserved and the Hamiltonian is constrained to \(H=0\). As before, the tunnelling is related to the principle function integrated around a closed contour, \[W=\int_{C}p_{x}dx. \tag{30}\] Using the Hamiltonian constraint, \[W=\int_{C}\left(\frac{E^{2}}{g^{2}}-m^{2}x^{2}\right)^{1/2}\frac{dx}{x} \tag{31}\] To investigate the integration contour, we take the general solution to the equations of motion, \[x=\left(\frac{E^{2}}{m^{2}g^{2}}-\tau^{2}\right)^{1/2}, \tag{32}\] \[t-t_{0}=\frac{1}{g}\log\left(\frac{E+mg\tau}{E-mg\tau}\right)^{1/2}. \tag{33}\] Consider the proper-time contour \[\tau=-\frac{E}{mg}+\epsilon e^{2i\phi}, \tag{34}\] where \(\phi\) lies on the circle. This gives a circular contour in the complex \(x\) plane around the horizon \(x=0\). Any closed contour which goes around the horizon singularity once will give the same value for the tunnelling exponent. In the complex \(t\) plane, the contour goes between \(t_{0}\pm 2\pi i/g\). The metric is regular and _the contour is closed if we impose periodicity of the metric in imaginary time_. Integrating around the singularity using the residue theorem gives the exponent \[\frac{W_{I}}{\hbar}=\frac{2\pi E}{\hbar g}. \tag{35}\] This has the thermal interpretation \(W_{I}/\hbar=E/k_{B}T\) as in relation (1), where the Unruh temperature \[T=\frac{\hbar g}{2\pi k_{B}}. \tag{36}\]
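To get a sense of the scales involved, a small sketch evaluating Eq. (36) with the factor of \(c\) restored is given below; the sample accelerations are arbitrary choices of ours.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K
c = 2.99792458e8         # m / s

def unruh_temperature(g):
    # Eq. (36) with c restored: T = hbar g / (2 pi c k_B)
    return hbar * g / (2 * np.pi * c * k_B)

# Earth gravity gives an unobservably small temperature; ~1 K needs g ~ 2.5e20 m/s^2
for g in (9.8, 2.5e20):
    print(f"g = {g:.2e} m/s^2  ->  T = {unruh_temperature(g):.3e} K")
```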
We should also examine what happens if we use a different coordinate system, specifically putting the metric in Boyer-Lindquist form with \(r=gx^{2}/2\), \[ds^{2}=-fdt^{2}+f^{-1}dr^{2},\text{ where }f=g^{2}r^{2}. \tag{37}\] Figure 6: Two views of the instanton for the Unruh process. The complex space plane (left) and the complex time plane (right). The same contour (32) which wound once around the horizon in the complex \(x\) plane now winds _twice_ around the horizon in the complex \(r\) plane, though the periodicity in the complex \(t\) plane and the particle production rate remain the same. In conclusion, horizon radiation is represented by a closed contour around a singularity in the complex coordinate plane. The contour is closed in the complex time plane only when we impose periodicity in imaginary time. The contour has winding number two in the complex \(r\) plane when we use the Boyer-Lindquist coordinates. ## 5 Charged black holes Radiation from charged black holes can include contributions from thermal radiation with the Hawking temperature \(T_{H}\) and breakdown of the vacuum due to the electric field outside the black hole. All forms of radiation are included in the simple expression for the particle flux obtained from a mode decomposition of the Dirac or the wave equation [21], \[F=\sum_{l,m}\int_{0}^{\infty}\frac{d\omega}{2\pi}\left(1-|A_{lm}|^{2}\right)\frac{1}{e^{\beta\omega_{h}}-1}, \tag{38}\] where the inverse temperature \(\beta=\hbar/k_{B}T_{H}\). The frequency \(\omega_{h}=\omega-e\Phi_{h}\), where \(\Phi_{h}\) is the electrostatic potential at the horizon. The amplitude \(A_{lm}\) represents reflection of the particle modes with angular wave numbers \(l\) and \(m\) back into the black hole. This amplitude can only be obtained numerically, or using approximate methods for various regimes. The quantum tunnelling approach to particle creation can be used to obtain closed expressions in the regime \(|\beta\omega_{h}|\gg 1\), when it is related to using WKB approximations to the reflection amplitude. In this limit, the flux integral can be decomposed into two parts: _The super-radiant regime_ \(\omega<e\Phi_{h}\), where the flux becomes \[F_{\rm super}\approx\sum_{l,m}\int_{0}^{e\Phi_{h}}\frac{d\omega}{2\pi}\left(|A_{lm}|^{2}-1\right) \tag{39}\] It is in this regime that electromagnetic breakdown of the vacuum can occur. _The non-super-radiant regime_ \(\omega>e\Phi_{h}\), where \[F_{\rm thermal}\approx\sum_{l,m}\int_{e\Phi_{h}}^{\infty}\frac{d\omega}{2\pi}\left(1-|A_{lm}|^{2}\right)e^{-\beta\omega_{h}}, \tag{40}\] which we can regard as the Maxwell-Boltzmann approximation to the thermal Hawking flux filtered by a grey-body factor. We shall now show how the quantum tunnelling approach reproduces these results. ### The tunnelling approach The spacetime is described by the Reissner-Nordstrom metric \[ds^{2}=-fdt^{2}+f^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{41}\] where \[f=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}. \tag{42}\] The geometric mass \(M=GM_{*}\) and geometric charge \(Q=G^{1/2}Q_{*}/(4\pi\epsilon_{0})^{1/2}\) are related to the physical mass and charge \(M_{*}\) and \(Q_{*}\). The electrostatic potential \(\Phi\) at radius \(r\) is \[\Phi=\frac{Q_{*}}{4\pi\epsilon_{0}r}. \tag{43}\]
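For later reference in the horizon integrals, the sketch below (ours) collects the elementary Reissner-Nordstrom quantities in geometric units; the mass and charge values are sample inputs.

```python
import numpy as np

def rn_quantities(M, Q):
    # Horizons of f = 1 - 2M/r + Q^2/r^2 and surface gravities kappa = f'(r)/2,
    # in geometric units (G = c = 1), requiring Q < M
    root = np.sqrt(M**2 - Q**2)
    r_h, r_c = M + root, M - root
    fprime = lambda r: 2 * M / r**2 - 2 * Q**2 / r**3
    return r_h, r_c, fprime(r_h) / 2, fprime(r_c) / 2

r_h, r_c, kappa_h, kappa_c = rn_quantities(M=1.0, Q=0.6)
print(f"r_h = {r_h:.3f}, r_c = {r_c:.3f}, "
      f"kappa_h = {kappa_h:.4f}, kappa_c = {kappa_c:.4f}")
```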
Due to the rotational symmetry, it will be sufficient to start from the Hamiltonian (13) for a particle of charge \(e\) in the equatorial plane, with conserved momenta \(E=-p_{t}\) and \(L=p_{\phi}\), \[H=-\frac{(E-e\Phi)^{2}}{2mf}+\frac{fp_{r}^{2}}{2m}+\frac{1}{2m}\left(m^{2}+\frac{L^{2}}{r^{2}}\right). \tag{44}\] The modified principle function \(W=S+Et-L\phi\) is \[W=\int_{C}p_{r}dr=\int_{C}\frac{1}{f}\left\{(E-e\Phi)^{2}-\left(m^{2}+\frac{L^{2}}{r^{2}}\right)f\right\}^{1/2}dr. \tag{45}\] There are poles at the outer and inner horizons \(r_{h}\) and \(r_{c}\), as well as possible branch cuts. A typical representation of the complex \(r\) plane is shown in figure 7. The contribution from each of the contours will be denoted by a subscript, e.g. \(W_{\infty}\) for the large outer contour. From the large radius limit, we find \[W_{\infty}=\frac{4\pi iM}{(E^{2}-m^{2})^{1/2}}\left\{\left(E^{2}-\frac{e\Phi_{h}r_{h}}{2M}\right)-\frac{m^{2}}{2}\right\} \tag{46}\] The horizon integrals are obtained from the residue theorem, \[W_{h}=\frac{\pi i}{\kappa_{h}}\left(E-e\Phi_{h}\right) \tag{47}\] \[W_{c}=\frac{\pi i}{\kappa_{c}}\left(E-e\Phi_{c}\right) \tag{48}\] where the surface gravities \(\kappa_{h}=f^{\prime}(r_{h})/2\) and \(\kappa_{c}=f^{\prime}(r_{c})/2\). Integrals around the branch cuts can be deduced from the other integrals using Cauchy's theorem. ### Black hole Schwinger process The Schwinger process for electron-positron production is represented by a contour which goes around the branch cut. This contribution is independent of the Hawking temperature and we identify it with the super-radiant flux (39). From Cauchy's theorem, the principle function \(W_{S}=W_{h}+W_{c}-W_{\infty}\). The imaginary part is \[W_{I}=\frac{4\pi M}{(E^{2}-m^{2})^{1/2}}\left\{\left(E-(E^{2}-m^{2})^{1/2}\right)\left(E-\frac{e\Phi_{h}r_{h}}{2M}\right)-\frac{m^{2}}{2}\right\}. \tag{49}\] In the large energy limit, the tunnelling exponent at leading order in \(m/E\) is \[\frac{W_{I}}{\hbar}=\frac{\pi m^{2}e\Phi_{h}r_{h}}{\hbar E^{2}}. \tag{50}\] Figure 7: The complex \(r\) plane showing different contours. The angular momentum \(L\) only appears in the location of the branch cut. If \(L\ll Er_{h}\), then the branch cut is narrow with centre at the radius where \(E-e\Phi=0\). Physically, this represents the radius \(r(E)\) at which the particles of energy \(E\) are created. The electric field at the centre of the branch cut is \[e\mathcal{E}=\frac{e\Phi}{r}=\frac{E^{2}}{e\Phi_{h}r_{h}} \tag{51}\] Hence \[\frac{W_{I}}{\hbar}=\frac{\pi m^{2}}{\hbar e\mathcal{E}}. \tag{52}\] This recovers the Schwinger result, but with the local electric field \(\mathcal{E}\) at the radius where the particles are created. We conclude that the particle production is sufficiently localised for the equivalence principle to hold. Furthermore, we can use the Schwinger result to infer the pre-factor for the particle production rate per unit volume, \[\Gamma=\frac{1}{\pi}\left(\frac{e\mathcal{E}}{2\pi\hbar}\right)^{2}\,e^{-\pi m^{2}/e\hbar\mathcal{E}}. \tag{53}\] As with the flat spacetime result, this is only valid for large electric fields. The total luminosity of the black hole can be obtained by integrating the particle production over the region outside of the horizon. Because of the relation between the location of particle creation \(r\) and the energy, this is equivalent to integrating over the energy.
First, we rewrite the particle production rate in terms of radius \(r\) using (51) and (52), \[\Gamma=\frac{1}{4\pi\alpha^{2}}\left(\frac{mr_{h}}{\hbar r}\right)^{4}e^{-\alpha r^{2}/r_{h}^{2}}, \tag{54}\] where \[\alpha=\frac{\pi m^{2}r_{h}}{\hbar e\Phi_{h}}. \tag{55}\] The evaporation rate is then \[\frac{dM_{*}}{dt}=\int_{r_{h}}^{\infty}dr\,4\pi r^{2}\,\Gamma E=\frac{\pi}{\alpha^{2}r_{h}}\left(\frac{mr_{h}}{\hbar}\right)^{5}\Gamma(-2,\alpha), \tag{56}\] where \(\Gamma(a,x)\) is the incomplete Gamma function. Figure 8: Evolution of the geometric mass and charge parameters due to particle production is downwards along the red lines in this plot. Thermal emission is tiny and not included. Units are solar Schwarzschild radii (2.9 km). The charge evaporates at a rate \[\frac{dQ_{*}}{dt}=\int_{r_{h}}^{\infty}dr\,4\pi r^{2}\,\Gamma e=\frac{\pi e}{\alpha^{3/2}r_{h}}\left(\frac{mr_{h}}{\hbar}\right)^{4}\Gamma(-3/2,\alpha) \tag{57}\] The relative rate of (geometric) charge to mass evaporation has a simple expression, \[\frac{dQ}{dM}=\frac{Q}{r_{h}}\frac{\Gamma(-2,\alpha)}{\alpha^{1/2}\Gamma(-3/2,\alpha)} \tag{58}\] It is a known result that the black hole loses charge due to super-radiance at a far higher rate than it loses mass [22]. However, having an expression in closed form is a success of the tunnelling approach.
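The incomplete Gamma function of negative order in Eqs. (56)-(58) is not available in every numerical library; one way to evaluate the charge-to-mass ratio of Eq. (58) is sketched below using mpmath, whose gammainc handles negative orders (the values of \(\alpha\) and \(Q/r_{h}\) are arbitrary sample inputs).

```python
from mpmath import gammainc, mp, sqrt

mp.dps = 30  # working precision

def dQ_dM_ratio(alpha, Q_over_rh):
    # Eq. (58): dQ/dM = (Q/r_h) * Gamma(-2, alpha) / (alpha^{1/2} Gamma(-3/2, alpha))
    g2 = gammainc(-2, alpha)      # upper incomplete Gamma(-2, alpha)
    g32 = gammainc(-1.5, alpha)   # upper incomplete Gamma(-3/2, alpha)
    return Q_over_rh * g2 / (sqrt(alpha) * g32)

for alpha in (0.5, 1.0, 5.0):
    print(f"alpha = {alpha}: dQ/dM = {dQ_dM_ratio(alpha, 1.0)}")
```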
### Black hole Hawking process The Hawking flux has two contributions. For \(E>e\Phi_{h}\), there is a contribution from the contour which circles the horizon and represents particle production at the horizon. We may also have contributions from the branch cut outside the horizon, which now represents the transmission term \(|A_{lm}|^{2}\) through the potential barrier. The horizon contribution has winding number two in the coordinate system in use, as we saw earlier in the context of the Fulling-Davies-Unruh effect. The integral \(W_{H}=2W_{h}\) gives a particle creation rate \[\Gamma_{H}\propto e^{-\beta(E-e\Phi_{h})/\hbar} \tag{59}\] which agrees with the first term in (40), at the Hawking temperature \(T_{H}=\hbar\kappa_{h}/2\pi k_{B}\). ## 6 Particle production on a magnetic rotating black hole background In this section we apply the tunnelling method to the production of electron-positron pairs from the vacuum around a rotating black hole in an external magnetic field. The Hawking radiation is insignificant for large black holes, and so with astrophysical applications in mind we consider only the Schwinger process. However, we take an idealised vacuum situation with no other particles present. ### Geometry For a solar-mass black hole, the back-reaction of the magnetic field on the geometry is small when \(B\leq 10^{15}\,\mathrm{G}\) and the Kerr metric can be used, \[ds^{2}=-\frac{\Delta}{\rho^{2}}\omega^{t\ 2}+\frac{s^{2}}{\rho^{2}}\omega^{\phi\ 2}+\frac{\rho^{2}}{\Delta}dr^{2}+\rho^{2}d\theta^{2}, \tag{60}\] where \[\omega^{t}=dt-as^{2}d\phi,\hskip 28.452756pt\omega^{\phi}=(a^{2}+r^{2})d\phi-adt. \tag{61}\] The metric functions are \(\Delta=r^{2}+a^{2}-2Mr\), \(s=\sin\theta\) and \(\rho^{2}=r^{2}+a^{2}-a^{2}s^{2}\). The geometric mass \(M\) is \(G/c^{2}\) times the physical mass. We will take a magnetic field with rotational symmetry about the black hole axis and assume the simplest dipole field that approaches a constant field with strength \(B\) in the \(z\) direction at large distances. Furthermore, we will assume that the movement of charged particles leaves the black hole with a net charge that neutralises the electromotive force (EMF). The electromagnetic potential for this zero EMF field has components \[A_{t}=-\frac{Mars^{2}}{\rho^{2}}B,\hskip 28.452756ptA_{\phi}=\frac{As^{2}}{2\rho^{2}}B, \tag{62}\] where \(A=(r^{2}+a^{2})^{2}-\Delta a^{2}s^{2}\). Although the EMF vanishes, there is an electric field in the non-rotating (zero angular momentum) frame defined in Ref. [23]. We shall see that this electric field is associated with the particle production. #### Dynamics Some basic dynamical notions will be needed for the particle production calculation. The four-momentum \(p_{\mu}\) for a particle with mass \(m\), charge \(e\) and four-velocity \(u^{\mu}\) is \[p_{\mu}=g_{\mu\nu}\left(mu^{\nu}+eA^{\nu}\right) \tag{63}\] Along the Killing directions, we set \[E=-p_{t},\hskip 28.452756ptL=p_{\phi}. \tag{64}\] The momenta are related by the constraint \[g^{\mu\nu}(p_{\mu}-eA_{\mu})(p_{\nu}-eA_{\nu})=-m^{2}. \tag{65}\] After inserting the metric components, \[\frac{\Delta}{\rho^{2}}p_{r}^{2}+\frac{1}{\rho^{2}}p_{\theta}^{2}+V=0, \tag{66}\] where the effective potential \(V\) is given by \[V=-\frac{A}{\Delta\rho^{2}}\left(E-\Omega L\right)^{2}+\frac{\rho^{2}}{As^{2}}\left(L-\frac{eBAs^{2}}{2\rho^{2}}\right)^{2}+m^{2}. \tag{67}\] The local rotation rate is \(\Omega=2Mra/A\). ### Tunnelling exponents The tunnelling exponent is given by \(\mbox{Im}W/\hbar\), where the modified principle function \(W=S+Et-L\phi\). Inserting the action leaves \[W=\int_{C}\left(p_{r}dr+p_{\theta}d\theta\right). \tag{68}\] Unlike in the previous examples, there are two remaining coordinates \(r\) and \(\theta\), but both are implicitly functions of the proper time \(\tau\). The complex contour \(C\) for the Schwinger process surrounds a branch cut and gives an imaginary value to \(W\). This happens in a region where classical trajectories are forbidden because \(V\) is positive, and the momenta are therefore complex. Tunnelling occurs inside a potential barrier that ends at points P and Q as shown in Fig. 9. A crucial observation is that the tunnelling only occurs at any significant rate for very small values of \(W_{I}\) compared to the astrophysical scales set by the mass of the black hole. This requires both brackets in the potential (67) to be very small, and restricts the values of the energy and angular momentum. The centre of the barrier \(r=r_{c}\), \(\theta=\theta_{c}\) is located where both brackets vanish, \[E=\Omega_{c}L,\hskip 28.452756ptL=\frac{eBs_{c}^{2}A_{c}}{2\rho_{c}^{2}} \tag{69}\] These relate both the energy and angular momentum to \(r_{c}\) and \(s_{c}=\sin\theta_{c}\). Because the barrier is extremely narrow, we can think of pair creation for particles with energy \(E\) happening along the circle at \(r_{c}(E)\) and \(\theta_{c}(E)\). In the \(x^{i}=(r,\theta)\) sector, the Hamiltonian that generates the field equations is \[H=\frac{1}{2m}g^{ij}p_{i}p_{j}+\frac{1}{2m}V \tag{70}\] In the region of the barrier, we introduce small quantities \(\delta q^{1}=r-r_{c}\) and \(\delta q^{2}=\theta-\theta_{c}\), and we use a quadratic approximation to the Hamiltonian, \[H=\frac{1}{2m}g^{ij}p_{i}p_{j}+\frac{1}{4m}V_{,ij}\delta q^{i}\delta q^{j}+\frac{m}{2}, \tag{71}\] where the Hessian of the potential is evaluated at the centre of the barrier (\(r_{c},\theta_{c}\)). We diagonalise the Hamiltonian by solving the eigenvalue problem for basis vectors \(e_{n}^{i}\), \[V_{,ij}e_{n}^{j}=2m^{2}\lambda_{n}g_{ij}e_{n}^{j}.
\tag{72}\] Introduce normal mode coordinates \(x^{n}\), where \[\delta q^{i}=x^{n}\,e_{n}^{i} \tag{73}\] In terms of the normal modes, \[H=\frac{1}{2m}\delta^{mn}p_{m}p_{n}+\frac{1}{2}m\Lambda_{mn}x^{m}x^{n}+\frac{m}{2}, \tag{74}\] where \(\Lambda_{mn}=\text{diag}(\lambda_{1},\lambda_{2})\). For a compact instanton, we must use the mode \(x\) which has a negative eigenvalue \(\lambda=-\omega^{2}\). This is the mode that corresponds to the line PQ in figure 9. For this mode, \[H=\frac{1}{2m}p_{x}^{2}-\frac{1}{2}m\omega^{2}x^{2}+\frac{m}{2}=0 \tag{75}\] The principle function is \[W=\int_{C}p_{x}dx=im\int_{C}(1-\omega^{2}x^{2})^{1/2}dx, \tag{76}\] where the contour winds once around the branch cut for a single pair creation event. Hence \[W=\frac{im\pi}{\omega}. \tag{77}\] A better feel for the result can be obtained by scaling out the dimensionful quantities from \(\omega\), \[\omega=\frac{eB}{m}\hat{\omega}, \tag{78}\] where \(\hat{\omega}\equiv\hat{\omega}(r_{c}/M,a/M,\theta_{c})\) is dimensionless, and is obtained by solving the eigenvalue problem (72) with \(\lambda=-\omega^{2}\). Figure 9: A region of the (\(r,\theta\)) plane showing regions of positive potential (grey) and negative potential (white). The line PQ shows the trajectory of an instanton, with the particle-antiparticle pair produced at P and Q. The particle production rate at \((r_{c},\theta_{c})\) is \(\propto\exp(-W_{I}/\hbar)\), where \[\frac{W_{I}}{\hbar}=\frac{\pi m}{\hbar\omega}=\frac{\pi m^{2}}{\hbar\hat{\omega}eB}=\frac{\pi B_{0}}{\hat{\omega}B}, \tag{79}\] and \(B_{0}=m^{2}/e\hbar=4.4\times 10^{13}\,\mathrm{G}\) for electrons. In general, the factor \(\hat{\omega}\) has to be obtained numerically, but in the special case of equatorial particle production the value has a closed form, \[\hat{\omega}=\frac{1}{r^{2}}\left\{a^{2}(r+M)^{2}-\Delta r^{2}\right\}^{1/2}. \tag{80}\]
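A minimal numerical sketch of the closed form (80) and the exponent (79) is given below, in geometric units with \(M=1\); the spin and field values are sample inputs of ours.

```python
import numpy as np

def omega_hat_equatorial(r, a, M=1.0):
    # Eq. (80): dimensionless frequency for equatorial pair production,
    # with Delta = r^2 + a^2 - 2 M r (geometric units, M = 1)
    Delta = r**2 + a**2 - 2 * M * r
    return np.sqrt(a**2 * (r + M) ** 2 - Delta * r**2) / r**2

def tunnelling_exponent(r, a, B_over_B0):
    # Eq. (79): W_I / hbar = pi B0 / (omega_hat B)
    return np.pi / (omega_hat_equatorial(r, a) * B_over_B0)

# Extremal hole (a = M = 1, horizon at r = M); omega_hat = 2 at the horizon,
# and it decreases outward, so production is concentrated near the horizon
for r in (1.0, 1.5, 2.0):
    print(f"r/M = {r:.1f}: omega_hat = {omega_hat_equatorial(r, 1.0):.3f}, "
          f"W_I/hbar = {tunnelling_exponent(r, 1.0, B_over_B0=0.5):.2f}")
```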
### Particle fluxes The factor \(\pi/\hat{\omega}\) that determines the particle production rate has been plotted in figure 10, where we see the relative amount of particle production for different values of \(r\) and \(\theta\). Particle production is concentrated close to the horizon. Initially, \(p_{r}=p_{\theta}=0\), and the particles move in circular orbits. As more particles are produced, a current loop will build up which produces a field counteracting the original field. Gradually, the instability in the normal mode '\(x\)' direction drives particles away from their circular orbits, and into chaotic ones [24]. Comparing the exponents in the particle production rates Eq. (23) and Eq. (79) suggests that there is an 'effective' Schwinger process electric field, which we denote by \(E_{s}\), \[E_{s}=\hat{\omega}B \tag{81}\] Figure 11 shows a comparison between the actual electric field strength, \(E_{r}\), in the locally non-rotating frame and the field strength \(E_{s}\) inferred from the actual rates. The two agree at the horizon, but as we move away from the horizon the flat space Schwinger result overestimates the production rate. Although the Schwinger process in flat space does not give the exact exponent, it should still be accurate enough for calculating the pre-factor in the particle production rate, especially if we use the effective field strength \(E_{s}\) in the Schwinger result (23). The particle production rate per unit proper volume and time should therefore be \[\Gamma=\frac{1}{4\pi^{3}}\left(\frac{eB\hat{\omega}}{\hbar}\right)^{2}e^{-\pi B_{0}/\hat{\omega}B} \tag{82}\] Figure 10: A region of the \(r\) and \(\theta\) plane showing contours of constant particle production rate. The colours represent the factor \(\pi/\hat{\omega}\). The extremal case \(a=M\) is on the left and \(a=0.7M\) on the right. The particle production depends on radius \(r\) and angle \(\theta\). The electrons and positrons move in circular orbits at near-light speed and generate a current density \(J^{\mu}\), with \[dJ^{\mu}=2e\Gamma u^{\mu}d\tau, \tag{83}\] where \(d\tau\) is the proper time interval in the non-rotating frame. The rate of change of the azimuthal current \(I\) in the Boyer-Lindquist frame is obtained by a volume integral of \(dJ^{\phi}\), \[\frac{dI}{dt}=\int_{r_{h}}^{\infty}dr\int_{0}^{\pi}d\theta\,4\pi\rho^{2}\sin\theta\,e\Gamma u^{\phi} \tag{84}\] For a detailed calculation, we can find the value of the velocity \(u^{\phi}\) by expanding about the centre of the barrier as before. Consider the velocity components \(u^{\alpha}\), where \(x^{\alpha}=\{t,\phi\}\). When expressed in terms of the velocity, the potential \(V\) in Eq. (67) becomes \[V=m^{2}g_{\alpha\beta}u^{\alpha}u^{\beta}+m^{2}, \tag{85}\] where the \(u^{\alpha}\) velocity components are regarded as functions of \(r\) and \(\theta\), given in terms of the constant momenta by \(u^{\alpha}=g^{\alpha\beta}(p_{\beta}-eA_{\beta})/m\). We defined the centre of the barrier \(r_{c}\), \(\theta_{c}\) as the point where these functions vanish. At the ends of the barrier, we use the normal mode \(e^{i}\) from Eq. (73) and define \(u^{\alpha}{}_{,x}=u^{\alpha}{}_{,i}e^{i}\), \[u^{\alpha}=u^{\alpha}{}_{,i}\delta q^{i}=xu^{\alpha}{}_{,x} \tag{86}\] The other components \(u^{i}\) vanish at the ends of the barrier by Eq. (66). Substituting back into the potential (85) gives \[u^{\alpha}=\frac{u^{\alpha}{}_{,x}}{|u_{,x}|} \tag{87}\] where \(|u_{,x}|^{2}=-g_{\alpha\beta}u^{\alpha}{}_{,x}u^{\beta}{}_{,x}=\omega^{2}\). Figure 11: The electric field strength \(E_{r}\) in the non-rotating (zero angular momentum) frame and the effective field strength \(E_{s}\), inferred from the Schwinger particle production formula. The field strengths are evaluated in the equatorial plane for an extremal black hole. Finally, we can obtain a rough estimate by expanding \(\hat{\omega}\) about the horizon, where \(\hat{\omega}=2\). This gives \[I\approx\left(\frac{2eBr_{h}}{\pi\hbar}\right)^{2}e^{-\pi B_{0}/2B}\Delta t\approx I_{0}B_{13}^{2}e^{-\pi B_{0}/2B}\Delta t \tag{88}\] where \(B_{13}\) is the magnetic field strength in units of \(10^{13}\)G, \(I_{0}=3.4\times 10^{51}\)A and \(\Delta t\) is the time over which the particles remain in circular orbits. The system can only maintain equilibrium if the magnetic field generated by this current is smaller than the external field \(B\). The induced field near the horizon is \(B_{I}\approx\mu_{0}I/2r_{h}\). Requiring \(B_{I}<B\) gives \[B\lesssim 1.0\times 10^{12}\,\mathrm{G} \tag{89}\] Note that this is rather less than the field \(B_{0}=4.4\times 10^{13}\)G. Nevertheless, the energy of the particles from Eq. (69) is still very high, \(E\approx 3.00\times 10^{20}\,B_{13}r_{\mathrm{km}}\,\mathrm{eV}\) for particles produced close to the horizon, with \(r_{\mathrm{km}}\) the horizon radius measured in kilometers. We can obtain information about the trajectories by looking at the potential diagrams in Fig. 12.
Initially, due to the closeness to the point \(r_{c}\), \(\theta_{c}\) where the potential gradients vanish, the forces moving the particles away from the circular orbits are very small. From the potentials, we see that the particles produced on the inner edge of the instanton always end up inside the hole. Particles produced on the outside edge can eventually move off to infinity. Depending on the initial radius, the particle may cross the equatorial plane, but the ones that avoid the equatorial plane drift away in the direction along the axis of rotation. Figure 12: A region of the \(r\) and \(\theta\) plane showing positive potential (grey) and negative potential (white). Particles are confined to a white region and produced at the edge. On the left \(r_{c}=1.5M\) and the outgoing particle cannot cross the equatorial plane. On the right \(r_{c}=1.4M\) and the particle can cross the equatorial plane. The electrons are produced in high energy circular orbits and should be significant sources of synchrotron radiation. In flat space, the synchrotron emission rate \(dE/dt\propto\gamma^{2}\) where \(\gamma\) is the Lorentz factor. In curved space, the Lorentz factor \(\gamma\sim u^{t}\) and we saw earlier that the tunnelling process requires special values of the energy and angular momenta, and these imply \(u^{t}\sim 1\). Consequently, synchrotron emission in the circular orbits is very highly suppressed, and has a negligible effect on the motion. However, this only covers the initial circular orbits, and gradually, as the particles drift away from the potential barrier, they will accelerate to higher speeds and the emission will increase. ## 7 Conclusion We have seen some of the tricks employed when using the instanton approach to particle creation in moderately strong fields and curved spacetimes. It can be viewed as a method for obtaining quick results from any situation where WKB analysis would be appropriate. The results are non-perturbative, but the range of usefulness is restricted and prefactors to the exponential rates are often difficult to obtain. An example is the rate of vacuum breakdown due to the Schwinger effect near a black hole in a magnetic field. The model is very limited because it ignores collisions between particles in the surrounding medium, in particular the production from high energy photons via \(\gamma\to e^{+}e^{-}\). Nevertheless, it is clear that fields in excess of \(4.4\times 10^{11}\,\mathrm{G}\) will copiously produce electrons of energy above \(10^{18}\)eV. The particles produced by the Schwinger mechanism can form a current loop around the black hole before drifting off along the rotation axis, whilst emitting significant synchrotron radiation as they speed up. One application of the results may be to add particle production terms to relativistic MHD simulations, using the Schwinger particle production rate in a zero angular momentum frame. The examples illustrate some interesting features of the instanton approach to particle creation. In particular, they show the importance of Hamiltonian methods and the distinction between branch cuts, which signal vacuum breakdown, and singularities, which signal horizon radiation. This work was supported by the UK Science and Technology Facilities Council (STFC) [grant ST/T000708/1].
2303.12603
Driveability Constrained Models for Optimal Control of Hybrid Electric Vehicles
This work investigates the effect of three different driveability constraints on the optimal energy management strategy for a p2 parallel hybrid. Two of these constraints are used to prevent frequent gear shifting and engine start/stops, while the third is used to increase the sportiness of the vehicle by maximizing the available torque reserve at all times. The constraints are imposed by reformulating them as penalty terms to be added to the base running cost of the control strategy, which is fuel consumption. Dynamic programming, a popular optimal control technique, is then used to design the energy management strategy that minimizes the total cost. A case study is developed for a p2 parallel hybrid and simulated on a combination of the Artemis driving cycles. The impact of each driveability constraint is analyzed with respect to a set of relevant features of the control strategy, such as the choice of engine operating points and the gear shift pattern. The resulting discussion provides some useful insight for the design of real-time, rule-based control strategies.
Federico Miretti, Daniela Misul
2023-03-22T14:40:49Z
http://arxiv.org/abs/2303.12603v1
# Driveability Constrained Models for Optimal Control of Hybrid Electric Vehicles ###### Abstract This work investigates the effect of three different driveability constraints on the optimal energy management strategy for a p2 parallel hybrid. Two of these constraints are used to prevent frequent gear shifting and engine start/stops, while the third is used to increase the sportiness of the vehicle by maximizing the available torque reserve at all times. The constraints are imposed by reformulating them as penalty terms to be added to the base running cost of the control strategy, which is fuel consumption. Dynamic programming, a popular optimal control technique, is then used to design the energy management strategy that minimizes the total cost. A case study is developed for a p2 parallel hybrid and simulated on a combination of the Artemis driving cycles. The impact of each driveability constraint is analyzed with respect to a set of relevant features of the control strategy, such as the choice of engine operating points and the gear shift pattern. The resulting discussion provides some useful insight for the design of real-time, rule-based control strategies. Keywords: SDG13, hybrid vehicles, driveability, energy management strategy, sportiness ## 1 Introduction Hybrid electric vehicles (HEVs) enable fuel economy improvement by exploiting the additional degree of freedom granted by the presence of an electrical power source to operate the thermal engine at higher efficiencies. A dedicated controller, often called the energy management system (EMS), is needed to define how these two power sources are used to meet the driver's power demand. Additionally, since HEVs generally employ automated transmissions, transmission control is also included in the EMS [4, 13]. The EMS has a strong influence on the powertrain's performance in terms of fuel economy, emissions and driveability. Hence, a great deal of attention has been devoted to EMS design both within the industry and in academic research, and a wide range of techniques has been proposed. Among these, dynamic programming is one of the most popular. Because of its flexibility and guaranteed optimality, it can be easily and effectively used to analyze optimal control strategies in the design phase, in an off-line simulation environment. Its most notable drawbacks are a general inability to deal with complex simulation models because of its computational burden and the need for advance knowledge of the vehicle's speed profile in time. The latter in particular makes the technique unsuitable for real-time control. Still, dynamic programming can be highly effective in supporting the design of real-time control strategies. For example, rule-based controllers can be designed by applying some rule extraction procedure to the control trajectories generated by dynamic programming [7, 15, 9], or the same results can be used to calibrate some other optimization-based EMS [3]. Unfortunately, when used to derive fuel-optimal control strategies, dynamic programming typically induces a number of undesirable driveability issues such as frequent engine start/stops, erratic gear shifting, and a general lack of sportiness. This is also typical of other common optimal control approaches such as equivalent consumption minimization strategies (ECMS) and Pontryagin's minimum principle (PMP). Many works can be found in the literature which address some or all of these problems, which are commonly referred to as driveability issues.
Possibly the earliest of such applications can be found in [8], where a penalty term associated with gear shifting was included in a dynamic programming algorithm whose results were then used to develop a rule-based control strategy. Several authors used stochastic dynamic programming to reduce the frequency of gear shifts [6], engine starts [5], or both [14], by adding corresponding penalty terms to the running cost in the cost functional. Similarly, the authors in [2, 1] embedded penalty terms for gear shifts and engine starts in a heuristic framework named SERCA and in a dynamic programming application to act as a benchmark. Torque (or power) reserve appears to be the least considered among driveability aspects. To the best of our knowledge, only two works have tackled this issue, both of which employed some variant of the ECMS. One approach for a multi-mode PHEV with heuristic penalty factors for each mode transition was developed in [12], while also including a hard constraint for the available torque reserve at the wheels. In contrast, [16] dealt with torque reserve by adding a penalty term to the equivalent consumption, in addition to a gear shift penalty. In this work, we developed a four-term cost functional to be used in a dynamic programming framework, which includes fuel consumption, penalties for gear shifts and engine starts, and a penalty term for the available torque reserve. We then developed a case study with a p2 parallel hybrid, whose main parameters can be found in Table 1, and assessed the effect of each penalty term on the obtained control strategies. Finally, we discuss the implications of our results on the development of real-time heuristic control strategies. ## 2 Simulation model The simulation model was developed using a backward-facing approach [4, 13], as is typical for control-oriented models in EMS design. The tractive effort \(F_{\rm veh}\) was evaluated with a simple longitudinal model considering the resistant forces \(F_{\rm res}\) (using road load coefficients \(k_{0}\), \(k_{1}\) and \(k_{2}\)) and the vehicle's inertia \[F_{\rm veh}=F_{\rm res}+m_{\rm veh}a_{\rm veh}=k_{0}+k_{1}v_{\rm veh}+k_{2}v_{\rm veh}^{2}+m_{\rm veh}a_{\rm veh}. \tag{1}\] A quasi-static powertrain model was then used to propagate this tractive effort through the wheels, final drive and gearbox to obtain a torque demand \(T_{\rm d}\), which for the p2 hybrid considered in this work refers to the gearbox input. \[T_{\rm d}=\frac{F_{\rm veh}r_{\rm wh}}{\tau_{\rm fd}\tau_{\rm gb}(\gamma)}, \tag{2}\] Here, \(r_{\rm wh}\) is the wheel radius, \(\tau_{\rm fd}\) and \(\tau_{\rm gb}\) are the final drive and gearbox speed ratios, and \(\gamma\) represents the gear number. This torque demand was then split between the engine and the e-machine based on the torque-split factor \(\alpha_{\rm eng}\): \[\alpha_{\rm eng}=\frac{T_{\rm eng}}{T_{\rm d}}. \tag{3}\] The engine and e-machine were characterized by a steady-state fuel flow rate map \(\dot{m}_{\rm f}(\omega_{\rm eng},T_{\rm eng})\) and an efficiency map \(\eta_{\rm em}(\omega_{\rm em},T_{\rm em})\), as well as torque limit curves and speed constraints. The e-machine efficiency was used to evaluate the battery electrical power \(P_{\rm b}\). The battery current \(i_{b}\) was evaluated as a function of the battery power \(P_{\rm b}\) with an equivalent circuit model: \[P_{\rm b}=v_{b}i_{b}=(v_{\rm oc}(\sigma)+R_{0}(\sigma)i_{\rm b})\ i_{b}, \tag{4}\] where \(\sigma\) is the battery state of charge and \(v_{\rm oc}(\sigma)\) and \(R_{0}(\sigma)\) are the open-circuit voltage and internal resistance characteristics.

\begin{table} \begin{tabular}{l l l} \hline Component & Parameter & Value \\ \hline Vehicle & Mass & 1300 kg \\ & First coast-down coefficient & 150 N \\ & Second coast-down coefficient & 2.24 N/(m/s) \\ & Third coast-down coefficient & 0.44 N/(m/s)\({}^{2}\) \\ & Tyre radius & 0.327 m \\ Transmission & Gear ratios & [3.46, 1.844, 1.258, 1.027, 0.85] \\ & Efficiency & [0.93, 0.94, 0.947, 0.948, 0.946] \\ Engine & Displacement & 0.9 l \\ & Rated power & 52 kW \\ & Maximum torque & 85 Nm \\ E-machine & Rated power & 30 kW \\ & Maximum torque & 200 Nm \\ Battery & Type & Li-ion \\ & Nominal capacity & 5.3 Ah \\ & Nominal voltage & 295 V \\ \hline \end{tabular} \end{table} Table 1: Main vehicle data.
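A minimal sketch of this backward chain, Eqs. (1)-(3), is given below; the numerical values follow Table 1, while the final drive ratio is an assumed placeholder (it is not listed in Table 1) and the function names are ours.

```python
# Road-load and vehicle data from Table 1
K0, K1, K2 = 150.0, 2.24, 0.44      # N, N/(m/s), N/(m/s)^2
M_VEH = 1300.0                      # kg
R_WH = 0.327                        # m
TAU_GB = [3.46, 1.844, 1.258, 1.027, 0.85]
TAU_FD = 4.0                        # final drive ratio (assumed; not given in Table 1)

def torque_demand(v, a, gear):
    """Backward-facing torque demand at the gearbox input, Eqs. (1)-(2)."""
    f_veh = K0 + K1 * v + K2 * v**2 + M_VEH * a       # Eq. (1)
    return f_veh * R_WH / (TAU_FD * TAU_GB[gear])     # Eq. (2), gear is 0-based

def torque_split(t_d, alpha_eng):
    """Eq. (3): engine and e-machine torque from the torque-split factor."""
    t_eng = alpha_eng * t_d
    return t_eng, t_d - t_eng

# Example: 20 m/s, mild acceleration, 3rd gear, engine covering 60% of the demand
t_d = torque_demand(v=20.0, a=0.5, gear=2)
print(torque_split(t_d, alpha_eng=0.6))
```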
\tag{4}\] where \(\sigma\) is the battery state of charge and \(v_{\rm oc}(\sigma)\) and \(R_{0}(\sigma)\) are the open-circuit voltage and internal resistance characteristics. \begin{table} \begin{tabular}{l l l} \hline Component & Parameter & Value \\ \hline Vehicle & Mass & 1300 kg \\ & First coast-down coefficient & 150 N \\ & Second coast-down coefficient & 2.24 N/(m/s) \\ & Third coast-down coefficient & 0.44 N/(m/s)\({}^{2}\) \\ & Tyre radius & 0.327 m \\ Transmission & Gear ratios & [3.46, 1.844, 1.258, 1.027, 0.85] \\ & Efficiency & [0.93, 0.94, 0.947, 0.948, 0.946] \\ Engine & Displacement & 0.9 l \\ & Rated power & 52 kW \\ & Maximum torque & 85 Nm \\ E-machine & Rated power & 30 kW \\ & Maximum torque & 200 Nm \\ Battery & Type & Li-ion \\ & Nominal capacity & 5.3 Ah \\ & Nominal voltage & 295 V \\ \hline \end{tabular} \end{table} Table 1: Main vehicle data. ## 3 EMS design with dynamic programming Dynamic programming is an optimal control technique to control the evolution of a dynamical system in time while minimizing some additive cost \(J\). In the context of dynamic programming, the simulation is discretized into \(N\) time steps. The model is characterized by a set of control variables \(u\) which influence the system's state evolution as defined by the state dynamics \(x_{k+1}=f(x_{k},u_{k},w_{k})\) while incurring a total cost \(J(x_{0})=\sum_{k=0}^{N-1}L(x_{k},u_{k},w_{k})\), where \(L\) is the stage cost. The exogenous input \(w_{k}\) is used to characterize the set of variables which affect the simulation without being influenced by the controls; in powertrain simulation models, they are generally identified with the speed and acceleration profiles of the prescribed driving mission. The model that was developed for this work uses three state variables to characterize the battery's state of charge \(\sigma\), the gear number for the previous time step \(\gamma_{\text{p}}\) and the engine state for the previous time step \(\epsilon_{\text{p}}\), i.e. \[x=\begin{pmatrix}\sigma\\ \gamma_{\text{p}}\\ \epsilon_{\text{p}}\end{pmatrix}, \tag{5}\] and two control variables to set the engine torque-split factor \(\alpha_{\text{eng}}\) and the gear number for the current time step \(\gamma\), i.e. \[u=\begin{pmatrix}\alpha_{\text{eng}}\\ \gamma\end{pmatrix}. \tag{6}\] The engine torque-split ratio was selected to characterize the power flow over other common choices as it is highly interpretable, i.e. there is a direct correspondence between the value of \(\alpha_{\text{eng}}\) and the operating mode [10]. The running cost was set to a trade-off of four different terms: \[L=\dot{m}_{\text{fuel}}\,\Delta t+L_{\gamma}+L_{\epsilon}+L_{T_{\text{res}}}. \tag{7}\] The first term is the fuel consumption over a time step \(\Delta t\) (\(\dot{m}_{\text{fuel}}\) being the fuel flow rate), so that the fuel consumption over the whole mission will be minimized. The remaining three terms \(L_{\gamma}\), \(L_{\epsilon}\), \(L_{T_{\text{res}}}\) are penalty terms that penalize gear shifting, engine starts and low torque reserve availability, respectively. The gear shift penalty was defined by a factor \(\phi_{\gamma}\) which is applied each time a gear shift occurs: \[L_{\gamma}=\begin{cases}\phi_{\gamma}&\text{if }\gamma\neq\gamma_{\text{p}},\\ 0&\text{otherwise}.\end{cases} \tag{8}\] Similarly, the engine start penalty was defined by a factor \(\phi_{\epsilon}\) which is applied each time the engine is turned on.
An engine start occurs when a non-zero torque is set and the engine was off at the previous time step: \[L_{\epsilon}=\begin{cases}\phi_{\epsilon}&\text{if }\alpha_{\text{eng}}>0\wedge \epsilon_{\text{p}}=0,\\ 0&\text{otherwise}.\end{cases} \tag{9}\] The torque reserve penalty was defined as the ratio between the used powertrain torque \(T_{\rm pwt}\) and the available powertrain torque \(T_{\rm pwt,max}\), multiplied by a tunable factor \(\phi_{T_{\rm res}}\). The penalty is only applied if the vehicle is neither braking nor at standstill: \[L_{T_{\rm res}}=\begin{cases}\phi_{T_{\rm res}}\cdot\frac{T_{\rm pwt}}{T_{\rm pwt,max}}&\text{if }T_{\rm req}>0\wedge v_{\rm veh}>0,\\ 0&\text{otherwise}.\end{cases} \tag{10}\] More specifically, the powertrain torque is defined as the sum of the engine and e-machine torque at the gearbox input: \[T_{\rm pwt}=T_{\rm eng}+\max{(T_{\rm em}\tau_{\rm tc},0)}. \tag{11}\] Note that the e-machine torque was subject to lower saturation at zero in order to prevent its torque in generator mode from being counted while using the powertrain in battery charging mode. Finally, the available powertrain torque \(T_{\rm pwt,max}\) was simply defined as \[T_{\rm pwt,max}=T_{\rm eng,max}+T_{\rm em,max}\tau_{\rm tc}. \tag{12}\] Since both the engine and e-machine maximum torque are dependent on their speed, they are influenced by the gear engaged in the gearbox. Hence, the torque reserve penalty can be affected by the EMS by changing the gear number. ## 4 Case study In order to assess the effect of driveability constraints on the fuel-optimal control strategy, we implemented the simulation model described in the previous section in MATLAB and we used a dedicated dynamic programming solver called DynaProg [11] to obtain optimal control strategies with the cost functional formulated in Eq. 7. For the driving cycle, a combination of the Artemis Urban, Artemis Rural Road and Artemis Motorway 130 cycles was used as shown in Fig. 1, with a total length of 51 kilometers and duration of 52 minutes. Figure 1: The simulated driving cycle. With this framework, we developed four different cases by tuning the cost functional. In the first case, we set all driveability penalties to zero, considering fuel economy only as our objective. In the remaining three cases, we considered fuel economy and one driveability penalty at a time, disregarding the other two. In the remainder of this section, these strategies will be referred to as: a) fuel-optimal: no penalty terms for driveability are considered. b) gear shift penalty: fuel-optimal with a penalty term for gear shifting. c) engine start penalty: fuel-optimal with a penalty term for engine starts. d) torque reserve penalty: fuel-optimal with a penalty term for torque reserve. The fuel-optimal strategy produced a fuel economy of 4.58 l/100km, an average of 18 gear shifts per minute and 3.1 engine starts per minute, with an average torque reserve of 58.3 %. The penalty factors for the other strategies were tuned with three separate parameter sweeps to obtain a sensible trade-off between fuel economy and each driveability objective. In particular, we aimed at less than one gear shift per minute for strategy b), less than 0.67 engine starts per minute for strategy c) and an average torque reserve of at least 65 % for strategy d). The corresponding fuel consumption increase for each strategy is reported in Table 2. Fig. 2 shows the engine operating points throughout the mission for the four different strategies, color-coded based on the adopted operating mode.
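To make the structure of the running cost in Eqs. (7)-(10) concrete, the following is a minimal Python sketch of the stage cost evaluation; the function name, the default penalty factors and the signal names are illustrative assumptions and do not reproduce the actual DynaProg implementation.

```python
# Minimal sketch of the four-term running cost of Eqs. (7)-(10).
# Penalty factor values and signal names are illustrative assumptions.
def stage_cost(m_dot_fuel, dt, gear, gear_prev, alpha_eng, eng_on_prev,
               T_pwt, T_pwt_max, T_req, v_veh,
               phi_gear=0.1, phi_start=0.5, phi_tres=0.01):
    L = m_dot_fuel * dt                      # fuel consumed over the step, Eq. (7)
    if gear != gear_prev:                    # gear shift penalty, Eq. (8)
        L += phi_gear
    if alpha_eng > 0 and eng_on_prev == 0:   # engine start penalty, Eq. (9)
        L += phi_start
    if T_req > 0 and v_veh > 0:              # torque reserve penalty, Eq. (10)
        L += phi_tres * T_pwt / T_pwt_max
    return L
```

In a dynamic programming sweep, this cost is accumulated over all \(N\) time steps for each admissible state-control pair; setting the three penalty factors to zero recovers the fuel-optimal strategy a).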
As expected, the engine tends to work near the optimal operating line (OOL) for the fuel-optimal control strategy. Introducing the gear shift penalty in b), the most notable difference is that the pure thermal operating points are now concentrated into two distinct and narrower speed ranges. These points are operated with the third and fourth gear engaged; clearly, the unconstrained strategy in a) uses frequent shifting between these two to move more points closer to the OOL. Considering the engine start penalty in c), we can note an increased usage of the pure thermal mode and a decrease in the usage of power-split mode, which is also evident from Table 3. In particular, this strategy makes a wider use of pure electric mode during the Urban phase of the driving cycle, discharging the battery, and uses the Rural Road phase to charge the battery back up; this is clearly visible from the state of charge profiles in Fig. 3. Still, the areas where the engine operating points concentrate remain similar. Finally, the torque reserve penalty in d) generates a large number of pure thermal points in the low-speed region of the map. These points provide a good trade-off between fuel economy and sportiness, because they are concentrated along the OOL and at the same time they leave the full torque of the e-machine, in its constant torque region, available. \begin{table} \begin{tabular}{l l l l l} \hline & Fuel economy & Gear shifts & Engine starts & Torque reserve \\ & & \#/min & \#/min & \% \\ \hline a) fuel-optimal & 4.58 l/100km & 18 & 3.1 & 58.3 \% \\ b) gear shift penalty & +1.6 \% & 0.93 & 2.7 & 60.7 \% \\ c) engine start penalty & +3.2 \% & 14 & 0.67 & 55.6 \% \\ d) torque reserve penalty & +2.5 \% & 16.4 & 3.3 & 65.8 \% \\ \hline \end{tabular} \end{table} Table 2: Performance of the four strategies. Figure 3: Comparison of the battery state of charge profile with the four strategies. Figure 2: Comparison of engine operating maps with four different cost functions. We now turn our attention to gear shift behavior in Fig. 4, which shows how the engaged gears relate to the vehicle speed and engine power; this is a typical analysis tool when designing gear shift schedules for automated transmissions. Note that only hybrid modes are represented, i.e. pure electric points are not depicted. Considering the fuel-optimal strategy in a), we observe that a clear shifting pattern emerges as the operating points are neatly separated based on the engaged gear. We also note that the first gear is almost never engaged, as low speed operation is driven almost exclusively in pure electric. Introducing a gear shift penalty in b), however, complicates the shifting behavior. Although it is still possible to identify preferred areas for each gear, there are significant overlaps such as the third and fourth gear being engaged in the area previously reserved for the fifth gear at several speeds. This is likely a consequence of the strategy having to sometimes operate in a non-efficient way in order to limit the number of gear shifts. The strategy with a penalty for engine starts in c) instead shows a more regular shifting pattern; the most notable difference with respect to the fuel-optimal strategy is a reduced usage of the fifth gear, which is mostly engaged at high power; further inspection revealed that these points were engaged in battery charging mode.
Also noticeable is an increased usage of the third gear at higher power; these points correspond to the additional pure thermal operating points. Finally, introducing the torque reserve penalty in d) generated a larger concentration of operating points at high power and high speed for the fourth and fifth gear, which correspond to the additional pure thermal and battery charging points that we previously observed in Fig. 2. ## 5 Conclusions In this work, we implemented dynamic programming to investigate the effect of three different driveability constraints on the optimal energy management strategy for a p2 parallel hybrid. The constraints were implemented by adding three different penalty terms to the base cost of the optimal control problem, which is fuel consumption. \begin{table} \begin{tabular}{l l l l l} \hline & \multicolumn{1}{c}{Pure electric} & \multicolumn{1}{c}{Pure thermal} & \multicolumn{1}{c}{Power-split} & \multicolumn{1}{c}{Battery charging} \\ \hline a) fuel-optimal & 42.8 \% & 7.91 \% & 26.2 \% & 23.1 \% \\ b) gear shift penalty & 43.3 \% & 6.64 \% & 25.9 \% & 24.1 \% \\ c) engine start penalty & 61.8 \% & 11.4 \% & 8.42 \% & 18.4 \% \\ d) torque reserve penalty & 40.4 \% & 17.6 \% & 24.5 \% & 17.5 \% \\ \hline \end{tabular} \end{table} Table 3: Time shares spent in each operating mode with the four strategies. By testing each penalty term individually, we were able to assess the impact of each corresponding driveability aspect on a set of relevant features of the control strategy, such as the choice of engine operating points and the gear shift pattern. These considerations provide useful insight for the development of real-time, rule-based control strategies that minimize fuel consumption while preventing unrealistic and potentially damaging gear shifting and engine start/stop behavior, as well as targeting varying levels of sportiness.
2304.08824
A Dark Matter Probe in Accreting Pulsar-Black Hole Binaries
The accretion of dark matter (DM) into astrophysical black holes slowly increases their mass. The rate of this mass accretion depends on the DM model and the model parameters. If this mass accretion effect can be measured accurately enough, it is possible to rule out some DM models, and, with sufficient technology and the help of other DM constraints, possibly confirm one model. We propose a DM probe based on accreting pulsar-black hole binaries, which provide a high-precision measurement of binary orbital phase shifts induced by DM accretion into black holes, and can help rule out DM models and study the nature of DM.
Ali Akil, Qianhang Ding
2023-04-18T08:43:22Z
http://arxiv.org/abs/2304.08824v3
# A Dark Matter Probe in Accreting Pulsar-Black Hole Binaries ###### Abstract The accretion of dark matter (DM) into astrophysical black holes slowly increases their mass. The rate of this mass accretion depends on the DM model and the model parameters. If this mass accretion effect can be measured accurately enough, it is possible to rule out some DM models, and, with sufficient technology and the help of other DM constraints, possibly confirm one model. We propose a DM probe based on accreting pulsar-black hole binaries, which provide a high-precision measurement of binary orbital phase shifts induced by DM accretion into black holes, and can help rule out DM models and study the nature of DM. ## I Introduction Dark matter (DM) constitutes around 26% of the energy density in the Universe [1], deeply influencing cosmic evolution and shaping the large scale structure. With decades of observations and studies, the Lambda-cold dark matter (\(\Lambda\)CDM) model has become the standard model of Big Bang cosmology [2; 3; 4]. However, there still exist unsolved DM problems, i.e., discrepancies between theoretical predictions and observations, such as the core-cusp problem [5; 6; 7], the missing satellites problem [8; 9], etc., which push us to study the nature of DM. The difficulty in studying the nature of DM is its weak or absent interaction with baryonic matter; we are not able to observe it directly. This leaves space for a large number of theoretical models accounting for the gravitational effects of DM, like weakly interacting massive particles (WIMPs) [10; 11], ultralight DM [12], primordial black holes (PBHs) [13], modified gravity [14], etc. Some of these models have restricted parameter ranges due to observational constraints [15; 16; 17]. However, most of the models are not completely ruled out and are waiting to be examined by future DM probes. To develop a DM probe to study the nature of DM, the accretion of DM into a black hole (BH) is a potential channel, where various DM models contribute different accretion rates and result in distinct BH mass increments after a long enough timescale. Such an accretion effect of DM has been well studied in a number of scenarios for understanding DM properties [18; 19; 20; 21; 22]; however, due to the slow accretion rate of DM, a measurable DM accretion effect needs a supermassive host astrophysical object, which has a complex surrounding matter environment that causes difficulty in extracting the DM information. The introduction of a pulsar orbiting a DM-accreting BH changes the story. The pulsar can emit stable pulse signals, which provide a high time-resolution measurement of its surrounding environment [23]. This high-precision measurement can even capture the mass loss of a pulsar due to electromagnetic radiation [24]. When a pulsar rotates around an accreting BH, the cumulative DM accretion slowly increases the BH mass and influences the orbital evolution of the pulsar-black hole (PSR-BH) binary. This deviation from a standard general-relativistic orbital evolution would produce an orbital phase shift, which can be detected in pulse timing after the accretion accumulates over a long time. Since accretion effects in various DM models are different, the detected orbital phase shift induced by DM accretion corresponds to the DM model and its parameters, and can thus work as a DM probe for ruling out DM models and studying the nature of DM.
Although we have not observed any PSR-BH binaries so far, numerous studies have estimated that the number of PSR-BH binaries inside the Milky Way is around \(\mathcal{O}(10)-\mathcal{O}(1000)\) [25; 26]. With the improvement of sensitivity in radio telescopes, such as the Five-hundred-meter Aperture Spherical radio Telescope (FAST) [27], MeerKAT [28] and the future Square Kilometre Array (SKA) [29], PSR-BH binaries are expected to be observed in the near future. Also, gravitational waves (GWs) provide another window for detection. Low-frequency GW detectors, like the Laser Interferometer Space Antenna (LISA) [30], have high sensitivity in detecting GWs from small mass ratio binaries, which could be PSR-BH binaries. A joint observation by radio telescopes and GW detectors can enhance the possibility of detecting PSR-BH binaries. This paper is organized as follows. In Sec. II, we give a brief introduction to the DM accretion rate into an astrophysical BH, including the accretion rate of WIMPs in Sec. II.1, the accretion rate of a hot particle DM model in Sec. II.2, the accretion rate of ultralight DM in Sec. II.3, a brief estimation of the accretion rate of PBHs in Sec. II.4, and a comment on baryonic matter accretion about how and in what cases it affects our proposal in Sec. II.5. Then in Sec. III, we discuss pulse timing in PSR-BH binaries, including using PSR-BH binaries to study new phenomena in Sec. III.1 and studying the BH mass-changing effect in PSR-BH binaries in Sec. III.2. Finally, in Sec. IV, we numerically calculate the mass accretion effect in PSR-BH binaries and use it to constrain DM models, including constraining WIMPs in Sec. IV.1, ultralight DM in Sec. IV.2, and PBHs in Sec. IV.3. ## II Dark Matter Accretion Accretion into stellar objects, particularly BHs, has been studied for a long time. In 1952, the Bondi accretion formula was derived for a non-relativistic gas cloud [31]. Then in the framework of General Relativity, Michel derived a formula for hot gas accretion [32]. Unruh, on the other hand, worked out the case for scalar fields by solving the Klein-Gordon equation in a Schwarzschild BH background geometry [33]. The reader will notice that different DM models obey different accretion formulas, which will lead to different accretion rates. For example, WIMPs obey Bondi's formula, whereas hot particle DM, being relativistic, will obey Michel's formula, etc. In each of the coming subsections we will briefly introduce one accretion formula. All the accretion rates basically scale like the square of the BH mass \(M_{B}^{2}\), but the constant which multiplies it varies heavily from one DM model to another. The accreting BH that we consider is a Schwarzschild BH. ### Weakly Interacting Massive Particles WIMPs were until recently the most popular DM candidate. WIMPs are beyond-standard-model particles that interact only very weakly except for their gravitational field. The WIMP mass can be anywhere between \(2\,\mathrm{GeV}\) and \(100\,\mathrm{TeV}\) [34]. The accretion of WIMPs by BHs is captured by the well-known Bondi accretion formula [31]. The Bondi accretion assumes a spherically symmetric stellar object stationarily accreting a cloud of non-relativistic matter particles. For practical reasons, we will refer to the form in which it is presented in [35] (for a detailed derivation, see Appendix
A), where the equation reads, \[\frac{\mathrm{d}M_{\mathrm{B}}}{\mathrm{d}t}=4\pi\lambda_{B}(GM_{\mathrm{B}})^{2}\frac{\rho_{\infty}}{\gamma^{\frac{3}{2}}\,\Theta_{\infty}^{\frac{3}{2}}c^{3}}\, \tag{1}\] with \[\Theta=\frac{k_{B}T}{mc^{2}}=\frac{c_{s}^{2}}{\gamma c^{2}}\, \tag{2}\] being the dimensionless temperature. Here, the physical constants \(G\), \(c\), \(k_{B}\) are Newton's constant, the speed of light, and the Boltzmann constant, respectively. \(\rho_{\infty}\) is the density at infinity, where infinity in this context refers to what is relevant on such an astrophysical scale rather than, for example, a cosmological scale. \(M_{\mathrm{B}}\) is the mass of the accreting BH, \(m\) is the WIMP mass and \(\gamma\) is the so-called polytropic constant, characterizing a polytropic fluid of pressure \(P\) and density \(\rho\) for which \(P\sim\rho^{\gamma}\). Moreover, \(\lambda_{B}=\frac{1}{4}\left(\frac{2}{5-3\gamma}\right)^{\frac{5-3\gamma}{2(\gamma-1)}}\). \(c_{s}\) is the sound speed of DM 1, and it is constrained by the rotation curve of the Milky Way as \(c_{s}<10^{-4}c\) [37], which puts an upper bound on the dimensionless temperature, \(\Theta<\mathcal{O}(10^{-8})\). As one would expect, the colder (smaller dispersion) the DM is, the higher the accretion rate. Moreover, the heavier it is (within the range of validity of the Bondi accretion), the higher its accretion rate. Footnote 1: An ideal CDM model is collisionless with a zero sound speed; however, CDM particles still have a non-zero velocity dispersion (see [36] for details), which causes a free streaming away from gravitational collapse and produces an effective sound speed. ### Hot Particle Dark Matter For the hot DM case, general relativistic effects are quite important, therefore we deal with Lorentz covariant quantities and have a full general relativistic treatment. This is captured by the Michel accretion formula [32], which has the same form as Bondi's, with a different \(\lambda_{M}\neq\lambda_{B}\), a smaller mass and higher temperature, causing a much smaller accretion rate (for a detailed derivation, see Appendix B). \[\frac{\mathrm{d}M_{\mathrm{B}}}{\mathrm{d}t}=4\pi\lambda_{M}(GM_{\mathrm{B}})^{2}\frac{\rho_{\infty}}{\gamma^{\frac{3}{2}}\,\Theta_{\infty}^{\frac{3}{2}}c^{3}}. \tag{3}\] Here \(\lambda_{M}\), unlike in the Bondi accretion, also depends on the sound speed of the medium, but in general varies between 1 and 2 [35]. Since this accretion rate is extremely small and cannot be observed with the method we propose, we will not elaborate on it. It is worth noting, however, that this model is one of those that would be favoured by the experiment we propose if the measurement showed no growth in the astrophysical BH mass. ### Ultralight Dark Matter Recently, scalar field models of DM have arguably become the most popular. The ultralight DM model, with a small mass between \(10^{-24}\,\mathrm{eV}\) and \(1\,\mathrm{eV}\), is believed to solve the small-scale problems of CDM [12]. The accretion rate of ultralight DM is derived from the Klein-Gordon equation on a non-rotating BH background [33].
The mass accretion rate by a non-rotating BH of mass \(M_{\mathrm{B}}\) traveling with velocity \(v\) through uniformly distributed ultralight DM of mass \(m_{\mathrm{ul}}\) and density \(\rho_{\mathrm{DM}}\) can be expressed as follows, \[\frac{\mathrm{d}M_{\mathrm{B}}}{\mathrm{d}t}=\frac{32\pi^{2}(GM_{\mathrm{B}})^{3}m_{\mathrm{ul}}\rho_{\mathrm{DM}}}{\hbar c^{3}v[1-\exp(-\xi)]}\, \tag{4}\] where \(\xi\) is defined as \(\xi\equiv 2\pi GM_{\rm B}m_{\rm ul}/\hbar v\) and \(\hbar\) is the reduced Planck constant. ### Primordial Black Holes Apart from ultralight DM with mass smaller than \(1\,\)eV and WIMPs with mass between \(2\,\)GeV and \(100\,\)TeV, some ultraheavy objects can also be DM candidates, in particular, primordial black holes (PBHs). PBHs were first introduced in [13], which proposed that primordial perturbations could collapse to a BH, and PBHs were later used in accounting for massive astrophysical compact halo objects (MACHOs) [38; 39]. As PBHs are much heavier than the other DM models that we have considered above, the way they can be accreted by astrophysical BHs differs radically. Considering PBH masses above \(1\,M_{\odot}\), the accretion of a PBH by an astrophysical BH behaves like a binary evolution, and its accretion rate is related to the merger rate of this binary system, which can be estimated with the mean free path of the astrophysical BH and its moving velocity. Given the amount of DM in our galaxy, if we distribute it into PBHs rather than the other types of DM, the PBHs would be widely separated from each other, due to their large mass. Here, we assume the PBH mass function is monochromatic. Then, the mean free path \(l_{f}\) in this system depends on the cross section of BHs \(\sigma\simeq 27\pi(GM_{\rm B}/c^{2})^{2}\) [40] and the number density of PBHs \(n=\rho_{\rm DM}/M_{\rm PBH}\), and can be expressed as follows, \[l_{f}=\frac{1}{\sigma n}=\frac{1}{27\pi}\left(\frac{c^{2}}{GM_{\rm B}}\right)^{2}\frac{M_{\rm PBH}}{\rho_{\rm DM}}. \tag{5}\] Then, the mean free time for this astrophysical BH can be approximated as \(t_{f}\sim l_{f}/v\), where \(v\) is the relative velocity between the astrophysical BH and PBHs. The PBH accretion rate can be evaluated as follows, \[\frac{{\rm d}M_{\rm B}}{{\rm d}t}\simeq\frac{M_{\rm PBH}}{t_{f}}\simeq 27\pi(GM_{\rm B})^{2}\frac{\rho_{\rm DM}v}{c^{4}}. \tag{6}\] ### Comment on Baryonic Matter Accretion One important thing to take into account is the accretion of baryonic matter. A detailed and careful account is certainly needed; however, as we keep the precise computation for a future project, we here discuss only the magnitude of that accretion. First, one should notice that if the PSR-BH binary is found outside the galactic disk, the baryonic matter there is very scarce and baryonic accretion can be comfortably ignored. On the other hand, if the binary is in the galactic disk, we might expect a significant baryonic matter accretion. However, since we are comparing the behaviour of the binary for different DM models, and baryonic matter accretion is the same in each case, its importance in the process is reduced. It will contribute as an added term to the BH mass in both of the cases that we will be comparing, which decreases its significance. Finally, an important point is that in our galaxy the baryonic matter is in its great majority in stars (around \(90\%\), see [41], page 2). Therefore, we assume that if a whole star is swallowed by a Milky Way BH we will be able to see that.
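As a rough numerical check of the three accretion channels above (WIMPs, ultralight DM, PBHs), the following Python sketch evaluates Eqs. (1), (4) and (5) in SI units; the chosen parameter values (BH mass, local DM density, relative velocity) are illustrative assumptions taken from the case studies later in the text.

```python
import numpy as np

# Rough evaluation of Eqs. (1), (4) and (5) in SI units.
# All parameter values are illustrative assumptions.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
Msun, eV, yr, pc = 1.989e30, 1.602e-19, 3.156e7, 3.086e16

M_B = 100 * Msun                 # accreting BH mass
rho = 0.013 * Msun / pc**3       # DM density near the Sun (NFW)
v = 2e5                          # relative velocity, ~200 km/s

# WIMPs, Bondi rate (Eq. 1); gamma = 5/3 gives lambda_B = 1/4
gamma, Theta, lam_B = 5.0 / 3.0, 1e-12, 0.25
dM_wimp = 4 * np.pi * lam_B * (G * M_B)**2 * rho / (gamma**1.5 * Theta**1.5 * c**3)

# Ultralight DM (Eq. 4) for a BH moving through the scalar field
m_ul = 1e-22 * eV / c**2         # ultralight DM mass in kg
xi = 2 * np.pi * G * M_B * m_ul / (hbar * v)
dM_ul = 32 * np.pi**2 * (G * M_B)**3 * m_ul * rho / (hbar * c**3 * v * (1 - np.exp(-xi)))

# PBHs (Eq. 5): mean free time between captures
M_pbh = 1 * Msun
t_f = (c**2 / (G * M_B))**2 * M_pbh / (27 * np.pi * rho) / v

print(f"WIMP Bondi rate    : {dM_wimp * yr / Msun:.2e} Msun/yr")
print(f"Ultralight DM rate : {dM_ul * yr / Msun:.2e} Msun/yr")
print(f"PBH mean free time : {t_f / yr:.2e} yr")
```

The steep hierarchy between these numbers drives the rest of the analysis: the Bondi channel dominates for cold WIMPs, while the PBH mean free time far exceeds any observational duty time.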
## III Pulsar-Black Hole Binaries ### Background The PSR-BH binary system is the holy grail in radio astronomy, since the stable pulse signals emitted from the pulsar can provide high-precision measurements of the strong gravity field around the BH [42]. Due to the motion of the pulsar around the BH, the pulse signals are affected by the periodic motion of the pulsar and the gravitational field in the binary, and such effects contribute three different kinds of time delay in measuring the orbital Time-of-Arrival (TOA) of the received pulse signals [43] as follows, \[\Delta_{\rm orb}{\rm TOA}=\Delta_{R}+\Delta_{E}+\Delta_{S}. \tag{7}\] Here, \(\Delta_{R}\) is the Römer delay, which describes the light travel time across the binary orbit. \(\Delta_{E}\) is the Einstein delay, which is the general-relativistic time dilation in the PSR-BH gravitational field. \(\Delta_{S}\) is the Shapiro delay, which is the extra travel time of light in curved spacetime. After measuring the TOAs of the received pulses, we can use them to fit a given model by minimizing the timing residual between data and model predictions (see the TEMPO2 program [44] for details). This gives five Post-Keplerian (PK) parameters, which can help determine the masses of the binary components and the orbital parameters of the binary. With these PK parameters, new phenomena can be tested in a given binary model. A new phenomenon produces a different gravitational effect, which could influence the orbit evolution in a binary, and such an orbit deviation would produce a significant orbital phase shift, which could be detected after a long enough observation time. In the general relativistic background, the orbital phase shift \(\Delta\phi\) can be calculated as follows, \[\Delta\phi(t)=\int_{0}^{t}f(\tau)d\tau-\int_{0}^{t}f_{\rm GR}(\tau)d\tau\, \tag{8}\] where \(f_{\rm GR}\) is the orbital frequency in general relativity without new phenomena and \(f\) is the orbital frequency with new phenomena. In order to ensure the detection of an orbital phase shift induced by new phenomena, the measurement uncertainty of the orbital phase shift \(\sigma_{\Delta\phi}\) should be smaller than the measured \(\Delta\phi\). The uncertainty of the orbital phase shift is determined by the single measurement error and the number of independent measurements. Assuming an observation time per day of \(t_{\rm obs}\simeq 10\,\)hrs, the orbital phase error in a single continuous measurement is given by the orbital phase measurement error within one orbital period, \(\sigma_{\phi}\), divided by the number of orbital periods within one continuous observation, \(N_{P_{\rm b}}\), which is \(\sigma_{\phi}/N_{P_{\rm b}}\). Here \(\sigma_{\phi}\) can be maximally estimated as the pulse period \(P\) divided by the orbital period \(P_{\rm b}\), and \(N_{P_{\rm b}}\) equals \(t_{\rm obs}/P_{\rm b}\). In calculating the number of independent measurements, we assume the observation runs once per day, so the number of independent measurements within observation time \(t\) is \(t/1\,\)day. Then the uncertainty of the orbital phase shift can be calculated as follows (also see [45; 46]), \[\sigma_{\Delta\phi}=\frac{1}{\sqrt{t/1\,{\rm day}}}\frac{P}{t_{\rm obs}}.
\tag{9}\] The detection of new phenomena in PSR-BH binaries requires two conditions: one is that the orbital phase shift within the observation time should be larger than its measurement uncertainty, and the other is that the observation time for the orbital phase shift cannot be longer than the duty time of the radio telescope \(T_{\rm duty}\) and the merger time of the binary \(T_{\rm merger}\). The conditions can be expressed as follows, \[|\Delta\phi(t)|>\sigma_{\Delta\phi}(t)\,\quad t<\min(T_{\rm duty},T_{\rm merger}). \tag{10}\] ### Mass Changing Binaries The physical background of the PSR-BH binary is very complex; it includes baryonic matter and DM. Such a surrounding matter environment could cause a matter accretion effect around the BH, which slowly increases the mass of the BH (the accretion effect around the pulsar is neglected: due to its relatively small mass, the ratio of accretion rates between the pulsar and the BH would be smaller than \(\mathcal{O}(1\%)\) if the BH mass is larger than \(10\,M_{\odot}\)). Although this matter accretion effect is extremely weak, it can still induce a detectable orbital phase shift in PSR-BH binaries after a long enough observation time. This orbital phase shift depends on the surrounding matter density and matter properties, especially DM properties. Various DM models and model parameters would produce different orbital phase shifts in the PSR-BH binary, which can be used to distinguish DM models and constrain the parameter regions of DM within a model. In this scenario, studying the orbital evolution of the PSR-BH binary with a changing BH mass is essential. During the orbital evolution of the PSR-BH binary, the gravitational radiation takes gravitational energy and angular momentum away from the binary system. Following [47], the radiated power and angular momentum loss are calculated as follows, \[P =\frac{G}{5c^{5}}\left(\frac{{\rm d}^{3}Q_{ij}}{{\rm d}t^{3}}\frac{{\rm d}^{3}Q_{ij}}{{\rm d}t^{3}}-\frac{1}{3}\frac{{\rm d}^{3}Q_{ii}}{{\rm d}t^{3}}\frac{{\rm d}^{3}Q_{jj}}{{\rm d}t^{3}}\right)\,\] \[\frac{{\rm d}L_{i}}{{\rm d}t} =-\frac{2G}{5c^{5}}\epsilon_{ijk}\frac{{\rm d}^{2}Q_{mj}}{{\rm d}t^{2}}\frac{{\rm d}^{3}Q_{mk}}{{\rm d}t^{3}}. \tag{11}\] Here, \(G\) is Newton's constant, \(c\) is the speed of light, \(\epsilon_{ijk}\) is the three-dimensional Levi-Civita symbol, and \(Q_{ij}\) is a tensor defined as \(Q_{ij}=\sum_{\alpha}m_{\alpha}x_{\alpha i}x_{\alpha j}\); its form in a binary system can be expressed as \(Q_{xx}=\mu d^{2}\cos^{2}\phi\), \(Q_{yy}=\mu d^{2}\sin^{2}\phi\) and \(Q_{xy}=Q_{yx}=\mu d^{2}\sin\phi\cos\phi\), where \(\mu\) is the reduced mass \(m_{1}m_{2}/(m_{1}+m_{2})\), \(d\) is the distance between the components of the binary and \(\phi\) is the orbital phase of the binary. In Eq. (11), the magnitude of the gravitational radiation is determined by the time derivatives of \(Q_{ij}\), to which a changing BH mass contributes. Such a changing BH mass not only contributes to the gravitational radiation, but also influences the gravitational potential energy of the binary. Due to the accretion of DM into the BH, gravitational potential energy is transferred from the DM to the PSR-BH binary; the overall contribution of the gravitational potential energy can be estimated as follows (a detailed derivation can be found in Appendix D), \[\frac{{\rm d}E_{\rm p}}{{\rm d}t}=-\frac{Gm_{\rm p}}{a}\frac{{\rm d}M_{\rm B}}{{\rm d}t}\, \tag{12}\] where \(a\) is the semi-major axis of the binary, \(m_{\rm p}\) is the mass of the pulsar and \(M_{\rm B}\) is the BH mass.
The total energy \(E\) and angular momentum \(L\) of the PSR-BH binary are related to its orbital parameters, the semi-major axis \(a\) and eccentricity \(e\), as follows [47], \[a=-\frac{Gm_{\rm p}M_{\rm B}}{2E}\,\quad L^{2}=\frac{Gm_{\rm p}^{2}M_{\rm B}^{2}}{m_{\rm p}+M_{\rm B}}a(1-e^{2}). \tag{13}\] Combining Eqs. (11-13), the time derivatives of the orbital parameters, \(da/dt\) and \(de/dt\), can be numerically solved from energy conservation and angular momentum conservation, which can be expressed as follows, \[\frac{{\rm d}E}{{\rm d}t}=-P_{\rm acc}+\frac{{\rm d}E_{p}}{{\rm d}t}\,\quad\frac{{\rm d}L}{{\rm d}t}=\frac{{\rm d}L_{\rm acc}}{{\rm d}t}. \tag{14}\] Here, \(P_{\rm acc}\) is the power of the gravitational radiation emitted from the PSR-BH binary with DM accretion into the BH, and \({\rm d}L_{\rm acc}/{\rm d}t\) is the time derivative of the angular momentum with the DM accretion effect. After obtaining the time derivative of the semi-major axis \({\rm d}a/{\rm d}t\), the time derivative of the orbital frequency \({\rm d}f/{\rm d}t\) can be calculated from Kepler's third law as follows, \[\frac{{\rm d}f}{{\rm d}t}=\frac{1}{4\pi}\frac{a^{-5/2}G^{1/2}}{(m_{\rm p}+M_{\rm B})^{1/2}}\left(a\frac{{\rm d}M_{\rm B}}{{\rm d}t}-3(m_{\rm p}+M_{\rm B})\frac{{\rm d}a}{{\rm d}t}\right). \tag{15}\] Meanwhile, the orbital frequency without the DM accretion follows the standard general relativistic evolution, which gives its time derivative as follows, \[\frac{\mathrm{d}f_{\mathrm{GR}}}{\mathrm{d}t}=-\frac{3}{4\pi}\frac{G^{1/2}(m_{\mathrm{p}}+M_{\mathrm{B}})^{1/2}}{a_{\mathrm{GR}}^{5/2}}\frac{\mathrm{d}a_{\mathrm{GR}}}{\mathrm{d}t}. \tag{16}\] Here, the subscript GR denotes that the evolution of the physical quantity follows the standard general relativistic evolution in [47]. Then the corresponding frequency evolutions \(f(t)\) and \(f_{\mathrm{GR}}(t)\) can be numerically solved and, following Eq. (8), the orbital phase shift \(\Delta\phi\) induced by the DM accretion onto the BH can be obtained. To ensure a detection of the DM accretion into the BH, Eq. (10) needs to be satisfied. ## IV Dark matter accretion in PSR-BH binaries ### Weakly Interacting Massive Particles For WIMPs, we use Eq. (1) to calculate the effect of their accretion on PSR-BH binaries. It shows that the DM accretion rate into BHs depends on the DM density, the BH mass and the dimensionless temperature. Since the measurable pulsar systems live in the Milky Way, the DM density follows a galactic DM density profile, which we describe with the Navarro-Frenk-White (NFW) model [48; 49], \[\rho(r)=\frac{\rho_{0}}{\frac{r}{r_{0}}(1+\frac{r}{r_{0}})^{2}}\, \tag{17}\] where \(\rho_{0}\) is the characteristic density and \(r_{0}\) is the scale length. By fitting the rotation curve of the Milky Way, the best-fit parameters are \(\rho_{0}=0.052\,M_{\odot}/\mathrm{pc}^{3}\), \(r_{0}=8.1\,\mathrm{kpc}\) [50]. In a practical calculation, we take the position of the PSR-BH binary to be \(10\,\mathrm{kpc}\) away from the center of the Milky Way. Following the process introduced in Sec. III, the evolution of the orbital phase shift can be obtained, as shown in Fig. 1. We find that the detectable timescale for WIMP (\(\Theta\sim 10^{-12}\)) accretion onto a \(100\,M_{\odot}\) BH in the PSR-BH binary system is \(\mathcal{O}(10)\,\mathrm{years}\). A smaller \(\Theta\) represents a lower temperature of WIMPs or a heavier WIMP mass, which could effectively increase the accretion rate in Eq. (1) and enlarge the orbital phase shift in PSR-BH binaries.
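The following Python sketch illustrates this procedure for the simplest case of a circular (\(e=0\)) orbit: it integrates the semi-major axis with and without a generic accretion law \(\dot{M}_{\rm B}=f_{\rm acc}M_{\rm B}^{2}\) and accumulates the phase shift of Eq. (8), checking it against the uncertainty of Eq. (9). The value of \(f_{\rm acc}\), the step size and the simple forward-Euler integration are illustrative assumptions; the full calculation in the text also evolves the eccentricity.

```python
import numpy as np

# Sketch of the phase-shift calculation of Eqs. (8) and (13)-(16) for a
# circular (e = 0) orbit; f_acc, dt and forward-Euler are assumptions.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
yr, day = 3.156e7, 8.64e4

m_p, M0 = 1.6 * Msun, 100 * Msun
f_orb0 = 0.005                                 # orbital freq. for 0.01 Hz GW
a0 = (G * (m_p + M0) / (2 * np.pi * f_orb0)**2) ** (1 / 3)
f_acc = 2e-49     # dM/dt = f_acc*M^2 in 1/(kg s); roughly Bondi for Theta ~ 1e-12

def accumulated_phase(accrete, T=10 * yr, dt=day):
    a, M, phase = a0, M0, 0.0
    for _ in range(int(T / dt)):
        P_gw = 32 * G**4 * m_p**2 * M**2 * (m_p + M) / (5 * c**5 * a**5)
        dM = f_acc * M**2 if accrete else 0.0
        # energy balance dE/dt = -P_gw + dE_p/dt, Eqs. (12)-(14), at e = 0
        dadt = -2 * a**2 * P_gw / (G * m_p * M) - (a / M) * dM
        phase += np.sqrt(G * (m_p + M) / a**3) / (2 * np.pi) * dt   # cycles
        a, M = a + dadt * dt, M + dM * dt
    return phase

dphi = accumulated_phase(True) - accumulated_phase(False)
sigma = (1 / np.sqrt(10 * yr / day)) * 1e-3 / (10 * 3600)   # Eq. (9), P = 1 ms
print(f"phase shift {dphi:.2e} cycles vs. uncertainty {sigma:.2e} cycles")
```

With these assumed numbers the accumulated shift exceeds the timing uncertainty by several orders of magnitude, consistent with the \(\mathcal{O}(10)\)-year detectability discussed above.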
In order to obtain a detectable parameter region for the BH mass and dimensionless temperature, we follow Eq. (10) in constraining the parameters, where we assume the pulsar mass is \(1.6\,M_{\odot}\), the pulse period is \(1\,\mathrm{ms}\) and the duty time of the radio telescope is set to \(10\,\mathrm{years}\). The result is shown in Fig. 2. We find that a larger BH mass and a higher orbital frequency extend the range of detectable dimensionless temperatures. This is because a larger BH mass and a higher orbital frequency increase the power of the GW radiation, which speeds up the shrinkage of the orbit of the PSR-BH binary. Then, the orbital phase shift induced by the WIMP accretion onto the BH is enlarged. Also, for a larger eccentricity (dashed curves in Fig. 2), stronger GWs are emitted, which increases the orbital phase shift and extends the observable \(\Theta\) range. The lower bounds of the shaded regions are given by the condition \(\Delta\phi(T_{\mathrm{duty}})=\sigma_{\Delta\phi}(T_{\mathrm{duty}})\), while the upper bounds are given by the condition \(\Delta\phi(T_{\mathrm{merger}})=\sigma_{\Delta\phi}(T_{\mathrm{merger}})\). Therefore, we find that even though a large BH mass could improve the detectable range of \(\Theta\), its short merger time prohibits a large cumulative orbital phase and decreases the detectable range of \(\Theta\). Figure 2: The detectable regions for dimensionless temperature \(\Theta\) with different BH masses in PSR-BH binaries. We assume the pulsar mass is \(1.6\,M_{\odot}\), the pulse period is \(1\,\mathrm{ms}\) and the duty time of the radio telescope is \(10\,\mathrm{years}\). The blue (red) shaded regions represent the result from PSR-BH binaries with GW frequency \(10^{-2}\,\mathrm{Hz}\) (\(10^{-3}\,\mathrm{Hz}\)). The solid (dashed) regions represent the result from PSR-BH binaries with eccentricity \(e=0\) (\(e=0.6\)). Figure 1: The evolution of the orbital phase shift in the PSR-BH binary. We assume that the mass of the BH is \(100\,M_{\odot}\), the mass of the pulsar is \(1.6\,M_{\odot}\), the initial detected GW frequency is \(0.01\,\mathrm{Hz}\) and the eccentricity is \(e=0\) (\(e=0.6\)) for solid curves (dashed curves). The gray shaded regions are detectable orbital phase shift ranges for pulsars with pulse periods of \(1\,\mathrm{ms}\) and \(100\,\mathrm{ms}\), respectively. The different colors of the curves denote dimensionless temperatures of WIMPs of \(10^{-11}\), \(10^{-12}\) and \(10^{-13}\), respectively. In this numerical calculation of the orbital phase shift, we use Eqs. (15) and (16), which describe the orbital evolution in the non-relativistic limit. This limit is only valid in the inspiral phase of the PSR-BH binary, so an upper bound on the orbital frequency during the inspiral phase should be imposed when calculating the orbital phase shift. This maximal inspiral-phase frequency can be estimated as \(f_{\rm insp}=(a\eta^{2}+b\eta+c)/(2\pi GM)\) [51, 52, 53]. \(\eta\) is the symmetric mass ratio, defined as \(\eta\equiv m_{\rm p}M_{\rm B}/M^{2}\) in the PSR-BH binary, \(M\equiv m_{\rm p}+M_{\rm B}\), and the coefficients are \(a=0.29740\), \(b=0.04481\), \(c=0.09556\) (see Table 1 of [54]). For a larger detectable \(\Theta\) range, we can move the location of the PSR-BH binary to a position with higher DM density, where the mass accretion rate is effectively enhanced; this increases the orbital phase shift of the binary and enlarges the detectable window of \(\Theta\).
We consider the PSR-BH binary with \(0.01\,\rm Hz\) GW frequency, circular orbit and \(1\,\rm ms\) pulse period. Within a \(10\,\rm years\) observation, the detectable range for \(\Theta\) is shown in Fig. 3. It clearly shows that a high-DM-density location near the center of the Milky Way can extend the detectable range of \(\Theta\), and the accretion effect on a \(1000\,M_{\odot}\) BH can extend the range of \(\Theta\) up to a value close to \(10^{-8}\), which is the upper bound of \(\Theta\) inside the Milky Way [37]. ### Ultralight Dark Matter To estimate the accretion rate of ultralight DM, we use Eq. (4), which gives the mass accretion rate by a non-rotating BH of mass \(M_{\rm B}\) traveling through a uniformly distributed scalar field. Because the accretion rate of ultralight DM is relatively weak compared with that of WIMPs, we mainly focus on its accretion at the center of the Milky Way. Following [55], we apply the central density and virial velocity of the soliton in Eq. (4), which gives the accretion rate, \[\frac{{\rm d}M_{\rm B}}{{\rm d}t}=\frac{2.5\,M_{\odot}}{10^{17}\,{\rm yr}}\left(\frac{M_{\rm B}}{\hat{M}_{\rm B}}\right)^{2}\left(\frac{m_{\rm ul}}{\hat{m}_{\rm ul}}\right)^{6}\left(\frac{M_{\rm sol}}{\hat{M}_{\rm sol}}\right)^{4}\, \tag{18}\] where the reference BH mass is \(\hat{M}_{\rm B}=100\,M_{\odot}\), the reference ultralight DM mass is \(\hat{m}_{\rm ul}=10^{-22}\,\rm eV\) and the reference soliton mass is \(\hat{M}_{\rm sol}=10^{10}\,M_{\odot}\). The accretion rate of ultralight DM scales as \(\dot{M}_{\rm B}\propto M_{\rm B}^{2}\), similar to the WIMP accretion rate; therefore, for a given parameter setting \((M_{\rm B},m_{\rm ul},M_{\rm sol})\), the time evolution of the orbital phase shift is similar to the behavior in Fig. 1. In order to find a detectable mass range of ultralight DM in PSR-BH binaries, we set the soliton mass of the Milky Way to \(10^{9}\,M_{\odot}\) [56] and follow the procedures introduced in Sec. III, using Eq. (10), where we assume the pulsar mass is \(1.6\,M_{\odot}\), the pulse period is \(1\,\rm ms\) and the duty time of the radio telescope is set to \(10\,\rm years\). The result is shown in Fig. 4. We find that a larger BH mass and a higher orbital frequency can extend the detectable mass range of ultralight DM, as in the previous discussion of WIMPs. The orbital phase shift induced by a larger ultralight DM mass can be detected in PSR-BH binaries over a wider BH mass range, because a larger ultralight DM mass effectively enhances the accretion rate in Eq. (18). Also, the contour in Fig. 4 is similar to that in Fig. 2; the difference is only a rescaling of the x-axis values. The reason is that the accretion rates for WIMPs and ultralight DM can both be generalized as \(\dot{M}_{\rm B}=f(\alpha)M_{\rm B}^{2}\), where \(\alpha\) is the parameter of the DM model, namely \(\Theta\) for WIMPs and \(m_{\rm ul}\) for ultralight DM. Each value of \(f(\alpha)\) relates to two different values of \(\Theta\) and \(m_{\rm ul}\) and corresponds to a specific accretion rate. Meanwhile, this accretion rate determines a BH mass range in PSR-BH binaries, which is the same BH mass range in the two DM models; this causes the rescaled x-axis in their contours. Figure 4: The detectable regions for the mass of ultralight DM \(m_{\rm ul}\) with different BH masses in PSR-BH binaries. The blue (red) shaded regions represent the result from PSR-BH binaries with GW frequency \(10^{-2}\,\rm Hz\) (\(10^{-3}\,\rm Hz\)). The solid (dashed) regions represent the result from PSR-BH binaries with eccentricity \(e=0\) (\(e=0.6\)). Figure 3: The detectable regions for dimensionless temperature \(\Theta\) with different locations of PSR-BH binaries. We assume the pulsar mass is \(1.6\,M_{\odot}\). The red (blue) shaded regions represent the results for a BH mass of \(100\,M_{\odot}\) (\(1000\,M_{\odot}\)).
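Since Eq. (18) is already written in terms of reference values, its steep mass dependence is easy to evaluate directly; the following short sketch (with parameter choices taken from the text) illustrates it.

```python
def ul_accretion_rate(M_B, m_ul, M_sol):
    """Eq. (18): accretion rate in Msun/yr.
    M_B in Msun, m_ul in eV, M_sol in Msun."""
    return (2.5 / 1e17) * (M_B / 100.0)**2 * (m_ul / 1e-22)**6 * (M_sol / 1e10)**4

# Milky Way soliton mass of 1e9 Msun as assumed in the text:
print(ul_accretion_rate(M_B=100.0, m_ul=1e-20, M_sol=1e9))   # ~2.5e-9 Msun/yr
```

The \(m_{\rm ul}^{6}\) scaling means that two orders of magnitude in the ultralight DM mass change the rate by twelve orders of magnitude, which is why the detectable window in Fig. 4 opens up only above roughly \(\mathcal{O}(10^{-20})\,\rm eV\).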
### Primordial Black Holes In order to estimate the magnitude of the PBH accretion rate, we can compare the PBH accretion rate in Eq. (6) with the WIMP accretion rate in Eq. (1). The parameters \(\lambda_{B}\), \(\gamma\) are around \(\mathcal{O}(1)\) in Eq. (1), so we have \[\frac{\dot{M}_{\rm B}^{\rm P}}{\dot{M}_{\rm B}^{\rm W}}\simeq\frac{27}{4}\frac{v}{c}\Theta^{3/2}. \tag{19}\] Here, we use the velocity in the rotation curve of the Milky Way [57] to approximate the relative velocity \(v\) between the BH and PBHs, which is around \(200\,\rm km/s\), and we set \(\Theta\sim 10^{-10}\), which is the largest detectable value in a \(10^{-2}\,\rm Hz\) PSR-BH binary. Then, the accretion rate ratio between the PBH model and WIMPs can be evaluated as \(\dot{M}_{\rm B}^{\rm P}/\dot{M}_{\rm B}^{\rm W}\sim\mathcal{O}(10^{-17})\). Such a small PBH accretion rate can hardly be detected in our PSR-BH binary systems. Actually, the accretion rate in Eq. (6) is a time-averaged accretion rate, and the mass increment due to accreting PBHs only occurs when the astrophysical BH interacts with a PBH. This interaction happens within a very short time interval, so a physical PBH accretion rate can be approximated as a summation of Dirac delta functions, \[\frac{\mathrm{d}M_{\rm B}}{\mathrm{d}t}=M_{\rm PBH}\sum_{n=1}^{\infty}\delta(t-nt_{f})\, \tag{20}\] where \(t_{f}\) is the mean free time between mergers of the accreting BH and a PBH. Therefore, this mean free time cannot be so long that such a merger event would not happen even once during the observational duty time. We can give a brief estimate of this mean free time by dividing the mean free path by the relative velocity between the accreting BH and PBHs. In Eq. (5), we assume the mass of the accreting BH is \(100\,M_{\odot}\), the PBH mass is \(1\,M_{\odot}\) and \(\rho_{\rm DM}=0.013\,M_{\odot}/\rm pc^{3}\), which is the DM density at the location of the Sun in the NFW model [50]. As above, the relative velocity between the accreting BH and PBHs is approximated by the velocity in the rotation curve, \(v\sim 200\,\rm km/s\). Then the mean free time is \(t_{f}=l_{f}/v\sim\mathcal{O}(10^{26})\,\rm years\), which is too long to be detected in a PSR-BH binary. ## V Conclusions To summarize, we propose a DM probe that can be used to study the nature of DM. This probe is based on the DM accretion in PSR-BH binaries. The DM accretion slowly increases the BH mass, and such a mass increment in a PSR-BH binary affects its gravitational radiation and decreases the gravitational potential energy, thus inducing an orbital phase shift in the orbital evolution of the PSR-BH binary. Various DM models contribute different mass accretion rates and induce distinct orbital phase shifts. After a long observation time in pulse timing, such an orbital phase shift could be detected. Observable orbital phase shifts can help us distinguish DM models and constrain their parameters. In this work, we mainly focus on three DM models: WIMPs, ultralight DM, and PBHs.
For WIMPs, the accretion rate follows \(\dot{M}_{\rm B}\sim M_{\rm B}^{2}\rho_{\rm DM}\Theta^{-3/2}\); a larger BH mass, a higher DM density and a lower dimensionless temperature can all enhance the WIMP accretion rate. For WIMPs with \(\Theta\sim\mathcal{O}(10^{-12})\) accreted by a PSR-BH binary with a \(100\,M_{\odot}\) BH mass at a position \(10\,\rm kpc\) away from the center of the Milky Way, the detectable timescale of the induced orbital phase shift is \(\mathcal{O}(10)\,\rm years\). Within a \(10\,\rm years\) observation, a value of \(\Theta\) can be detected up to \(10^{-8}\) in a PSR-BH binary with \(10^{-2}\,\rm Hz\) GW frequency, and the observable range of \(\Theta\) can be extended with a larger BH mass and a higher orbital frequency. For ultralight DM, we mainly consider the accretion inside the soliton of the Milky Way, due to its weak accretion rate, which follows \(\dot{M}_{\rm B}\sim M_{\rm B}^{2}m_{\rm ul}^{6}\); a larger mass of ultralight DM can effectively improve the detectability of the accretion effect. In our parameter setting, the orbital phase shift induced by the accretion of ultralight DM with mass above \(\mathcal{O}(10^{-20})\,\rm eV\) can be detected, which could help constrain the mass of the ultralight DM. For PBHs, their number density inside the Milky Way is small due to the large PBH mass, and this small number density causes an extremely long mean free time for a BH encountering PBHs, around \(\mathcal{O}(10^{26})\,\rm years\), which is not a detectable timescale. Therefore, a null detection result may indicate the possibility of PBHs as DM, or of some other DM model with undetected parameter regions. In the above discussions, the mass accretion rate in different DM models basically follows \(\dot{M}_{\rm B}=f(\alpha)M_{\rm B}^{2}\), where \(\alpha\) is the DM parameter. Therefore, a detected DM accretion rate may correspond to different DM parameters in their respective models, which could cause difficulty in distinguishing them. In pinning down a specific DM model, other constraints on DM [58] can be used to break this DM model degeneracy; then a DM parameter in this model could be constrained from the observed accretion rate. In addition, some other effects, such as baryonic matter accretion and dynamical friction in a DM density spike [59], should be taken into consideration in a real data analysis; these are neglected in the above calculations. Apart from mass accretion, some other phenomena could also change the BH mass and induce an orbital phase shift in PSR-BH binaries, such as the superradiance effect around a Kerr BH [60; 61; 62], which could extract \(\mathcal{O}(10\%)\) of the host BH mass to form a light boson cloud [63; 64]. Therefore, detailed studies of these significant effects in PSR-BH binaries could shed light on unknown physics and show the great potential of the PSR-BH binary. ## Acknowledgements We would like to thank Lam Hui, Yi Wang, Henry Tye, and Leonardo Modesto for the very helpful comments and advice. ## Appendix A Weakly Interacting Massive Particles Accretion Here we will introduce the Bondi accretion formula, which applies to the WIMPs. We assume spherical symmetry and a steady-state accretion, with uniform density \(\rho_{\infty}\) and pressure \(p_{\infty}\) at spatial infinity. Thus, at infinity, the sound speed \(c_{s}\) reads \(c_{s,\infty}=(\frac{\gamma p_{\infty}}{\rho_{\infty}})^{\frac{1}{2}}\). The steady flow translates as \[\dot{M}=-4\pi r^{2}\rho u\, \tag{10}\] where \(u\) is the radial velocity.
We can then write the equations for momentum conservation, \[u\frac{\mathrm{d}u}{\mathrm{d}r}+\frac{c_{s}^{2}}{\rho}\frac{\mathrm{d}\rho}{\mathrm{d}r}+\frac{GM}{r^{2}}=0\, \tag{11}\] and mass conservation, \[\frac{1}{\rho}\frac{\mathrm{d}\rho}{\mathrm{d}r}=-\frac{2}{r}-\frac{1}{u}\frac{\mathrm{d}u}{\mathrm{d}r}. \tag{12}\] Substituting the latter into the former yields \[\frac{1}{2}\left(1-\frac{c_{s}^{2}}{u^{2}}\right)\frac{\mathrm{d}u^{2}}{\mathrm{d}r}=\frac{-GM}{r^{2}}\left(1-\frac{2c_{s}^{2}r}{GM}\right)\, \tag{13}\] which is the Bondi equation. We can contemplate the term between parentheses on the RHS. As \(r\to\infty\) and \(c_{s}\to c_{s,\infty}\), that term is negative, whereas for \(r\to 0\) it is positive again. Assuming continuity, there must be a point \(r_{s}\) where it vanishes. At that point the LHS must vanish simultaneously, which requires \(u^{2}(r_{s})\equiv u_{s}^{2}=c_{s}^{2}\). For our case, the physically relevant solution is this so-called transonic solution with the condition \[u^{2}\to 0\,\text{ as }\,r\to\infty. \tag{14}\] From equation (13), the sonic point is at \[r_{s}=\frac{GM}{2c_{s}^{2}}. \tag{15}\] Moreover, from equation (11), with the polytropic condition \[P\sim\rho^{\gamma}\, \tag{16}\] one simply derives \[\frac{u^{2}}{2}+\frac{c_{s}^{2}(r)}{\gamma-1}-\frac{GM}{r}=\frac{c_{s,\infty}^{2}}{\gamma-1}. \tag{17}\] Then, evaluating this at the sonic point, we find \[c_{s}(r_{s})=c_{s,\infty}\sqrt{\frac{2}{5-3\gamma}}. \tag{18}\] This finally brings us to the Bondi accretion law, \[\dot{M}_{B}=4\pi\lambda_{B}(GM)^{2}\frac{\rho_{\infty}}{c_{s,\infty}^{3}}\, \tag{19}\] with \[\lambda_{B}=\frac{1}{4}\left(\frac{2}{5-3\gamma}\right)^{\frac{5-3\gamma}{2(\gamma-1)}}. \tag{20}\] Another way of expressing Eq. (19), as a function of the temperature per particle mass, is using \[\frac{k_{B}T}{m}=\frac{P}{\rho}=\frac{c_{s}^{2}}{\gamma}\, \tag{21}\] where \(k_{B}\) is the Boltzmann constant and \(m\) is the WIMP mass. This, inserted in (19), yields \[\dot{M}_{B}=4\pi\lambda_{B}(GM)^{2}\frac{\rho_{\infty}m^{\frac{3}{2}}}{\gamma^{\frac{1}{2}}\left(k_{B}T\right)^{\frac{3}{2}}}\, \tag{22}\] which is equivalent to Eq. (1). ## Appendix B Hot Particle Dark Matter Accretion We start from the continuity equation and the vanishing of the divergence of the stress-energy tensor, \[\nabla_{\mu}\left(\rho u^{\mu}\right) =0\, \tag{23}\] \[\nabla_{\mu}T^{\mu\nu} =0. \tag{24}\] Here, \(u^{\mu}\) is the DM fluid 4-velocity, and \(T^{\mu\nu}=\rho(1+h)u^{\mu}u^{\nu}+pg^{\mu\nu}\). We assume a Schwarzschild metric background, \[\mathrm{d}s^{2}=-(1-\frac{2M}{r})\mathrm{d}t^{2}+\frac{1}{1-\frac{2M}{r}}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega^{2}. \tag{25}\] The corresponding square root of the metric determinant is thus \[\sqrt{-g}=r^{2}\sin\theta\, \tag{26}\] and the four-velocity, defined in our spherically symmetric case as \[u^{\mu}=\frac{\mathrm{d}x^{\mu}}{\mathrm{d}\tau}=(u^{t},u^{r},0,0)\, \tag{27}\] satisfies \(u^{\mu}u_{\mu}=-1\). The components of the four-velocity thus relate to each other as \[g_{tt}(u^{t})^{2}+g_{rr}(u^{r})^{2}=-1\, \tag{28}\] \[(u^{t})^{2}=\frac{1-\frac{2M}{r}+(u^{r})^{2}}{(1-\frac{2M}{r})^{2}}. \tag{29}\] Now we go back to Eq. (23), which yields \[\frac{\mathrm{d}}{\mathrm{d}r}(r^{2}\rho u^{r})=0\, \tag{30}\] while Eq. (24) yields \[\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\rho(1+\frac{\gamma}{\gamma-1}\Theta)u^{t}u^{r}\right)=0.
\tag{31}\] Once integrated, they reduce to \[\dot{M}=4\pi r^{2}\rho u^{r}\, \tag{32}\] and \[(1+\frac{\gamma\Theta}{\gamma-1})\left(1-\frac{2M}{r}+|u^{r}|^{2}\right)=(1+\frac{\gamma\Theta_{\infty}}{\gamma-1}). \tag{33}\] Combining the two equations and following the same procedure as in Appendix A yields Eq. (3). ## Appendix C Ultralight Dark Matter Accretion In [33], Unruh considered the Klein-Gordon equation on a Schwarzschild BH geometry, \[g^{\mu\nu}\phi_{,\mu;\nu}+m^{2}\phi=0. \tag{34}\] With the separation of variables \[\phi(t,r,\theta,\varphi)=\mathrm{e}^{-i\omega t}f_{\omega l}(r)Y_{lm}(\theta,\varphi)\, \tag{35}\] one is left with the radial equation \[\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\left(1-\frac{2M}{r}\right)\frac{\mathrm{d}f_{\omega l}}{\mathrm{d}r}\right)+\left(\frac{\omega^{2}}{1-\frac{2M}{r}}-\frac{l(l+1)}{r^{2}}-m^{2}\right)f_{\omega l}=0. \tag{36}\]
2310.04159
Amortized Network Intervention to Steer the Excitatory Point Processes
Excitatory point processes (i.e., event flows) occurring over dynamic graphs (i.e., evolving topologies) provide a fine-grained model to capture how discrete events may spread over time and space. How to effectively steer the event flows by modifying the dynamic graph structures presents an interesting problem, motivated by applications ranging from curbing the spread of infectious diseases through strategically locking down cities to mitigating traffic congestion via traffic light optimization. To address the intricacies of planning and overcome the high dimensionality inherent to such decision-making problems, we design an Amortized Network Interventions (ANI) framework, allowing for the pooling of optimal policies from history and other contexts while ensuring a permutation equivalent property. This property enables efficient knowledge transfer and sharing across diverse contexts. Each task is solved by an H-step lookahead model-based reinforcement learning, where neural ODEs are introduced to model the dynamics of the excitatory point processes. Instead of simulating rollouts from the dynamics model, we derive an analytical mean-field approximation for the event flows given the dynamics, making the online planning more efficiently solvable. We empirically illustrate that this ANI approach substantially enhances policy learning for unseen dynamics and exhibits promising outcomes in steering event flows through network intervention using synthetic and real COVID datasets.
Zitao Song, Wendi Ren, Shuang Li
2023-10-06T11:17:28Z
http://arxiv.org/abs/2310.04159v2
# Amortized Network Intervention to Steer the Excitatory Point Processes

###### Abstract

We tackle the challenge of _large-scale network intervention_ for guiding _excitatory point processes_, such as infectious disease spread or traffic congestion control. Our model-based reinforcement learning utilizes neural ODEs to capture how the networked excitatory point processes will evolve subject to the time-varying changes in network topology. Our approach incorporates Gradient-Descent-based Model Predictive Control (GD-MPC), offering policy flexibility to accommodate _prior knowledge_ and _constraints_. To address the intricacies of planning and overcome the high dimensionality inherent to such decision-making problems, we design an Amortized Network Interventions (ANI) framework, allowing for the _pooling_ of optimal policies from history and other contexts, while ensuring a _permutation equivalent_ property. This property enables efficient knowledge transfer and sharing across diverse contexts. Our approach has broad applications, from curbing infectious disease spread to reducing carbon emissions through traffic light optimization, and thus has the potential to address critical societal and environmental challenges.

## 1 Introduction

In the face of widespread epidemic outbreaks, governments must act swiftly and wisely to control the spread of diseases, often through measures like temporary city lockdowns or travel restrictions (Salathe & Jones, 2010; Sambaturu et al., 2020). Similarly, optimizing traffic light schedules in densely populated urban areas is essential to alleviate traffic congestion. These real-world scenarios highlight the necessity of guiding event processes across networks by modifying network structures as needed. The dynamics of these networked events are complex, involving vast volumes of data across multiple dimensions. Decision-making must be reliable and adaptable to rapidly changing circumstances. However, altering dynamic network structures presents a computational challenge, especially in scenarios like city traffic control, where real-world constraints and various factors must be considered. For instance, when regulating the coronavirus, government interventions must balance health concerns with economic implications and public sentiment. Thus, this network intervention problem requires innovative solutions. We model events, such as infectious disease spread or traffic congestion, as multivariate excitatory temporal point processes. Our goal is to solve a _model-based_ reinforcement learning (MBRL) problem: _guiding large-scale excitatory processes across dynamic networks by modifying network structures to minimize costs_. This presents challenges in both modeling and computation. First, modeling networked excitatory point processes with complex excitation patterns is challenging. Traditional disease models, such as SIR models (Weiss, 2013), use ordinary differential equations (ODEs). These models divide the population into compartments like susceptible, infectious, and recovered, utilizing ODEs to capture changes over time. Similarly, classic traffic flow models rely on ODEs or PDEs. For example, the Lighthill-Whitham-Richards (LWR) model (Lighthill & Whitham, 1955; Richards, 1956) employs PDEs to describe traffic density evolution along roads. These models offer simplified yet insightful representations of disease dynamics and traffic patterns.
To address the complex dynamics of high-dimensional event sequences, we turn to the Neural ODE model (Chen et al., 2018), a data-driven approach for modeling ODE dynamics. Importantly, our model-based RL framework can adapt to various event process models beyond Neural ODEs, allowing for efficient computational choices while maintaining high prediction accuracy. The second challenge is to design intervention policies that accommodate domain constraints, incorporate feedback rapidly, and adapt to changing circumstances. Gradient-Descent-based Model Predictive Control (GD-MPC) with medium-sized neural network models (Nagabandi et al., 2018; Bharadhwaj et al., 2020) is a valuable approach among Model-based Reinforcement Learning (MBRL) algorithms. MPC solves a finite-horizon optimization problem at each time step using a sliding window approach, which improves decision-making. MPC's advantages include explicit consideration of system dynamics and constraints, continuous adaptation based on feedback, and flexibility in incorporating various objectives and constraints. These features make MPC a powerful tool for designing adaptive intervention policies for complex, high-frequency event sequences. The third challenge is scaling the MBRL algorithm to high-dimensional problems, like controlling an entire city's traffic network. We have developed the Amortized Network Interventions (ANI) framework to tackle this issue. ANI enables us to extract optimal policies from historical data and similar tasks while preserving a crucial _permutation equivalent_ property. We introduce a novel metric to aid in learning permutation equivalent representations, ensuring efficient parameter transfer and sharing across tasks, thereby enhancing our approach's adaptability and scalability. Our proposed method is strategically crafted to meet the above three challenges. To assess its efficacy and efficiency, we have conducted comprehensive experiments using synthetic traffic congestion data and real-world COVID datasets. The experimental results demonstrate the effectiveness of our approach in adeptly steering excitatory point processes through the control of network dynamics.

## 2 Problem Formulation: Model-based RL

We begin by modeling spatial-temporal event sequences as temporal graph networks. For infectious diseases, we divide the geographical map into regions, each corresponding to a graph node. Each time step records new confirmed cases in these regions, creating a discrete-time dynamic graph. In the case of traffic congestion incidents, we use a lane-based approach. Each lane on a road becomes a network node, and at each time step, we track the congestion count for each lane within the specified time interval. Formally, we define a temporal graph network \(\mathcal{G}_{t}=(\mathcal{V}_{t},\mathcal{E}_{t})\) indexed by \(t=0,1,\dots\), with \(\mathcal{V}_{t}\) and \(\mathcal{E}_{t}\) representing the node and edge sets at time \(t\). The network maintains a fixed set of \(N\) nodes at each time step. For each node, representing either a region or a traffic lane, we observe a sequence of event spike counts at each time step. This results in a spike count matrix observed up to time \(t\), denoted as \(\mathbf{X}_{t}\in\mathbb{N}^{t\times N}\). Here, \(\mathbf{X}_{t}\) contains \(N\) time series of event counts: \(\mathbf{X}_{t}=\{\mathbf{x}_{t}^{1},\dots,\mathbf{x}_{t}^{N}\}\).
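To make the notation concrete, the following minimal sketch (ours; shapes and names are purely illustrative) builds a toy spike-count matrix \(\mathbf{X}_{t}\) and a per-step edge set:

```python
import numpy as np

N, t = 5, 10                                 # nodes, time steps observed
X_t = np.random.poisson(2.0, size=(t, N))    # X_t in N^{t x N}; column n is x^n
edges = [{(0, 1), (1, 2), (2, 3)} for _ in range(t)]  # E_s for s = 0, ..., t-1
```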
We focus on the problem of _managing the flow of the event counts to achieve specific levels at minimal cost, through sequential adjustment of the edges \(\{\mathcal{E}_{t}\}_{t\geq 0}\)_. Adding or removing certain edges will alter the connections between corresponding nodes, influencing the generative patterns of events. This formulation has broad applications, including containing epidemic outbreaks through lockdown policies or regulating traffic congestion by strategically designing traffic lights.

Figure 1: A viral infection started in a random region, with a network intervention curbing its spread. Nodes represent counties, and edges are roads. On day two, one county had a spike in cases, which spread to its neighboring county (red node). On days three and four, external lockdowns were alternated on neighboring roads to curb the pandemic (yellow node).

We consider a finite-time horizon control framework, where an agent aims to find an edge intervention policy \(\pi(\mathbf{h}^{t}):\mathcal{S}\rightarrow\mathcal{A}\), given the current state \(\mathbf{h}^{t}\), such that the cumulative expected reward within a fixed time horizon is maximized, \[\pi^{*}=\arg\max_{\pi\in\Pi}\ \mathbb{E}\left[\sum_{t=0}^{T}r^{t}(\mathbf{h}^{t},\mathbf{a}^{t})\right], \tag{1}\] where \(\mathbf{h}^{0}\sim p^{0}(\cdot)\), \(\mathbf{h}^{t+1}\sim\mathbb{P}(\cdot|\mathbf{h}^{t},\mathbf{a}^{t})\), and \(\mathbf{a}^{t}\sim\pi(\mathbf{h}^{t})\). Several key aspects are as follows: 1. **Environment**: high-dimensional event sequences \(\{\mathbf{X}_{t}\}_{t\geq 0}\) with stationary dynamics, occurring over a temporal graph network \(\{\mathcal{G}_{t}=(\mathcal{V}_{t},\mathcal{E}_{t})\}_{t\geq 0}\). 2. **State**: all the historical observations up to the current time \(t\), including event sequence and intervention histories. We assume the state information is completely encoded into a graph embedding vector \(\mathbf{h}^{t}\), where \(\mathbf{h}^{t}\in\mathbb{R}^{N\times D}\) and \(D\) is the embedded dimension. We will explain how to perform the state embedding when we describe our predictive model for the event sequences. 3. **Action**: the action space is defined as \(\mathcal{A}:=\{\mathbf{a}\in\{0,1\}^{N\times N}|\mathbf{a}^{T}\mathbf{c}\leq B,\sum_{m,n}\mathbf{a}_{mn}\leq K\}\), where \(\mathbf{c}\) is the intervention cost of the edges, \(B\in\mathbb{R}_{+}\) is the total budget at each stage, and \(K\) is the maximum number of edges to be intervened at each stage. Here, we put budget constraints on the action space to enable a safe policy (a feasibility check is sketched below). 4. **State Transition**: although the dynamics of the event sequences are unknown, we will build a predictive model \(\mathbb{P}_{\theta}(\cdot|\mathbf{h}^{t},\mathbf{a}^{t})\) and estimate the model parameters \(\theta\) using observational data. 5. **Reward Function**: the reward function is tailored to suit particular applications. It is influenced by cumulative event counts and can be augmented by incorporating other societal or environmental considerations. Note that our time-dependent reward function \(r^{t}\) can entail a discount factor \(\gamma^{t}\). Since the state transition model \(\mathbb{P}_{\theta}(\cdot|\mathbf{h},\mathbf{a})\) is unknown, we need to learn it from data. The optimal policy \(\pi^{*}\) in Eq. (1) can be estimated by repeatedly querying the model. In the next section, we will explain how to build the predictive model for the environment, e.g., the event sequence model.
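As an illustration of the constrained action space in item 3, a minimal feasibility check (ours; names and shapes are illustrative, not from the paper's code) might look as follows:

```python
import numpy as np

def is_feasible(a, c, B, K):
    """Check the action-space constraints of Section 2: a is a binary
    N x N edge-intervention matrix, c holds per-edge costs, B is the
    per-stage budget, and K caps the number of intervened edges."""
    binary = np.isin(a, (0, 1)).all()
    return binary and (a * c).sum() <= B and a.sum() <= K
```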
It is noteworthy to mention that solving a large-scale problem requires solving the abovementioned problem (Eq. (1)) repeatedly: from one region to multiple regions, and from one fixed time window to multiple time windows. How can we leverage the optimal policies of previous subproblems to ease the optimization of a new one? In this paper, we have devised the Amortized Network Interventions (ANI) framework. As demonstrated in Fig. 2, ANI enables us to aggregate optimal policies from historical data and similar tasks while preserving a critical permutation equivalent property. We will elaborate on ANI in Section 5.

## 3 Modeling the Environment: Networked Jump ODE Model

Inspired by traditional ODE- and PDE-based models in infectious disease and traffic flow studies, we propose a data-driven approach to model event sequence dynamics. We introduce a Networked Jump ODE (NJODE) model to replicate the evolution of excitatory point processes, drawing from concepts in Neural Spatio-Temporal Point Processes (Chen et al., 2020) and Neural Jump SDEs (Jia and Benson, 2019), which have been used for fine-grained spatio-temporal event process modeling. We modify these models to handle large event counts in discrete-time and high-frequency scenarios.

Figure 2: _Overview of the Method._ The proposed Amortized Network Intervention contains three modules. The first module is to generate a latent node embedding \(\mathbf{h}^{t}_{n}\) and evolve the latent states through the NJODE model. The second module learns a Permutation Equivalent Embedding (PEE) over the latent space \(\mathbf{h}_{n}\) by a bi-contrastive loss function prepared for the downstream adaptation. The third module accesses the learned PEE from the second module and generates a permutation equivalent policy via Model Predictive Control (MPC).

We model the excitatory point processes based on two assumptions. (1) Processes within the same network share triggering kernel model parameters but have distinct parameters for emission probability distributions. This scalability helps accommodate more nodes in local regions without significantly increasing model parameters. (2) Different local regions share a similar underlying dynamic structure. This enables fine-tuning or reusing pre-trained local region dynamics for unseen local region dynamics. **Evolution of latent states** We formalize the state transition model by an ODE system with jumps, where the latent state \(\mathbf{h}^{t}\) at each time \(t\) evolves according to \[\mathbf{h}^{t_{0}}_{n} =\mathbf{h}^{0}_{n} \tag{2}\] \[\frac{d\mathbf{h}^{t}_{n}}{dt} =f_{h}(t,\mathbf{h}^{t}_{n}),\ \ \forall t\in\mathbb{R}_{+}\setminus\cup_{i}\{t_{i}\},\] (3) \[\lim_{\epsilon\downarrow 0}\mathbf{h}^{t_{i}+\epsilon}_{n} =\sum_{m\in\mathcal{N}_{n}}w_{m\to n}\cdot\phi_{h}(\mathbf{h}^{t_{i}}_{m},x^{t_{i}}_{m}). \tag{4}\] Here, \(\mathcal{N}_{n}\) is the set of neighbors of node \(n\) and \(\mathbf{h}^{t_{i}}_{n}\in\mathbb{R}^{D}\) is the latent state for node \(n\), where \(n\in\{1,2,\ldots,N\}\). \(t_{i}\) represents the time stamps at which discrete jumps are recorded. Rather than treating the event arrival time as a random variable (Chen et al., 2020), we regard the total number of discrete events within the interval \([t_{i-1},t_{i})\) as a random variable \(x^{t_{i}}\), allowing us to process high-frequency temporal data like traffic flow. The use of \(\epsilon\) is to portray \(\mathbf{h}^{t}_{n}\) as a left-continuous function with right limits at any fixed \(t_{i}\).
\(f_{h}\) is used to model the continuous change and \(\phi_{h}\) is used to model the instantaneous jump based on neighbors' events \(x^{t_{i}}_{m}\). \(f_{h}\) and \(\phi_{h}\) are shared across different event processes in the same local region; \(w_{m\to n}\) indicates the influence strength from node \(m\) to \(n\). We denote \(\mathbf{W}=[w_{m\to n}]\) as the influence matrix. This architecture is similar to a recurrent neural network with a continuous-time latent state modeled by a neural ODE. Under this formulation, the latent state \(\mathbf{h}^{t}_{n}\) incorporates both historical information from itself and abrupt changes triggered by neighboring nodes. This mechanism for preserving abrupt changes and recording memory is important to model excitatory point processes and to generalize to other unseen dynamics. **Conditional emission probability distribution** At each time \(t_{i}\), we parameterize the event count distribution as a function of the latent state \(\mathbf{h}^{t}\). Specifically, in the rest of the paper, we assume the spike count \(x^{t}_{n}\) follows a Poisson distribution, whose intensity \(\lambda^{t}_{n}\) is a function of \(\mathbf{h}^{t}_{n}\): \[\lambda^{t}_{n}=\exp(b_{\psi_{n}}+g_{\psi}(\mathbf{h}^{t}_{n})). \tag{5}\] Here, we assume \(g_{\psi}\) is the distribution parameter neural network shared among different nodes, while \(b_{\psi_{n}}\) is a distinct baseline variable for each node. Given this model, we see that the final emission probability of \(x^{t}_{n}\) conditioned on historical observations \(x^{<t}_{n}\) is given by \[\log p_{\theta}(x^{t}_{n}|x^{<t}_{n})=-\lambda^{t}_{n}+x^{t}_{n}\log\lambda^{t}_{n}-\log(x^{t}_{n}!) \tag{6}\] where \(\theta\) refers to all model parameters. Finally, given a spike count matrix \(\mathbf{X}\in\mathbb{N}^{N\times T}\), we assume different nodes at different times are conditionally independent given the latent state \(\mathbf{h}^{t}\); thereby we estimate the parameter \(\theta\) by maximum log-likelihood, and the total log-likelihood is expressed as \[\mathcal{L}_{LLH}(\mathbf{X};\theta)=\sum_{t=0}^{T-1}\sum_{n=1}^{N}\log p_{\theta}(x^{t}_{n}|x^{<t}_{n}). \tag{7}\] **Mean field approximation for reward modeling** In our MBRL formulation, we use the estimated event process model as our environment simulator to perform online planning. The reward is usually a function of the generated future events. For example, it can be the negative value of the total number of newly infected people at the next stage, i.e., \(r^{t}:=-\sum_{n=1}^{N}\hat{x}^{t+1}_{n}\), where \(\hat{x}^{t}_{n}\) denotes an estimator of \(x^{t}_{n}\). In the planning phase, accurately approximating the expected cumulative reward demands a considerable number of rollouts from conditional emission probability distributions, which can be time-consuming. Instead, we construct a reward model \(r^{t}_{\text{MFA}}\) based on the _mean field approximation_ (MFA) for \(x^{t}_{n}\) by averaging over the high-dimensional degrees of freedom in the conditioning term (detailed in Appendix E). As a result, during planning, we have a deterministic reward model after removing the stochasticity in Eq. (4) and replacing \(x^{t_{i}}_{m}\) with its mean. This mean field approximation enables efficient online planning.
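To make Eqs. (5)-(7) concrete, here is a minimal sketch (ours; `g` and `b` stand in for \(g_{\psi}\) and \(b_{\psi_{n}}\), and summing over time steps then gives \(\mathcal{L}_{LLH}\)):

```python
import numpy as np
from scipy.special import gammaln

def emission_log_likelihood(x, h, b, g):
    """Eqs. (5)-(7): Poisson emission with per-node intensity
    lambda_n = exp(b_n + g(h_n)); x and b have shape (N,), h is (N, D),
    and g maps (N, D) -> (N,)."""
    lam = np.exp(b + g(h))                                    # Eq. (5)
    return np.sum(-lam + x * np.log(lam) - gammaln(x + 1.0))  # Eqs. (6)-(7)
```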
## 4 Gradient-Descent-based Model Predictive Control

Given the estimated environment model in Section 3, we design control algorithms to obtain an optimal event flow steering policy by performing interventions on the graph's edges. Specifically, for an \(N\)-node influence graph, each action involves selecting a subset of \(k\) (\(k\leq K\)) edges from the \(N(N-1)\) directed edges (excluding self-connections). Hence, we can represent the action \(\mathbf{a}^{t}\) as a \(k\)-hot matrix, resulting in the intervened influence graph given by \(\mathbf{W}\odot(1-\mathbf{a}^{t})\). Our approach draws inspiration from Adaptive MPC (Garcia et al., 1989), which dynamically adjusts and enhances a model in real time to account for time-varying dynamic characteristics. We construct a policy-gradient-based control algorithm and incorporate flexible constraints on the action space. **Receding Horizon Control** We construct our cumulative objective function from a rolling-horizon perspective. At each time \(t\), we optimize the policy \(\pi_{\varphi}\) by looking \(T\) steps ahead, i.e., \[\pi_{\varphi}^{*}=\operatorname*{arg\,max}_{\pi_{\varphi}}\sum_{i=1}^{T-1}r_{\text{MFA}}^{t}\left(\mathbf{h}^{t+i},\pi_{\varphi}(\mathbf{h}^{t+i}),f_{h}\circ\phi_{h}(\mathbf{h}^{t+i},\pi_{\varphi}(\mathbf{h}^{t+i}))\right), \tag{8}\] where the expected reward is replaced by the MFA, and the function composition \(f_{h}\circ\phi_{h}\) gives the next state. In online planning, the learned model is used to explore state trajectories that start from the current latent state \(\mathbf{h}^{t}\). After finding a reward-maximizing policy from time \(t\) to \(t+T\), only the first action is employed. At time \(t+1\), when new data arrive, a new latent state \(\mathbf{h}^{t+1}\) is queried again from our model, and the calculations repeat, yielding a new policy and prediction trajectory. **Gradient-Descent-based Optimization** Instead of exhaustively searching the discrete combinatorial action space to optimize our objective, we approximate this space using a continuous relaxation technique (Xie and Ermon, 2019). We replace \(\mathbf{W}\odot(1-\mathbf{a}^{t})\) with \(\mathbf{W}\odot(1-\mathbf{p}^{t})\), where \(\mathbf{p}^{t}\) represents edge selection probabilities. With this reparametrization, the objective becomes a fully deterministic function of the policy and dynamics, enabling end-to-end differentiable policy learning. **Incorporating Fairness Constraints and More** We can incorporate flexible constraints, including fairness, into the decision-making process. We distinguish between hard and soft constraints in our approach. For hard constraints, such as limitations on consecutive lockdown days for a county (e.g., not locking down a county for more than a certain number of consecutive days), we can employ a dynamic mask to explicitly exclude actions that fall outside the feasible space. As for soft constraints, like ensuring overall fairness in the policy, we can design an additional reward term, denoted as \(r_{\text{aug}}^{t}\), and scale it by \(\lambda\). This augmented reward term is jointly updated with the policy to enforce fairness within the optimization objective.
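A minimal sketch (ours; `dynamics`, `reward_mfa`, and `policy` are assumed callables, not names from the paper, and the reward arguments of Eq. (8) are collapsed into the rolled state for brevity) of one receding-horizon gradient update using the continuous relaxation \(\mathbf{W}\odot(1-\mathbf{p}^{t})\):

```python
import torch

def gd_mpc_step(W, h, dynamics, reward_mfa, policy, T, lr=1e-2):
    """One illustrative GD-MPC update: relax the k-hot action to edge
    selection probabilities p in [0,1]^{N x N}, roll the deterministic
    mean-field model T steps, and ascend the cumulative reward."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)  # fresh optimizer for a one-shot sketch
    total, state = 0.0, h
    for _ in range(T):
        p = policy(state)                     # edge selection probabilities
        state = dynamics(W * (1 - p), state)  # intervened influence graph
        total = total + reward_mfa(state)     # deterministic MFA reward
    loss = -total
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```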
## 5 Making Large-Scale Problem Tractable: Amortized Policy

```
Input: Task pools \(\mathcal{B}\) and a pretrained model pool \(\Theta\) learned based on Eq. (7)
Result: Policy parameters \(\varphi\) and representation parameters \(\psi\)
Initialize parameters \(\varphi\) and \(\psi\);
while meta-training not completed do
    Sample a network \(\mathcal{M}_{i}\sim\mathcal{B}\) and corresponding model \(\theta_{i}\in\Theta\);
    // Policy & Representation Learning
    Optimize \(\{\varphi,\psi\}\) jointly based on Eqs. (8) and (11);
    Obtain intervened network \(\mathbf{W}^{\prime}\) based on policy \(\pi_{\varphi}\);
    // Planning ahead
    Collect new data \(\mathcal{D}_{i}\) by NJODESolver(\(\mathbf{W}^{\prime},\theta_{i}\)) via Eqs. (2)-(5);
    // Adaptive Model Update
    Optimize \(\theta_{i}\) on \(\mathcal{D}_{i}\) based on Eq. (7) and update \(\theta_{i}\) in \(\Theta\);
end while
```
**Algorithm 1** ANI (_Meta-Training Phase_)

In practice, managing a city's extensive traffic network is a challenging large-scale problem due to its sheer size. To tackle this, we employ a divide-and-conquer approach, breaking the problem down into manageable subproblems. For instance, we segment the vast network into smaller, more manageable subgraphs, each representing a tractable subproblem. While this strategy makes the overall problem more manageable, it raises a crucial question: How can we utilize optimal policies from previous subproblems to streamline the optimization of new ones? To address this, we introduce the Amortized Network Interventions (ANI) framework. **Amortized Intervention** In the previous section, our assumption was that each agent operates solely with local information, without utilizing global data. In this section, our objective is to learn a shared amortized policy (Gordon et al., 2019) that can be applied across different regions with distinct dynamics. We hypothesize the existence of collective behavior among these various local temporal dynamic systems. Given a sequence of local policies \(\{\pi_{i}\}_{i=1}^{M}\) addressing \(M\) distinct sub-problems, our goal is to create an amortized policy \(\pi_{\text{amo}}\). This policy should extract invariant representations and enable the adoption of similar policy structures among similar temporal dynamic systems. **Permutation Equivalent Property** Inspired by the policy similarity metric (PSM) (Agarwal et al., 2021) and the policy permutation invariant property in SensoryNeuron (Tang and Ha, 2021), we devise an agent that can extract _permutation equivalent embeddings_ and is _policy permutation equivalent_ with respect to the latent state space \(\mathbf{h}^{t}\). Since each dimension of \(\mathbf{h}^{t}\) corresponds to one node in the excitatory point process, the permutation equivalent property along the node dimension characterizes the collective behavior within complex dynamic systems. We present the definition of the permutation equivalent property in Definition 1, based on which we design a permutation equivalent metric in Definition 2 that defines the distance between states, similar to \(\pi\)-bisimulation (Castro, 2020). **Definition 1** (**Permutation Equivalent Policy**): _Given a state \(\mathbf{h}^{t}=(\mathbf{h}^{t}_{1};\dots;\mathbf{h}^{t}_{N})\) and an action parameterized by a \(k\)-hot adjacency matrix in \(\mathbb{R}^{N\times N}\), we say a policy is **permutation equivalent** (PE) to the state \(\mathbf{h}^{t}\) if, when we reshuffle the order of the \(N\) latent states, the order of the corresponding rows in the adjacency matrix is permuted accordingly._
Mathematically, the permutation equivalent policy can be described by a function \(\pi:\mathbb{R}^{N\times D}\rightarrow\mathbb{R}^{N\times N}\) such that_ \[\pi(\mathbf{P}\mathbf{h}^{t})=\mathbf{P}\pi(\mathbf{h}^{t})\mathbf{P}^{T},\] _where \(\mathbf{P}\in\mathbb{R}^{N\times N}\) is any permutation matrix._ **Definition 2** (Permutation Equivalent Metric, PEM): _For any \(\mathbf{x},\mathbf{y}\in\mathcal{S}\), where \(\mathbf{y}\) is a permuted state of \(\mathbf{x}\), i.e., \(\mathbf{y}=\mathbf{P}\mathbf{x}\) for some permutation matrix \(\mathbf{P}\), the PEM under a distance \(d\) and policy \(\pi\) is described by \(d_{\pi}:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}\), satisfying the recursive equation:_ \[d_{\pi}(\mathbf{x},\mathbf{y})=d(\pi(\mathbf{x}),\mathbf{P}^{T}\pi(\mathbf{y})\mathbf{P})+\gamma d_{\pi}(\mathbf{x}^{\prime},\mathbf{P}^{T}\mathbf{y}^{\prime}), \tag{9}\] _where \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) are the transition states of \(\mathbf{x}\) and \(\mathbf{y}\), given the deterministic dynamic \(f\) and policy \(\pi\)._ The distance term \(d\) in Definition 2 captures the difference in local permutation equivalent behavior, while the recursive term captures long-term behavioral difference. The exact weights assigned to the two are given by the discount factor \(\gamma\). The proposed distance can be efficiently computed by approximate dynamic programming algorithms. **Bi-Contrastive Metric Embeddings** We use a representation mapping \(\psi\) to project the high-dimensional latent graph embeddings \(\mathbf{h}^{t}\) into two low-dimensional graph embeddings \(\mathbf{p}^{t}\) and \(\mathbf{m}^{t}\), where \(\mathbf{p}^{t}\) only contains the internal positional information of the \(N\) nodes \(\{\mathbf{h}^{t}_{n}\}_{n=1}^{N}\), while \(\mathbf{m}^{t}\) contains the individual magnitude information for the different nodes \(\mathbf{h}^{t}_{n}\). We illustrate the architecture in Figure 2. Intuitively, the graph magnitude embedding \(\mathbf{m}^{t}\) is invariant under row permutations of \(\mathbf{h}^{t}\), while the graph positional embedding \(\mathbf{p}^{t}\) is invariant when we only change the magnitude of the row features in \(\mathbf{h}^{t}\). During training, we perturb the anchor graph embedding \(\mathbf{h}^{t}\) into two groups \(\mathcal{G}_{\text{perm}}(\mathbf{h}^{t})\) and \(\mathcal{G}_{\text{mag}}(\mathbf{h}^{t})\). To jointly learn the positional and magnitude embeddings with the PEM, we adapt SimCLR (Chen et al., 2020) and design a bi-contrastive learning scheme, under which the graph positional embeddings and graph magnitude embeddings can either be a positive pair under permutation transformation or a negative pair under magnitude adjustment. For any anchor embedding \(\mathbf{h}_{0}\), we take the augmentation \(\mathbf{h}_{1}\in\mathcal{G}_{\text{perm}}(\mathbf{h}_{0})\), and \(\mathbf{h}_{k}\in\mathcal{G}_{\text{mag}}(\mathbf{h}_{0}),k\neq 0,1\).
Then, the bi-contrastive metric embeddings loss is given by a state-similarity-weighted SimCLR contrastive loss \[\mathcal{L}_{BCME}(\mathbf{h}_{0},\mathbf{h}_{1},\{\mathbf{h}_{k}\};\psi) =-\log\frac{\Gamma(\mathbf{h}_{0},\mathbf{h}_{1})\exp(s(\mathbf{m}_{0},\mathbf{m}_{1}))}{\Gamma(\mathbf{h}_{0},\mathbf{h}_{1})\exp(s(\mathbf{m}_{0},\mathbf{m}_{1}))+\sum_{k\neq 0,1}(1-\Gamma(\mathbf{h}_{0},\mathbf{h}_{k}))\exp(s(\mathbf{m}_{0},\mathbf{m}_{k}))} \tag{10}\] \[+\log\frac{\exp(s(\mathbf{p}_{0},\mathbf{p}_{1}))/\Gamma(\mathbf{h}_{0},\mathbf{h}_{1})}{\exp(s(\mathbf{p}_{0},\mathbf{p}_{1}))/\Gamma(\mathbf{h}_{0},\mathbf{h}_{1})+\sum_{k\neq 0,1}\exp(s(\mathbf{p}_{0},\mathbf{p}_{k}))/(1-\Gamma(\mathbf{h}_{0},\mathbf{h}_{k}))}, \tag{11}\] where \(\Gamma(\mathbf{h}_{0},\mathbf{h}_{1})=\exp(-d_{\pi}(\mathbf{h}_{0},\mathbf{h}_{1})/\beta)\) is the weight given by the PEM. \(\beta\) controls the sensitivity of the similarity measure to the PEM \(d_{\pi}\). \(s(\mathbf{u},\mathbf{v}):=\frac{\mathbf{u}^{T}\mathbf{v}}{||\mathbf{u}||\,||\mathbf{v}||}\) denotes the cosine similarity function.
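Eqs. (10)-(11) can be sketched in PyTorch as follows (ours; the stacking of anchor/positive/negatives and the precomputed PEM weights `gamma_w` are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def bi_cme_loss(m, p, gamma_w):
    """Sketch of Eqs. (10)-(11). m and p hold magnitude and positional
    embeddings stacked as [anchor, permutation-positive, negatives...],
    shape (K+2, D); gamma_w[k] is the PEM weight Gamma(h_0, h_k)."""
    s = lambda a, b: F.cosine_similarity(a, b, dim=-1)
    # Magnitude branch, Eq. (10): positive pair weighted by Gamma.
    pos_m = gamma_w[1] * torch.exp(s(m[0], m[1]))
    neg_m = ((1 - gamma_w[2:]) * torch.exp(s(m[0], m[2:]))).sum()
    term1 = -torch.log(pos_m / (pos_m + neg_m))
    # Positional branch, Eq. (11): similarities divided by the weights.
    pos_p = torch.exp(s(p[0], p[1])) / gamma_w[1]
    neg_p = (torch.exp(s(p[0], p[2:])) / (1 - gamma_w[2:])).sum()
    term2 = torch.log(pos_p / (pos_p + neg_p))
    return term1 + term2
```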
## 6 Experimental Evaluation

We assess the effectiveness of our approach, Amortized Network Intervention (ANI), in managing networked temporal dynamics through simulated and real-world experiments. Our results demonstrate that ANI successfully reduces the mutual influence effects on both synthetic data and two real-world datasets. We measure this improvement by calculating reduced intensities.

### Network Intervention on Synthetic Data

In our synthetic data experiments, we performed intervention analysis. Specifically, we used our Networked Jump ODE Model on low-dimensional synthetic Multivariate Hawkes Processes (MHP) without applying network amortization. To assess the performance of our model-based reinforcement learning algorithm for dynamic network intervention, we conducted a comparative analysis against two model-free RL baselines, SAC (Haarnoja et al., 2018) and PPO (Schulman et al., 2017), as well as one model-based RL baseline called Neural Hawkes Process Intervention (NHPI) (Qu et al., 2023). We also adapted model-free RL techniques to Temporal Point Processes (TPP) (Upadhyay et al., 2018) for event intervention and maintained the event intervention settings for NHPI to explore and compare the effectiveness of event intervention versus action intervention with high-frequency event data. Details on data generation for the synthetic dataset can be found in Appendix G.1. Our study results are depicted in Figure 3. Remarkably, our approach achieves levels of intensity reduction comparable to SAC and PPO in both datasets, all without direct interaction with the environment. NHPI, which focuses on event intervention, faces difficulties in reducing activity intensity, especially with high-frequency event sequences. For additional generalization results on unseen MHPs with synthetic data, please refer to Appendix G.2.

### Network Intervention on Real COVID Data

Here, our goal is to design an amortized city lock-down strategy that shares a similar policy structure for distinct city regimes to curb the epidemic by intervening in the influence matrix between cities. Concretely, we trained an amortized policy from five different county corpora and tested the amortized interventions on multiple unseen county dynamics. To generalize to an unseen split, the agent needs to be invariant to the orders of different counties and to the amplitude or the phase of the spikes of the underlying excitatory point processes. Thus, we evaluated the generalization ability to the unseen county corpora in two parts, local community transformation and cross-community adaptation, where local community transformation captures the agent's ability to generalize to a permuted or intensity-adjusted community, and cross-community adaptation characterizes the ability to generalize to an intensity-peak-shifted community. We illustrate the two types of transformation in Figure 5. **Generalization Over Local Community Transformation** We show the generalization ability to a permuted or intensity-adjusted community by permuting and changing the intensity magnitude on the same community region and applying the amortized policy to the results. Figure 4 shows the intensity costs during policy learning, with and without the amortized policy, on the two types of transformed communities. For the cost curve on the local community under magnitude transformation, the amortized policy starts to converge at around 30 episodes, while the cost of the non-amortized policy is still decreasing. Importantly, we observe that the amortized policy also displays a more stable learning curve compared with the non-amortized policy. **Generalization Over Cross-Community Adaptation** We investigate how well the proposed approach generalizes over unseen intensity dynamics from different counties. We evaluate the generalization performance on different county corpora with or without a similar dynamic structure to the training environment. Specifically, we define the testing environment as "in-distribution" or "generalize via interpolation" when the testing environment shares a similar intensity peak with the training environment, and define the testing environment as "out-of-distribution" or "generalize via extrapolation" when the testing environment has a peak shift or a delay effect relative to the original training environment. Table 1 summarizes the average reduced intensity for different methods under different region settings.

Figure 4: Generalization results of local community transformations on Covid data.

Figure 5: Two types of transformation of Covid data.

Figure 3: Cumulative intensity cost on synthetic datasets.

Figure 6: **Top**: Satellite map extracted from Google Earth (Goo, 2022). **Middle**: Road Network in SUMO (Lopez et al., 2018). **Bottom**: Extracted networks where red nodes are junction points.

Notably, Table 1 (1st row) indicates that the non-adaptive and non-amortized policies struggle to control the intensities in both in-distribution and out-of-distribution environments. Importantly, when we use an adaptive but non-amortized policy, the reduced intensities are quite obvious (Table 1, 3rd row). This is not surprising, since adaptively learning a policy (i.e., repeatedly updating the model with new policies) allows the agent to explore more possibilities in the environment and thus obtain an optimal trajectory more easily. Furthermore, using the amortized policy gives a significant jump for both adaptive and non-adaptive policies across all four environments. It is also interesting to point out that in-distribution environments are easier to generalize to than out-of-distribution environments, which contain a peak shift or other complex transformations compared with the trained environment. These findings are also consistent with the intensity cost curves illustrated in Figure 7.
### Evaluating Generalization on Traffic Data

We endeavored to enact network interventions aimed at alleviating traffic congestion within the urban road network system, particularly at road intersections. Event data were collected through SUMO (Lopez et al., 2018) simulations, whereby a traffic car was categorized as contributing to congestion if its velocity dropped below 0.5 m/s. The network topology was derived from real-world cartography, as illustrated in Figure 6, and subsequently processed by SUMO to create four distinct crossroad types (detailed information available in Appendix I.1). Following training on these crossroads, we assessed the generalization capabilities of our proposed amortized network intervention method on an additional set of four previously unseen road intersections. As depicted in Figure 8, our results indicate that the learned meta-policy exhibits rapid adaptability to unfamiliar road systems after only a few gradient steps, demonstrating superior traffic congestion mitigation ability compared to a train-from-scratch model. Furthermore, we include a visual representation of the learned network intervention in Appendix I.3.

### Understanding Gains from PEM: Ablations and Visualizations

We show the efficacy of the proposed Permutation Equivalent Embeddings (PEEs), which are bi-contrastive metric embeddings (Bi-CMEs) learned with the Permutation Equivalent Metric (PEM) on the latent states, by comparing them to Policy Similarity Embeddings (PSEs) (Agarwal et al., 2021), another common generalization approach effective on pixel-based RL tasks. Specifically, we investigate the gains from Bi-CMEs and PEM by ablating them. Instead of learning Bi-CMEs jointly through the position and magnitude embeddings, we learn a separate CME (Chen et al., 2020) for position and magnitude embeddings and use these separately learned embeddings to generate the policies. Table 2 shows that PEEs (= PEM + Bi-CMEs) generalize significantly better than PSM or single CMEs, both of which significantly degrade performance (-90%). This is not surprising, since the policy similarity metric (PSM) requires two similar states collected by nearest neighbors, which may introduce incorrect clusters in the latent state space. However, by introducing permutation equivalence as an inductive bias to the problem of controlling a dynamic system modeled by neural ODEs, PEM can better characterize the invariant features of different dynamic systems.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Adaptive} & \multirow{2}{*}{Amortized} & \multicolumn{4}{c}{Reduced Intensities} \\ \cline{3-6} & & \multicolumn{2}{c}{In-distribution} & \multicolumn{2}{c}{Out-of-distribution} \\ \cline{3-6} & & Georgia-0 & Alabama-0 & Georgia-1 & West Virginia-0 \\ \hline \multirow{2}{*}{False} & False & -0.05(0.18) & 0.08(0.06) & -0.07(0.11) & -0.02(0.05) \\ & True & 0.21(0.43) & 0.18(0.58) & 0.06(0.24) & 0.02(0.02) \\ \multirow{2}{*}{True} & False & 0.18(0.19) & 0.14(0.22) & 0.02(0.13) & 0.15(0.10) \\ & True & **0.47(0.14)** & **0.71(0.42)** & **0.39(0.27)** & **0.54(0.27)** \\ \hline \hline \end{tabular} \end{table} Table 1: Reduced amount of intensities after network interventions for each node per unit time on four unseen communities by different methods. We report average performance across 100 runs for three different seeds, with the standard deviation in parentheses.
Table 2: **Ablation studies**. Reduced intensity after network interventions on West Virginia (Split 0) when we ablate the similarity metric and the learning procedure for metric embeddings in different data augmentation settings. Each ablation entry is repeated for 100 trials for a fair comparison.

Figure 8: Generalization results of mitigated traffic flow on two unseen intersections from SUMO.

**Visualizing learned representations** We visualize the metric embeddings in the ablation above by projecting them to two dimensions with t-SNE. Figure 9 shows that PEEs partition the latent embeddings into four parts: (1) original position embeddings (red) and position embeddings with adjusted magnitude (green); (2) original magnitude embeddings (yellow) and magnitude embeddings with position permuted randomly (blue); (3) position embeddings with position permuted randomly (blue), which are orthogonal to the original position embeddings (red); and (4) magnitude embeddings with adjusted magnitude (purple), which are orthogonal to the original magnitude embeddings (yellow). Nevertheless, the projection of embeddings learned with PSM (right in Figure 9) shows a clear collapsing effect on position embeddings with position permuted randomly (blue) and magnitude embeddings with adjusted magnitude (purple). This finding is consistent with the results in Table 2: Bi-CMEs weighted by PSM fail to extract permutation-invariant and magnitude-invariant information from the latent dynamic system.

## 7 Conclusions

This paper presents Amortized Network Interventions, a versatile framework to steer excitatory point processes. Our approach handles partial observability, fairness constraints, and large-scale network interventions on a combinatorial action space, and achieves promising performance on challenging tasks with large, real-world datasets. Furthermore, the framework discussed here holds the potential for addressing significant problems like traffic light scheduling in urban areas.
2303.04061
Noisy intermediate-scale quantum computers
Quantum computers have made extraordinary progress over the past decade, and significant milestones have been achieved along the path of pursuing universal fault-tolerant quantum computers. Quantum advantage, the tipping point heralding the quantum era, has been accomplished along with several waves of breakthroughs. Quantum hardware has become more integrated and architectural compared to its toddler days. The controlling precision of various physical systems is pushed beyond the fault-tolerant threshold. Meanwhile, quantum computation research has established a new norm by embracing industrialization and commercialization. The joint power of governments, private investors, and tech companies has significantly shaped a new vibrant environment that accelerates the development of this field, now at the beginning of the noisy intermediate-scale quantum era. Here, we first discuss the progress achieved in the field of quantum computation by reviewing the most important algorithms and advances in the most promising technical routes, and then summarize the next-stage challenges. Furthermore, we illustrate our confidence that solid foundations have been built for the fault-tolerant quantum computer and our optimism that the emergence of quantum killer applications essential for human society shall happen in the future.
Bin Cheng, Xiu-Hao Deng, Xiu Gu, Yu He, Guangchong Hu, Peihao Huang, Jun Li, Ben-Chuan Lin, Dawei Lu, Yao Lu, Chudan Qiu, Hui Wang, Tao Xin, Shi Yu, Man-Hong Yung, Junkai Zeng, Song Zhang, Youpeng Zhong, Xinhua Peng, Franco Nori, Dapeng Yu
2023-03-07T17:14:53Z
http://arxiv.org/abs/2303.04061v1
# Noisy intermediate-scale quantum computers ###### Abstract Quantum computers have made extraordinary progress over the past decade, and significant milestones have been achieved along the path of pursuing universal fault-tolerant quantum computers. Quantum advantage, the tipping point heralding the quantum era, has been accomplished along with several waves of breakthroughs. Quantum hardware has become more integrated and architectural compared to its toddler days. The controlling precision of various physical systems is pushed beyond the fault-tolerant threshold. Meanwhile, quantum computation research has established a new norm by embracing industrialization and commercialization. The joint power of governments, private investors, and tech companies has significantly shaped a new vibrant environment that accelerates the development of this field, now at the beginning of the noisy intermediate-scale quantum era. Here, we first discuss the progress achieved in the field of quantum computation by reviewing the most important algorithms and advances in the most promising technical routes, and then summarize the next-stage challenges. Furthermore, we illustrate our confidence that solid foundations have been built for the fault-tolerant quantum computer and our optimism that the emergence of quantum killer applications essential for human society shall happen in the future. ###### Contents * I Introduction * II Quantum algorithms * III Superconducting qubits * IV Trapped-ion qubits * V Semiconductor spin qubits * VI NV centers * VII NMR system * VIII Neutral atom arrays * IX Photonic quantum computing * X Outlook and conclusion * XI Acknowledgments * XII References ## I Introduction Quantum computing exploits phenomena of quantum nature, such as superposition, interference, and entanglement, to provide beyond-classical computational resources. Its ultimate goal is to build a quantum computer that can be significantly more powerful than classical computers in solving certain tasks. Historically, quantum computing dates back to the early 1980s, when Benioff developed a quantum-mechanical model of the Turing machine [1], and Feynman [2] and Manin [3] proposed the idea of harnessing the laws of quantum mechanics to simulate phenomena that a classical computer could not feasibly do. In 1994, Shor devised an efficient quantum algorithm for finding the prime factors of an integer, a very concrete and important problem for which no efficient classical algorithm is known [4]. Shor's algorithm, along with a number of other quantum algorithms [5], strengthened the foundations of quantum computing, inspired the community of quantum physicists, and stimulated research in finding actual realizations of quantum computing. The first implementation scheme came in 1995, when Cirac and Zoller made a proposal for quantum logic gates with trapped ions [6]. In the following years, other physical routes to realize quantum computing, such as nuclear magnetic resonance (NMR) [7; 8; 9], spin qubits [10; 11] and superconducting qubits [12], were proposed, and there has been substantial experimental progress in the area since then. Several hardware platforms, including cavity quantum electrodynamics systems, ion traps, and NMR, have successfully realized more than one qubit in experiments since the start of this century. In the following decade, various platforms have achieved quantum information processing on small-scale quantum systems composed of several qubits.
In recent years, the field has advanced to the point where research groups have been able to demonstrate quantum devices at a scale around or even beyond forty qubits, particularly in trapped ions and superconducting circuits. Remarkable progress has been achieved toward fault-tolerant quantum computing. In the beginning, as Fig. 1 shows, universal quantum gates and precise readout were realized in various physical qubit systems, demonstrating the fulfillment of the DiVincenzo criteria. Hardware-level developments and the progress in fabrication further enable the integration of qubits. These achievements enable extensive prototype demonstrations of quantum computing, including analog/digital quantum simulation, quantum error correction (QEC), fault-tolerant quantum operations, quantum algorithms, etc. Google achieved quantum supremacy using random circuit sampling on their 53-qubit Sycamore processor [13]. Afterward, several "quantum advantage" experiments, including superconducting systems [14; 15] and photons [16; 17; 18], have been realized, and the gap between the computational power of quantum computers and their classical counterparts was greatly widened. Another milestone is that the realization of quantum annealing in commercialized quantum machines triggered the industrialization of quantum computing. Efforts have been made to develop specialized quantum computers for certain tasks, such as the D-wave annealing machines [19] and the aforementioned photonic boson sampling circuits [16; 17; 18]. Moreover, practical QEC has been explored in various physical systems, such as superconducting circuits [20; 21; 22; 23; 24], ion traps, semiconductors [25; 26], and nitrogen-vacancy (NV) centers in diamond [27; 28; 29]. Considering these accelerating strides, a breakthrough toward universal fault-tolerant quantum computation appears within close reach. Another noteworthy achievement is the construction of functional quantum simulators [30; 31], digital or analog, aimed at practical problems in quantum chemistry [32] and condensed matter physics [33]. With ever-increasing abilities to precisely manipulate quantum-mechanical systems, the quantum computing community has been shifting the focus from laboratory curiosities to technical realities, from investigating the underlying physics to solving the engineering problems in building a scalable system, from searching for a well-behaved qubit to seriously addressing the question of how to make our near-term quantum hardware practically useful. During the first decade of the 21st century, superconducting qubits, the leading candidate for building scalable quantum computers, were used to demonstrate prototype algorithms with no more than two programmable qubits in most cases. Much effort has been spent on proof-of-principle tests of various hardware modules. In 2014, two-qubit gate fidelities (an overall performance metric that evaluates the degree of control of a quantum processor) greater than 99% were achieved for the first time in a multi-qubit superconducting circuit, surpassing the error-correction threshold [34]. Since then, the community has seen a trend of growing system size, with 50-100 qubits integrated into state-of-the-art processors.
It is remarkable that the average fidelities across these processors are also advancing; in Google's 53-qubit processor, an average two-qubit gate fidelity of 99.4% was achieved with simultaneous operations across the chip [13]. Such enhanced reproducibility indicates immense engineering efforts in all aspects of the experiment, including design, fabrication, wiring, electronics, and software. Along this grand trend, we have already entered the second stage of quantum computing--noisy intermediate-scale quantum (NISQ) computing [35] with cloud services for quantum computers, as shown in the left panel of Fig. 1. The NISQ stage of quantum computing is analogous to the early stage of classical computers, when analog and digital signals were still hybrid, the limits of information processing were being explored, and computing applications were restricted to a few areas. During this stage, logical qubits and operations might reach the break-even point by encoding a limited but sufficient number of noisy qubits in medium-sized integrated systems. As a result, demonstrations of quantum algorithms can be performed using a small number of logical qubits. Further quantum advantages utilizing quantum algorithms or quantum simulation would also be demonstrated, with applications in quantum chemistry, variational quantum computing, quantum machine learning, or quantum optimization. Eventually, it is generally believed that fault-tolerant universal quantum computers will be realized in large-scale and integrated quantum systems. In addition to the advances in hardware, commercially valuable algorithms and applications are beginning to burgeon [36]. A typical example is the variational quantum eigensolver (VQE) algorithm [37], which has been demonstrated on two-atom molecule calculations and is expected to eventually prove worthwhile for bigger quantum systems. Algorithms for general purposes, in a similar spirit to the forerunner textbook algorithms--Shor's algorithm and Grover's algorithm [5]--were developed recently [38], such as the Harrow-Hassidim-Lloyd (HHL) algorithm [39] and the quantum singular value transformation (QSVT) algorithm [40]. The paradigm of quantum computing research has evolved over the years beyond solely academic research. Nowadays, the great impetus comes not only from its intrinsic scientific interest, but also from companies and societies [41]. With the aforementioned tremendous progress, the approach to full-stack quantum computing [41; 42; 43] is encouraging. As commercialization is becoming a trend, many large companies and start-ups are contributing to this field jointly with the scientific community. We shall see further contributions and incentives to the development of this field coming from commercialization, as has already happened for classical computers, genetic technology, and artificial intelligence. From a broader societal perspective, cloud quantum computing, such as IBM's quantum network, makes it possible for users around the world to explore new quantum algorithms without their own hardware devices. As quantum computing lessons and experiences in schools and universities become routine for the next generation, more well-educated engineers and scientists are entering the field, equipped with insights and knowledge of quantum science. Thus, the positive feedback from society is creating a new norm for quantum computing research compared to its primitive days.
In this review, we will focus on hardware platforms that have the potential to realize the ultimate large-scale quantum computers, including superconducting circuits, trapped ions, semiconductors, neutral atoms, NMR, NV centers, and photonics. In particular, we will focus more on the important advances in these platforms over the past decade. By following the guidelines of DiVincenzo's criteria [44], we will introduce how to implement the key elements of quantum computing in each physical system, along with their typical features and advantages. The scalability of each platform and critical challenges in recent developments will also be discussed here. Moreover, recent progress on quantum algorithms will also be mentioned in this review. By combining fast-developing hardware platforms and potential applications, we hope to shed light on the innovations that quantum computing can bring in the foreseeable future. Topological quantum computation provides another approach to tackling quantum errors, by keeping the computational states in the desired pure quantum states and thus avoiding erroneous results. A typical type of topological qubit is made of Majorana zero modes, which are immune to environmental noise and thus overcome the inevitable decoherence at the hardware level through the Majorana non-locality and braiding operations. However, non-topological in-gap states or trivial zero-energy states can also mimic the typical Majorana behavior, making the detection and other operations of Majorana zero modes difficult. So far, the non-locality and braiding operations required to demonstrate non-Abelian statistics have yet to be verified, which is a prerequisite for the realization of topological qubits. A complete discussion of topological quantum computation is beyond the scope of this review. The interested reader is referred to the literature for further details [45; 46; 47]. Figure 1: Quantum computing development levels. The left panel illustrates the three development stages of quantum computing with some iconic progress classified at the physical and logical levels. The right panel lists some potential applications according to different stages. A detailed discussion of this diagram can be found in Sec. I. ## II Quantum algorithms _Introduction.--_ It is anticipated that quantum computers utilizing the exotic quantum features can solve computational problems more efficiently than their classical counterparts. For example, in the query model, given oracle access to a function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\), a classical computer can only query it once at a time, whereas a quantum computer can query the oracle once and obtain all \(2^{n}\) values simultaneously, a phenomenon known as quantum parallelism. Formally, \[\sum_{x}\ket{x}\ket{0}\mapsto\sum_{x}\ket{x}\ket{f(x)} \tag{1}\] can be achieved on a quantum computer. However, quantum parallelism alone is not useful because when one performs a measurement, the quantum state collapses, and only one bit of information can be obtained. To design quantum algorithms, quantum parallelism needs to be combined with other features such as interference and entanglement. In 1985, Deutsch combined quantum parallelism with interference to design the first quantum algorithm that can solve a black-box problem with fewer queries than a classical computer [48]. Specifically, in Deutsch's problem, one is given a function \(f:\{0,1\}\rightarrow\{0,1\}\) and asked whether the function is constant, that is, \(f(0)=f(1)\), or not.
Classically, we would need two queries to solve this problem; but with quantum computers, only one query is needed. Later, it was generalized to a multi-qubit version called the Deutsch-Jozsa algorithm, which can achieve an exponential speedup over any classical deterministic algorithm [49]. However, the quantum speed-up vanishes in the presence of a small error probability. In 1993, Bernstein and Vazirani proposed another problem and designed a quantum algorithm for it that can achieve polynomial speedup even over classical randomized algorithms [50]. One year later, Simon strengthened their result by designing Simon's problem and a quantum algorithm for it that yields an exponential speedup [51]. These early-stage explorations focused mostly on the search for problems that quantum computers can solve more efficiently than classical computers, instead of focusing on real-world applications. But interestingly, as it turned out later, Simon's algorithm inspired Shor to design quantum algorithms to solve the discrete logarithm and integer factoring problems [4], which are widely used in cryptography.

Figure 2: Schematic summary of different types of quantum bits (top half) and their corresponding pros and cons (bottom half). \(F_{1}\) (\(F_{2}\)) is the one-qubit (two-qubit) gate fidelity.

_Quantum Fourier transform.--_ In the next stage of the development of quantum algorithms, several quantum algorithmic primitives emerged and appeared to be extensively used in designing new quantum algorithms. One such primitive is the quantum Fourier transform (QFT), which implements the Fourier transform matrix \[(F_{N})_{jk}:=\omega_{N}^{jk}/\sqrt{N} \tag{2}\] with a polynomial-sized quantum circuit on a quantum computer, where \(N:=2^{n}\) and \(\omega_{N}:=e^{2\pi i/N}\) is the \(N\)-th root of unity. Simon's algorithm uses a special instance of the QFT, namely the Hadamard transform, which corresponds to the case \(N=2\) and \(\omega_{N}=-1\). From a group-theoretic point of view, \(F_{N}\) is the Fourier transform over \(\mathbb{Z}_{N}\), the additive group of integers modulo \(N\), consisting of elements \(\{0,1,\cdots,N-1\}\); Simon's Hadamard transform is the Fourier transform over \(\mathbb{Z}_{2}^{n}\) [52]. There are two steps in Shor's factoring algorithm: a classical polynomial-time reduction from integer factoring to period finding, followed by an efficient quantum algorithm for solving the period finding problem [4], which uses the QFT over \(\mathbb{Z}_{N}\). Combining these two steps, Shor obtained a polynomial-time quantum algorithm for solving integer factorization, which has a super-polynomial speedup over the best classical algorithm. Kitaev gave a generalized QFT over an arbitrary finite Abelian group, with which he designed a polynomial-time quantum algorithm for finding the stabilizer of an Abelian group; the Abelian stabilizer problem includes integer factoring and discrete logarithm as special instances [53]. It is worth mentioning that Kitaev also gave the phase estimation algorithm in the same paper, which estimates the phase \(\phi\) in \(U\left|\psi\right\rangle=e^{i2\pi\phi}\left|\psi\right\rangle\) and can be used to solve the period finding problem [53]. In a coherent picture, all these problems belong to the category of hidden subgroup problems [54; 52].
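As a concrete illustration of Eq. (2), here is a minimal dense-matrix construction of \(F_{N}\) (ours; an actual circuit implementation would instead use \(O(n^{2})\) Hadamard and controlled-phase gates):

```python
import numpy as np

def qft_matrix(n):
    """Dense N x N quantum Fourier transform matrix of Eq. (2):
    (F_N)_{jk} = omega^{jk} / sqrt(N), omega = exp(2*pi*i/N), N = 2^n."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# Sanity check: F_N is unitary.
F = qft_matrix(3)
assert np.allclose(F @ F.conj().T, np.eye(8))
```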
Formally, in the search problem, given a function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) and the promise that there is exactly one \(x_{0}\) such that \(f(x_{0})=1\), the task is to find the target \(x_{0}\). Since there is no structure in this problem, a classical algorithm needs \(\Omega(2^{n})\) queries to find the target \(x_{0}\) with constant probability. Grover's algorithm allows a quantum computer to find the target with \(O(\sqrt{2^{n}})\) queries to the database, a quadratic speedup over classical computation. Grover's algorithm repeatedly applies the Grover iterate

\[G=(2\left|u\right\rangle\!\left\langle u\right|-I)(I-2\left|x_{0}\right\rangle\!\left\langle x_{0}\right|)\, \tag{3}\]

which is the product of two reflections; here, \(\left|u\right\rangle:=\frac{1}{\sqrt{N}}\sum_{y}\left|y\right\rangle\) is the uniform superposition and \(I-2\left|x_{0}\right\rangle\!\left\langle x_{0}\right|\) is the quantum query operator (a short simulation of this iteration is sketched at the end of this passage). Grover's algorithm is optimal in the sense that any quantum algorithm solving this problem requires at least \(\Omega(\sqrt{2^{n}})\) queries [57]. Grover's algorithm can be extended to amplitude amplification, which handles the case of multiple targets [58; 59; 60]. More precisely, given a quantum (or classical) algorithm \(\mathcal{A}\) applied to \(\left|0^{n}\right\rangle\) that outputs a correct target upon measurement with probability \(p\), one would need to run the algorithm \(O(1/p)\) times to obtain the targeted result; amplitude amplification obtains the target in time \(O(1/\sqrt{p})\), again a quadratic speedup. The fixed-point version of Grover's algorithm or amplitude amplification can even handle the case when the number of targets is unknown [61; 62; 63]. Grover's algorithm has inspired more applications than Shor's algorithm, as it can be used to speed up the search subroutine in many optimization problems [64; 65; 66; 67; 68].

One may also consider the search problem in an alternative paradigm, namely, Markov chains or random walks. The quantum version of random walks includes the continuous-time quantum walk [69; 70; 71] and the discrete-time quantum walk; we focus on the latter here. The framework of the discrete-time quantum walk was developed incrementally in several works [72; 73; 74; 75; 76; 77; 78]. Later, this framework was applied to obtain a different formulation of Grover's search algorithm [79]. In a breakthrough, Ambainis designed a quantum walk algorithm for element distinctness [80] that achieves better query complexity than a direct application of Grover's algorithm and matches the theoretical lower bound [81]. Ambainis' result was generalized subsequently [82; 83]; in particular, Szegedy gave a general framework for quantizing classical Markov chains [83], which was further improved in [84]. This quantum walk-based search algorithm finds many applications [85; 86], including triangle finding [87], testing group commutativity [88], etc.

_Hamiltonian simulation.--_ The third primitive that will be discussed in this review is Hamiltonian simulation, which approximates the time evolution operator \(e^{-iHt}\) of a Hamiltonian \(H\) on a quantum computer. In fact, Hamiltonian simulation is one of the initial motivations for developing quantum computing [2]. The first quantum algorithm for implementing the time evolution operator was given by Lloyd [89] and is based on the Lie-Trotter formula.
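Returning to Eq. (3), the following sketch simulates the Grover iterate directly (which is exponentially costly classically but faithful for small \(n\)) and recovers the marked item after only \(\lfloor\frac{\pi}{4}\sqrt{N}\rfloor\) iterations:

```python
import numpy as np

def grover_search(n, x0):
    """Simulate Grover's algorithm on n qubits with marked item x0."""
    N = 2 ** n
    u = np.full(N, 1 / np.sqrt(N))               # uniform superposition |u>
    oracle = np.eye(N); oracle[x0, x0] = -1      # I - 2|x0><x0|
    diffusion = 2 * np.outer(u, u) - np.eye(N)   # 2|u><u| - I
    G = diffusion @ oracle                       # Grover iterate, Eq. (3)
    state = u.copy()
    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
        state = G @ state
    return np.argmax(np.abs(state) ** 2), np.max(np.abs(state) ** 2)

guess, prob = grover_search(n=10, x0=123)
print(guess, round(prob, 4))  # finds 123 with probability close to 1,
                              # after only ~25 iterations for N = 1024
```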
For example, suppose that the Hamiltonian is a sum of local terms, \(H=H_{1}+H_{2}\), such that the time evolution of \(H_{1}\) and \(H_{2}\) can each be efficiently implemented on a quantum computer. The Lie-Trotter formula gives

\[e^{-iHt}=(e^{-iH_{1}t/s}e^{-iH_{2}t/s})^{s}+O(t^{2}/s)\, \tag{4}\]

which means that \(e^{-iHt}\) can be implemented by alternating \(H_{1}\) and \(H_{2}\) over incremental times \(t/s\) (a numerical illustration of this error scaling is sketched at the end of this passage). One can also use higher-order formulas [90, 91] to approximate the time evolution of \(H\). The general scheme is called the product-formula approach, or Trotterization, and was later applied to simulate sparse Hamiltonians [92, 93, 94]. Later, a Hamiltonian simulation method based on quantum walks [95, 96] was proposed, which achieved gate and query complexities linear in the evolution time \(t\), matching the lower bound imposed by the no-fast-forwarding theorem [93]. Another important approach to simulating Hamiltonian dynamics is to use a linear combination of unitaries (LCU) [97], which has been shown to have the optimal dependence on the simulation precision [98, 99]. The LCU approach was combined with the quantum walk approach to give an algorithm with optimal dependence on all parameters of interest, such as the precision, the sparsity of the Hamiltonian, the evolution time, etc. [100]. Moreover, a subroutine used in the LCU approach, later named block encoding, turned out to provide a versatile toolkit for designing quantum algorithms. In a series of works, Low et al. gave improved Hamiltonian simulation algorithms based on block encoding and quantum signal processing [101, 102, 103, 104]. The idea is to encode the Hamiltonian as a block of a unitary and then apply a polynomial transformation to the Hamiltonian using the quantum signal processing technique [101]. This method was further generalized to a framework called the quantum singular value transformation (QSVT) [105], which covers most existing quantum algorithms as special cases, achieving a grand unification of quantum algorithms [106]. Recently, however, an in-depth analysis of the Trotter error showed that the product-formula approach can achieve a competitive scaling of gate complexity compared to other approaches [107].

_Quantum linear algebra and quantum machine learning.--_ The previous primitives can be combined to design new quantum algorithms. Here, we discuss quantum algorithms for linear algebra and machine learning. Quantum linear algebra starts with the HHL algorithm, named after Harrow, Hassidim, and Lloyd [39]. The problem they considered is to solve linear systems of equations; that is, given a matrix \(A\) and a vector \(\mathbf{b}\), solve \(A\mathbf{x}=\mathbf{b}\) for \(\mathbf{x}\). Given a quantum state \(\left|b\right\rangle\) that encodes the vector \(\mathbf{b}\) in its amplitudes, HHL uses Hamiltonian evolution and phase estimation to approximately prepare the state \(\left|x\right\rangle=A^{-1}\left|b\right\rangle\). Provided that the whole description of the solution \(\mathbf{x}\) is not required and that the state \(\left|b\right\rangle\) can be prepared efficiently, the HHL algorithm can achieve exponential speedup over any classical algorithm [39]. HHL was applied to many quantum machine learning algorithms to obtain exponential quantum speedup, including quantum \(k\)-means clustering [108], quantum principal component analysis [109], quantum support vector machines [110], quantum data fitting [111], etc.; see Ref. [112] for a review of these algorithms.
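As a numerical illustration of Eq. (4), the sketch below uses randomly generated Hermitian matrices \(H_{1}\), \(H_{2}\) as stand-ins for local terms and shows the first-order Trotter error shrinking roughly as \(1/s\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random d x d Hermitian matrix, a stand-in for a local term."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(4), rand_herm(4)
t = 1.0
exact = expm(-1j * (H1 + H2) * t)
for s in [1, 10, 100, 1000]:
    step = expm(-1j * H1 * t / s) @ expm(-1j * H2 * t / s)
    trotter = np.linalg.matrix_power(step, s)
    err = np.linalg.norm(trotter - exact, 2)   # spectral-norm error
    print(f"s = {s:5d}   error = {err:.2e}")   # shrinks roughly as 1/s
```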
However, it is not clear whether such exponential quantum speedups in machine learning are artificial or not. Specifically, these quantum machine learning algorithms typically make strong input assumptions, such as quantum random access memory (QRAM) with access to the classical data [113]. It might be possible to derive efficient classical algorithms in an analogous setting. In 2018, the breakthrough work by Tang [114] gave a classical algorithm that dequantizes the quantum algorithm for recommendation systems [115], which was previously believed to have an exponential speedup, with only a polynomial slowdown. Tang's result stimulated a series of subsequent works on dequantizing various quantum machine learning algorithms, such as those for principal component analysis [116], solving low-rank linear systems [117, 118], solving low-rank semidefinite programs [119], etc. The sample-and-query access model [116] to the input data is assumed in those works, a classical analogue of the input assumptions in many quantum machine learning algorithms. Since the QSVT provides a primitive for unifying quantum algorithms, especially quantum linear algebra, these dequantization results were later extended to a unifying framework by dequantizing the QSVT [40]. Therefore, whether exponential quantum speedup can be achieved in machine learning is still under debate.

_Variational quantum algorithms.--_ Apart from quantizing machine learning algorithms with HHL, another line of exploration is inspired by neural networks: variational quantum algorithms. These are hybrid quantum algorithms that prepare parameterized quantum states on a quantum computer and use classical computers to optimize the parameters. The first variational quantum algorithm is the variational quantum eigensolver (VQE) [37], designed to tackle quantum chemistry problems. Its goal is to find the ground state and ground energy of local Hamiltonians (a toy emulation is sketched at the end of this passage). Before VQE, a common approach was to use quantum phase estimation [120, 121]. However, such an approach, just like other quantum algorithms, imposes a stringent coherence requirement on the quantum devices, which is challenging in the current NISQ era [35]. Since VQE, more work has been done in this direction. Inspired by the quantum adiabatic algorithm [122], Farhi et al. proposed the quantum approximate optimization algorithm (QAOA) for solving combinatorial optimization problems such as the max-cut problem [123]. The third family of variational quantum algorithms is quantum neural networks, which aim to solve machine learning problems such as classification [124, 125] and generative modeling [126, 127]. In the current NISQ era, although quantum computational supremacy has been demonstrated in various models [128, 129, 13, 15], these problems are not designed to be of practical relevance. Variational quantum algorithms are regarded as promising approaches for demonstrating "killer applications" on quantum computers. Such applications might appear in various areas including quantum chemistry, materials science, and biological science. For example, in quantum chemistry, VQE can be used to compute the low-energy eigenstates of electronic Hamiltonians, which helps understand chemical reactions and design new catalysts [130]. As for biological science, optimization is often involved in many fields like sequence analysis and functional genomics [131]. This opens opportunities for potential quantum speedup by using quantum neural networks, quantum variational auto-encoders [132], etc.
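The logic of VQE can be conveyed by a deliberately tiny, fully classical emulation: a one-parameter ansatz \(R_{y}(\theta)\ket{0}\) minimizing the energy of a toy Hamiltonian (our choice, \(H=Z+0.5X\)). On real hardware, the energy would instead be estimated from repeated measurements of the prepared state.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy VQE: find the ground energy of H = Z + 0.5 X with the
# one-parameter ansatz |psi(theta)> = Ry(theta)|0>.
X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
H = Z + 0.5 * X

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return float(psi @ H @ psi)                             # <psi|H|psi>

res = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(res.fun, np.linalg.eigvalsh(H)[0])  # both ~ -sqrt(1.25) ~ -1.118
```

The classical optimizer here plays exactly the role of the outer loop in a variational quantum algorithm; only the energy-evaluation subroutine would run on the quantum device.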
However, there is a long way to go along this path, and continuous effort should be put into the study of variational quantum algorithms. Moreover, to make them practical for near-term quantum devices, error mitigation techniques are likely also required [133, 134, 135, 136, 137].

## III Superconducting qubits

_Introduction.--_ Superconducting qubits are nonlinear superconducting circuits based on Josephson junctions, with quantized electromagnetic fields in the microwave frequency domain (typically 0.1-12 GHz). They operate at cryogenic temperatures (\(\sim 10\) mK; equivalent to \(k_{\rm B}T/h\sim 0.2\) GHz) provided by dilution refrigerators in order to suppress thermal fluctuations. Superconducting qubits have recently emerged as a leading platform for scalable quantum information processing. Recent milestones include the demonstration of quantum supremacy using a 53-qubit superconducting quantum processor [13], further strengthened with a 66-qubit processor [128]. Offering scalable high-fidelity control and configurable interactions, superconducting circuits have become a versatile playground for quantum computational tasks [138, 139, 140, 141, 125, 142, 143, 144, 145, 146, 147, 148, 149, 150], quantum annealing [151, 19], quantum chemistry [152, 153, 154, 155], exotic many-body physics [156, 157, 158, 159, 160, 161], new regimes of light-matter interaction [162, 163, 164, 165], quantum sensing [166, 167], and studying biological processes [168]. Some facts about superconducting qubits are summarized in Fig. 2(a), and a list of excellent reviews on superconducting qubits can be found in Refs. [169, 173, 174, 175, 176, 177, 178, 179, 180].

The charge carriers in superconductors, known as Cooper pairs, can flow without dissipation, a desirable feature for preserving the quantum coherence of a macroscopic system. More importantly, non-trivial quantum properties emerge from the integration of a special superconducting circuit element, the Josephson junction, usually in the form of a sandwich structure consisting of two superconducting electrodes separated by a nanometer-thick insulating layer (Fig. 3a); Cooper pairs can tunnel through the insulating barrier with a supercurrent no larger than the critical current \(I_{\rm c}\) of the junction, which depends on the material, thickness, and size [191, 192]. From a circuit point of view, a Josephson junction can be modeled as a native capacitor \(C_{\rm J}\) in parallel with a nonlinear inductor \(L_{\rm J}=\Phi_{0}/(2\pi I_{\rm c}\cos\phi)\), where \(\Phi_{0}=h/2e\) is the superconducting flux quantum and \(\phi\) is the superconducting phase difference across the junction. Two characteristic parameters of a Josephson junction are its Josephson energy \(E_{\rm J}=\Phi_{0}I_{\rm c}/2\pi\) and its charging energy \(E_{\rm C}=e^{2}/2C_{\rm J}\).

_Qubit construction.--_ There have been numerous explorations of how to construct a superconducting qubit using Josephson junctions. Traditionally, superconducting qubit designs are categorized into charge [193], flux [194, 195], and phase qubits [196]; all were successful in many early demonstrations [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208]. In recent years, the transmon qubit [209] and a modified version designed for scalability, the Xmon qubit [171], have become popular. These modified charge-qubit designs shunt a Josephson junction with a large capacitor \(C_{\rm S}\) to strongly suppress sensitivity to charge fluctuations [210].
Typically, this shunt capacitor lowers the effective charging energy \(E_{\rm C}=e^{2}/2(C_{\rm S}+C_{\rm J})\) into the regime \(E_{\rm J}/E_{\rm C}>50\) (Fig. 3b); as a result, the sensitivity to charge fluctuations is strongly suppressed. The fact that the transmon design has the simplest possible circuit geometry makes it more tolerant of fabrication variations and excellent at reproducibility. The Hamiltonian of the transmon qubit is the same as that of the charge qubit and can be expressed as

\[H=4E_{\rm C}n^{2}-E_{\rm J}\cos\phi, \tag{5}\]

where \(n\) is the number of Cooper pairs transferred across the junction; \(n\) and \(\phi\) satisfy the commutation relation \([\phi,n]=i\). Note that this Hamiltonian is identical to the one describing a quantum particle in a one-dimensional potential (Fig. 3c). In the \(C_{\rm S}\gg C_{\rm J}\) limit, the low-energy eigenstates are, to a good approximation, localized states in the potential well, and the superconducting phase \(\phi\) is small. We can therefore expand the potential term into a power series (up to an irrelevant constant):

\[-E_{\rm J}\cos\phi=\frac{1}{2}E_{\rm J}\phi^{2}-\frac{1}{24}E_{\rm J}\phi^{4}+\mathcal{O}(\phi^{6}). \tag{6}\]

The first, quadratic term leads to a quantum harmonic oscillator with equidistant energy levels \(\hbar\omega_{10}\), whereas the quartic term arising from the Josephson nonlinearity introduces anharmonicity to the level structure, allowing the transition energy \(\hbar\omega_{21}\) between the first excited state \(|1\rangle\) and the second excited state \(|2\rangle\) to differ from that between the ground state \(|0\rangle\) and the first excited state \(|1\rangle\) (a numerical diagonalization of Eq. (5) is sketched at the end of this passage). This nonlinearity allows one to define a qubit in the computational subspace consisting of the lowest two energy levels \(|0\rangle\) and \(|1\rangle\) only. The design may be further modified by replacing the single junction with a pair of junctions, so that the effective Josephson energy, and consequently the qubit frequency, can be tuned by adjusting the magnetic flux threading the two-junction loop. The transmon design can be implemented with a qubit circuit embedded in a three-dimensional cavity (Fig. 3d), in the form of lithographically defined circuits based on superconducting materials such as aluminum and niobium (Fig. 3e), and in many other variants [170; 171; 211; 212; 213; 214; 215; 216; 217; 218]. Qubit designs with alternative topologies, such as the capacitively shunted flux qubit [210; 214; 215; 216; 217], the fluxonium [218; 219; 220; 221], and the 0-\(\pi\) qubit [222], have also been under active development and have shown promising progress. By engineering the energy-level spectra and the coupling matrix elements, some of these designs have a better-defined two-level system and intrinsic protection against external perturbations, at the cost of increased circuit complexity. The remarkable flexibility in configuring the Hamiltonian offers a rich parameter space to search for desired qubit properties and therefore gives superconducting qubits the name "artificial atoms".

_Readout and initialization.--_ Having a well-defined two-level system is not enough for quantum computing; the ability to faithfully measure and initialize the qubit is also indispensable. The prevailing technique for discerning the qubit state is the dispersive readout scheme.
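Before detailing the dispersive readout, it is instructive to diagonalize Eq. (5) numerically in a truncated charge basis, where \(\cos\phi\) couples neighboring charge states; the parameter values in the sketch below are illustrative, not tied to any specific device:

```python
import numpy as np

def transmon_levels(EJ_over_EC, EC=0.25, ncut=20):
    """Diagonalize Eq. (5) in the charge basis |n>, n = -ncut..ncut,
    where cos(phi) couples neighboring charge states."""
    EJ = EJ_over_EC * EC
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4 * EC * n.astype(float) ** 2)          # 4 E_C n^2
    H += -EJ / 2 * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))  # -E_J cos(phi)
    E = np.linalg.eigvalsh(H)
    return E - E[0]                                     # energies above ground

E = transmon_levels(EJ_over_EC=50)          # typical transmon regime
f01, f12 = E[1], E[2] - E[1]
print(f"f01 = {f01:.3f}, f12 = {f12:.3f}, anharmonicity = {f12 - f01:.3f}")
# f12 < f01: the quartic term of Eq. (6) makes the spectrum anharmonic,
# which is what allows the lowest two levels to serve as a qubit.
```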
In this scheme, utilizing the cavity or circuit quantum electrodynamics (cQED) architecture, a qubit is strongly coupled to, but sufficiently detuned from, a readout resonator [223; 224]; the qubit induces a state-dependent shift in the resonator frequency, from which the qubit state can be inferred by interrogating the resonator. The cQED scheme has been successful in achieving fast, high-fidelity, non-demolition readout, aided by a suite of technologies invented around this approach. To avoid extra decoherence introduced by the readout resonator, a Purcell filter can be placed between the resonator and the external circuitry to reshape the environmental mode density seen by the qubit and the resonator [225; 226; 227; 228]; in this way, one may enhance the readout speed while inhibiting qubit relaxation. In addition, the use of Josephson parametric amplifiers (JPAs) [229; 230; 231; 232; 233; 234; 235; 236; 237] at the first stage of readout signal amplification also brings an immediate improvement to the measurement fidelity. It is noteworthy that other techniques, including multiplexed readout [238], multilevel encoding [239], and photon counting [240], also help improve measurement efficiency and scalability. Between consecutive measurements, superconducting qubits are typically initialized by simply waiting for the qubit to relax to its ground state. Various conditional and unconditional reset techniques have been developed for superconducting qubits to accelerate this process [241; 242; 243; 244; 245].

Figure 3: (a) Schematic of a Josephson junction composed of two superconductors separated by a thin insulating layer through which Cooper pairs can tunnel. Adapted from Ref. [169], Springer Nature Limited. (b) Circuit diagram of a transmon qubit consisting of a Josephson junction (Josephson inductance \(L_{\mathrm{J}}\), self-capacitance \(C_{\mathrm{J}}\)) and a shunt capacitor \(C_{\mathrm{S}}\) (\(C_{\mathrm{S}}\gg C_{\mathrm{J}}\)). (c) Potential profile and level diagram of the transmon qubit, a quantum anharmonic oscillator. (d) Image of a transmon qubit embedded in a three-dimensional cavity. Adapted from Ref. [170]. (e) Image of a planar transmon qubit. Adapted from Ref. [171]. (f) Photograph of the Sycamore quantum processor. Adapted from Ref. [13], Springer Nature Limited. (g) Device schematic of the _Zuchongzhi_ quantum processor. Adapted from Ref. [128]. (h) Photograph of a modular quantum processor consisting of two nodes. Adapted from Ref. [172], Springer Nature Limited.

_Gates.--_ Controlling superconducting qubits is challenging because performing a quantum logic or unitary operation is fundamentally an analog process governed by the Schrodinger equation, and the realistic Hamiltonian is often far from ideal. A single-qubit XY operation, a rotation around an axis in the XY plane of the Bloch sphere, is commonly implemented by driving the qubit with a resonant microwave pulse. For weakly anharmonic qubits such as the transmon, the resonant drive can induce unwanted leakage to higher excited states and additional phase errors; the derivative-removal-by-adiabatic-gate (DRAG) scheme, which adds an additional quadrature component to the pulse, has become a routine in pulse calibration to combat these coherent errors at no additional hardware cost [246, 247].
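A hedged sketch of a DRAG-style pulse envelope is given below: a Gaussian in-phase component plus a derivative-shaped quadrature scaled by the inverse anharmonicity. Sign and scaling conventions vary across the literature, and all numbers here are assumed, typical values rather than calibrated parameters.

```python
import numpy as np

def drag_pulse(t, t_gate=20e-9, sigma=5e-9, amp=1.0,
               alpha=-0.3e9 * 2 * np.pi):
    """Gaussian in-phase envelope plus a derivative-shaped quadrature.
    alpha is the (angular) anharmonicity; weighting the quadrature by
    ~ -d/dt / alpha suppresses leakage to the second excited state."""
    t0 = t_gate / 2
    gauss = amp * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    d_gauss = -(t - t0) / sigma ** 2 * gauss    # analytic time derivative
    return gauss, -d_gauss / alpha              # (I, Q) components

ts = np.linspace(0, 20e-9, 201)
I, Q = drag_pulse(ts)
print(I.max(), np.abs(Q).max())  # Q is a small, derivative-shaped correction
```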
The single-qubit phase gate, or Z gate, a rotation around the Z axis, can be realized by combining XY rotations, by applying a physical Z pulse provided the qubit frequency is adjustable, or by performing the more efficient virtual Z gate through shifting the phases of subsequent XY rotations [248]. Heat dissipation is another important concern for cryogenic experiments when scaling to large numbers of qubits; a more energy-efficient approach to single-qubit operations using single-flux-quantum (SFQ) circuits has been demonstrated recently [249]. Entangling operations are currently the performance bottleneck of existing quantum processors. Among the numerous entangling gate schemes, most operate between two qubits and generally belong to two families. One general approach is to frequency-tune the relevant energy levels into resonance to initiate interactions; related demonstrations include the implementation of iSWAP-family gates [250, 251] and the controlled-Z gate [252, 253, 254, 255]. The other approach is to apply microwave pulses at certain non-local transitions; examples include the cross-resonance gate [258, 259], the resonator-induced phase gate [260], and parametrically driven gates [261, 262] (a minimal simulation of the exchange-type interaction underlying iSWAP-like gates is sketched at the end of this passage). The gist of obtaining high-fidelity two-qubit gates is to engineer an effective interaction that is strong during gate operation, for a short gate time, yet as weak as possible outside the gate window, to avoid unwanted interactions; in other words, a high on/off ratio. In a fixed-coupling architecture, where the qubit-qubit coupling strength \(g\) is almost constant, a straightforward way to turn the interaction on or off is to tune the qubit frequencies into or away from resonance. However, as the qubit-qubit connectivity increases, each qubit sees more transitions in its surroundings, and it becomes increasingly difficult to manipulate the whole system in a clean fashion. This is known as the frequency-crowding problem, one of the main challenges in scaling up quantum processors. The problem also exists for alternative coupling schemes such as all-to-all connection via a bus resonator [263]. It may be resolved by tunable-coupling schemes in which the coupling strength \(g\) can be independently controlled over a large dynamic range [264, 265, 266, 267, 268, 269, 270, 271]. In recent years, a tunable-coupling architecture based on native capacitive coupling and an interference effect [272] has become a trending solution; many research groups have made tremendous progress in gate fidelities, including some results approaching the 99.9% mark [273, 274, 275, 276, 277, 278, 279, 280]. Typical performance figures of superconducting processors are summarized in Table 1.

_Decoherence.--_ Quantum information can be quickly destroyed by decoherence, and superconducting qubits are extremely susceptible to external fluctuations due to their macroscopic nature. One immediate solution is, of course, to make the qubit lifetime longer. Ever since the first observation of quantum coherence in superconducting qubits [12], the lifetime of the qubit has been improved by six orders of magnitude, from nanoseconds to milliseconds [282, 283, 284], in about 20 years. This remarkable progress is attributed to a combination of advances in design, materials, fabrication quality, and the testing environment.
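To make the resonant exchange mechanism concrete, the following minimal simulation (our sketch, with an assumed 10 MHz coupling strength) evolves two resonant qubits under an XY coupling \(H=g(XX+YY)/2\) and recovers an iSWAP-class gate at a quarter of the swap period:

```python
import numpy as np
from scipy.linalg import expm

# XY (flip-flop) coupling between two resonant qubits:
# H = g (XX + YY)/2, which exchanges |01> and |10>.
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]])
g = 2 * np.pi * 10e6                       # assumed 10 MHz coupling
H = g / 2 * (np.kron(X, X) + np.kron(Y, Y))

t = np.pi / (2 * g)                        # evolve for a quarter swap period
U = expm(-1j * H * t)
print(np.round(U, 3))
# |01> and |10> are fully exchanged (up to a phase convention), i.e. an
# iSWAP-class gate; half this duration yields an entangling sqrt(iSWAP).
```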
The current common belief is that spurious two-level systems (TLSs) residing in the vicinity of the qubits are a major source of decoherence [285] and of unpredictable fluctuations in coherence and qubit frequencies, which can be troublesome in large-scale implementations [286, 287, 288, 289, 290]. Besides coherence improvements on the hardware side, another way to combat noise and decoherence is through quantum control methods. A particularly useful technique is dynamical decoupling (DD) [291], which uses tailored pulse sequences to correct for coherent noise, in particular the notorious \(1/f\) noise [292] that is ubiquitous in these solid-state devices. Designing an optimal sequence requires detailed knowledge of the noise, such as its spectral properties, which may be extracted using various techniques at different frequency ranges [293, 294, 295, 251]. Given the limited coherence, the performance of a quantum processor may also be enhanced through optimized quantum compiling, i.e., translating high-level operations into shorter sequences of native gates [299]. Compiling on a superconducting quantum processor can be exceptionally challenging due to the planar geometry and limited connectivity; often, the final sequence to execute is too time-consuming. An effective strategy is to fully exploit the hardware capabilities and diversify the available gate alphabet to optimize compilation. Recent progress on continuous gate sets, multi-qubit gates, and qudit operations has shown considerable potential in this respect [300, 301, 302, 303, 304].

_Quantum error correction.--_ Since the state-of-the-art gate error rate (\(10^{-3}\)) is many orders of magnitude higher than what a logical computation would require (\(10^{-12}\)-\(10^{-15}\)), QEC is necessary for building a universal quantum computer. Surface codes [305, 306], which encode logical qubits into a square lattice of physical qubits, are appealing for planar architectures (a toy illustration of redundancy-based error suppression is sketched at the end of this passage). Recently, we have observed a surge of exciting experimental developments in this respect [307, 308, 309, 21, 22, 24]. In some of these experiments, the performance is getting close to, or partially exceeds, the QEC threshold. Still, it remains challenging to achieve a substantial error-correction gain and, most importantly, to reproduce the performance at an even larger scale. In the near future, a logical qubit made of a few hundred to a thousand physical qubits is highly anticipated; in the next five to ten years, we may have an idea of whether a fault-tolerant quantum computer is feasible and how powerful it can be. A related issue attracting increasing attention is cosmic rays, which can cause chip-wide failures and are catastrophic to surface codes [310, 311, 312]. Another promising route to QEC is bosonic codes, where logical qubits are encoded in microwave photon states of three-dimensional superconducting cavities. Depending on how a logical qubit is encoded into harmonic oscillator states, there are different kinds of bosonic codes [313, 314, 315]. The cat codes and associated variants use superpositions of coherent states (photonic cat states) of definite parity as logical code words [316, 317, 318, 319, 320]; the binomial codes instead use code words with definite photon-number parity [321, 322]; the Gottesman-Kitaev-Preskill (GKP) codes implement a lattice of states in phase space [323, 324, 325, 326, 327], with the advantage that errors, measurements, and gates are simple displacements of the oscillators.
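As promised above, here is a toy illustration of the redundancy-based error suppression underlying all such codes, using a three-qubit bit-flip repetition code (far simpler than the surface or bosonic codes discussed in this section): below a threshold, encoding strictly reduces the logical error rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def logical_error_rate(p, n_trials=200_000):
    """Three-qubit bit-flip repetition code under i.i.d. flips with
    probability p; decoding is a simple majority vote."""
    flips = rng.random((n_trials, 3)) < p
    return np.mean(flips.sum(axis=1) >= 2)   # majority of qubits flipped

for p in [0.01, 0.05, 0.1]:
    pl = logical_error_rate(p)
    print(f"p = {p:.2f}  ->  p_L ~ {pl:.4f}  (analytic {3*p**2 - 2*p**3:.4f})")
# For p < 1/2, p_L = 3p^2 - 2p^3 < p: redundancy suppresses errors.
```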
To date, only bosonic codes have reached the break-even point in QEC experiments, meaning that the error-corrected qubit has a longer lifetime than its uncorrected counterpart. This is because microwave photons have fewer dominant error channels and three-dimensional cavities usually have higher quality factors.

_Scalability.--_ Lastly, we would like to touch upon the most pressing question: how to make the superconducting quantum processor more scalable. With continuous improvements in planar circuit design and fabrication and the development of flip-chip packaging, dozens of superconducting qubits have been integrated on a single processor so far, allowing for the demonstration of quantum supremacy [13] (Fig. 3f) and quantum computational advantage [128] (Fig. 3g). It is worth emphasizing that simply printing thousands of qubits is straightforward; the real challenge is to achieve high-fidelity operations for all qubits simultaneously. For this purpose, many existing architectures may need to be reinvented. First of all, hosting more qubits in a limited space requires reducing the qubit footprint. Recent developments show that the shunt capacitor can be miniaturized by 100-fold or more using two-dimensional materials while maintaining coherence [328, 329, 330]. Even if the qubits can be densely packed on a chip (size \(L\times L\)), it is nevertheless extremely difficult to route all the control wires from the perimeter (length \(\propto L\)) to the individual qubits (count \(\propto L^{2}\)) due to the mismatched scaling laws, let alone to avoid crosstalk between the wires. In recent years, there have been substantial efforts to exploit the third dimension to relieve this pain, with various technologies borrowed from semiconductor chip packaging, such as flip-chip bonding and through-silicon vias [331, 332, 333, 334, 335, 336, 337, 338]. Aside from expanding the space for wiring, a different approach is to reuse a wire for multiple targets. Signal multiplexing and control-line sharing schemes can alleviate the problem for future large-scale devices [339, 340, 304]; they also help reduce the cable density inside and outside the dilution refrigerator. In the future, we may end up with insurmountable engineering challenges, including available wafer size, device yield, and crosstalk, all constraining the scalability of monolithic quantum processor designs. This suggests the desirability of developing alternative modular approaches, where smaller-scale quantum modules are individually constructed and calibrated, then assembled into a larger architecture using high-quality quantum-coherent interconnects [341, 342, 343, 344, 345]. Several recent experiments have demonstrated deterministic quantum state transfers (QSTs) between two superconducting quantum modules, with interconnects provided by commercial niobium-titanium (NbTi) superconducting coaxial cables [346, 347, 348], copper coaxial cables [349], and aluminum waveguides [350], showing fidelities up to \(\sim 80\%\), primarily limited by lossy components including connectors, circulators, and printed-circuit-board traces. More recent efforts using wirebond [172] or clamped [351] connections between the quantum modules and the superconducting cables have eliminated the need for normal-metal components, improving cable quality factors to \(\sim 5\times 10^{4}\) and inter-module QST fidelities to \(\sim 90\%\) (Fig. 3h).
Flip-chip modular approaches have also been pursued [352], where the qubits on separate chips are closely spaced and directly coupled, achieving high fidelities while retaining many of the benefits of a modular architecture. In addition to the architectural design of quantum processors, supporting technologies are also crucial for scaled implementations. As a result of non-ideal fabrication conditions, the critical current of a Josephson junction usually varies by a few percent, equivalent to a few hundred megahertz in terms of qubit frequency; such unpredictable variation severely affects the quality of qubit operations. Techniques for improving junction uniformity during and after fabrication may open up new possibilities in hardware and software design [353, 354, 355, 356]. Moreover, the capacity of a dilution refrigerator will ultimately be limited by its cooling power; promising solutions include careful wiring plans [357] and energy-efficient cryogenic electronics [358, 359, 360]. Simultaneous high-fidelity operations require low crosstalk, making crosstalk mitigation essential. Besides optimization through design and packaging [361, 362, 363, 364, 365, 216], various control techniques have been developed to reduce different kinds of crosstalk, such as microwave signal crosstalk [366, 279], spectator effects [367, 368, 369], and residual \(ZZ\) interactions [370, 371]. In the future, by integrating these various technologies in a large system, more powerful quantum processors based on superconducting qubits can be anticipated.

Table 1: Typical performance reported for superconducting qubits. \(t_{\rm 1q}\) (\(t_{\rm 2q}\)) is the duration of a one-qubit (two-qubit) gate; \(t_{\rm r}\) (Err\({}_{\rm r}\)) is the duration (error rate) of the readout.

\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
No. of qubits & \(T_{1}\) (\(\mu\)s) & \(T_{2}^{*}\) (\(\mu\)s) & \(t_{\rm 1q}\) (ns) & Err\({}_{\rm 1q}\) (\(10^{-3}\)) & \(t_{\rm 2q}\) (ns) & Err\({}_{\rm 2q}\) (\(10^{-3}\)) & \(t_{\rm r}\) (\(\mu\)s) & Err\({}_{\rm r}\) (\(10^{-2}\)) & Fridge & Size \\
\hline
53, 66 [128, 13] & \(16-30.6\) & \(\sim 5.3\) & \(\sim 25\) & \(\sim 1.4\) & \(\sim 12\) & \(\sim 5\) & \(\sim 1\) [281] & \(\sim 3.1\) [281] & \(\sim 20\) mK & \(\sim 1\) mm\({}^{2}\) \\
\hline
\(<10\) [273, 274, 275, 276, 277, 278, 279, 280] & \(15-76\) & \(12-105\) & \(\sim 30\) & \(\sim 1\) & \(10-200\) & \(\sim 1.5\) & & & & \\
\hline
\end{tabular}

## IV Trapped-ion qubits

_Introduction.--_ Trapped-ion systems are the leading physical platform in the pursuit of fault-tolerant quantum computing. Laser-cooled atomic ions confined in an ultra-high-vacuum environment are well isolated from noise, and high-quality qubits can be encoded in a stable pair of electronic energy levels of each ion, as shown in Fig. 2(b). Ion qubits hold the longest coherence times among all qubit systems [372; 373; 374] and can be initialized and measured with extremely high fidelity [375; 376; 372]. Quantum logic operations are typically performed through tailored laser-ion or microwave-ion interactions, and gate fidelities achieved experimentally on a few ion qubits have gone well beyond the fault-tolerance threshold [372; 377; 378; 379].
Ion-based quantum processors can already precisely manipulate dozens of qubits [380; 381], and quantum algorithms such as Shor's algorithm and Grover's search algorithm have been demonstrated in small-scale systems [382; 383]. Meanwhile, quantum simulators with up to 53 qubits have been used to study various novel features of complex many-body systems represented by quantum spin models [384; 385; 386; 387; 388; 389; 390; 391; 392]. This progress illustrates the great potential of large-scale trapped-ion systems for ultimate quantum computing.

_Ion qubit.--_ Although hundreds of atomic species exist in nature, hydrogen-like ions are preferred for trapped-ion quantum computing due to their relatively simple atomic structure. Alkaline-earth ions like Be\({}^{+}\) [377], Mg\({}^{+}\) [393], Ca\({}^{+}\) [378; 379; 381; 394], Sr\({}^{+}\) [395], Ba\({}^{+}\) [396; 397], and rare-earth ions like Yb\({}^{+}\) [380; 398; 399; 400] are the most frequently used in current research. A qubit can be encoded in a pair of energy levels of a single ion; a representative encoding scheme employs a combination of levels belonging to the ground manifold \({}^{2}S_{1/2}\) and the long-lived metastable manifold \({}^{2}D_{5/2}\). This scheme is the preferred choice for even-mass isotopes, such as \({}^{40}\)Ca\({}^{+}\) [381] and \({}^{88}\)Sr\({}^{+}\) [395]; these qubits have energy gaps on the order of optical frequencies and are therefore called optical qubits. Although the typical lifetimes of metastable levels can reach a few seconds, they ultimately limit the coherence time of optical qubits. For odd-mass isotopes, the encoding scheme is to utilize the hyperfine splittings of the ground manifold \({}^{2}S_{1/2}\) induced by the non-zero nuclear spin. The lifetime of ground-state hyperfine levels can approach the age of the universe, resulting in an extremely long relaxation time (\(T_{1}\)) compared to that of optical qubits. Meanwhile, a suitable pair of hyperfine levels under an appropriate external magnetic field forms a so-called "clock state", whose energy gap is first-order insensitive to the static magnetic field, giving a relatively long coherence time (\(T_{2}\)). It has been experimentally observed that the \(T_{2}\) time of a single \({}^{43}\)Ca\({}^{+}\) ion can reach 50 s [372]. This record was then extended to 10 minutes in a \({}^{171}\)Yb\({}^{+}\) ion qubit by using DD pulses and sympathetic cooling assisted by a \({}^{138}\)Ba\({}^{+}\) ion [373]. Most recently, a one-hour coherence time has even been approached by further reducing noise from the external magnetic field and leakage from the microwave sources [374]. Such long coherence times allow systems to execute millions of gate operations before losing their quantum features. The cycling transition \({}^{2}S_{1/2}\leftrightarrow{}^{2}P_{1/2}\) facilitates extremely low state-preparation-and-measurement (SPAM) errors on ion qubits. Qubit state initialization is achieved by optical pumping. By choosing proper polarizations and frequencies of the pumping lasers, a certain energy level of the qubit can be made a dark state, to which the ion is pumped with high probability and high speed. A typical initialization process takes a few microseconds, and the infidelity can be suppressed to close to \(10^{-4}\) [372].
State measurement is implemented by resonantly coupling one of the qubit levels to a short-lived manifold and collecting the resulting fluorescence photons. The projected qubit state in a single shot can be distinguished by determining whether the number of collected photons reaches a certain threshold (a short numerical sketch of this threshold discrimination is given at the end of this passage). Depending on the photon collection rate, the measurement duration can vary from several microseconds to milliseconds, while the error can reach below \(10^{-3}\) [375; 376; 372]. This error can be suppressed to around \(10^{-4}\) for ions with long-lived levels suitable for state shelving [401]. Several other methods, such as adaptive analysis or time-stamping of arriving photons [375; 401; 402], are employed to further reduce measurement infidelity or increase detection speed; in addition, machine learning methods can be utilized for multi-qubit detection to reduce crosstalk errors [403; 404].

_Quantum gates.--_ Quantum gates on ion qubits are typically performed by coupling the ions to external laser or microwave fields, depending on the qubit encoding scheme. For example, single-qubit rotations on optical qubits can be applied through optical quadrupole transitions driven by a narrow-linewidth laser, while those on hyperfine qubits can be realized using microwaves or stimulated two-photon Raman transitions. Error rates below \(10^{-4}\) have been reached for both quadrupole transitions on optical qubits and Raman transitions on hyperfine qubits [377; 378]. An error rate of \(10^{-6}\) has even been achieved on microwave-driven hyperfine qubits [372]. Although high-fidelity single-qubit rotations are readily accessible experimentally, the quality of current quantum processors is mainly limited by the performance of entangling operations. The first proposal for a two-qubit gate on ion qubits, the Cirac-Zoller gate [6], is challenging to scale up due to its stringent requirement of ground-state cooling and its sensitivity to thermal excitation of the motional modes. However, this proposal inspired the idea of using the collective motional modes of an ion chain to engineer effective qubit-qubit couplings. The entangling schemes used today can be categorized into Molmer-Sorensen gates [411, 412] and light-shift gates [413, 414], both of which rely on state-dependent forces. These schemes show excellent performance in experimental demonstrations. An error rate below \(8\times 10^{-4}\) has been achieved for the Molmer-Sorensen gate with two \({}^{9}\)Be\({}^{+}\) ions [377], and below \(9\times 10^{-4}\) for the light-shift gate with two \({}^{43}\)Ca\({}^{+}\) ions [378]. Recently, light-shift gates on optical qubits were theoretically investigated and then experimentally demonstrated in a two-ion \({}^{40}\)Ca\({}^{+}\) system [379, 415].
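The threshold discrimination mentioned above can be sketched with Poisson counting statistics. The mean photon numbers below are assumed, illustrative values; a real analysis must additionally account for effects such as off-resonant pumping of the dark state during detection.

```python
import numpy as np
from scipy.stats import poisson

def readout_error(mean_bright=20.0, mean_dark=0.5, threshold=5):
    """Fluorescence-detection errors for a photon-count threshold:
    a 'bright' (fluorescing) state is declared when counts >= threshold."""
    p_miss_bright = poisson.cdf(threshold - 1, mean_bright)  # bright read as dark
    p_false_dark = poisson.sf(threshold - 1, mean_dark)      # dark read as bright
    return p_miss_bright, p_false_dark

for thr in [3, 5, 8]:
    eb, ed = readout_error(threshold=thr)
    print(f"threshold = {thr}: bright error = {eb:.1e}, dark error = {ed:.1e}")
```

Sweeping the threshold as above exposes the trade-off between the two error types; the optimum depends on the separation of the bright and dark count distributions, i.e. on the photon collection rate.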
For this light-shift gate on optical qubits, a gate infidelity as low as \(6\times 10^{-4}\) has been reached, representing the best entangling gate achieved to date. Laser fields are mostly utilized to drive entangling gates on ion qubits because their large spatial electric-field gradients provide efficient ion-motion coupling. However, microwave-driven entangling gates are also pursued due to the extreme stability of long-wavelength microwaves [416, 417]. Ion-motion coupling induced by microwave fields can be achieved by placing magnetic-field-sensitive qubits in static magnetic fields with large spatial gradients, or by exploiting near-field oscillating microwaves. The former scheme suffers from short coherence times induced by fluctuating magnetic fields, which can be overcome by utilizing microwave-dressed qubits [418, 419], while the latter requires microwave sources close to the ions, so crosstalk must be carefully managed. Experiments have demonstrated gate fidelities of about 98.5% [398] and 99.7% [420, 421] for the two schemes, respectively.

Table 2: Selected state-of-the-art performance of ion qubits. Only data from peer-reviewed publications are included; two-qubit gate fidelities are estimated from the fidelities of the prepared Bell states.

\begin{tabular}{c|c|c}
\hline
Qubit type & Hyperfine qubit & Optical qubit \\
\hline
\(T_{2}\) & 50 s [372]; 5500 s [374] (with DD) & 0.2 s [410] \\
\hline
SPAM error & \(6.9\times 10^{-4}\) [376] & \(8.7\times 10^{-5}\) [401] \\
\hline
1Q gate duration & 1-10 \(\mu\)s typical & 1-10 \(\mu\)s typical \\
1Q gate fidelity & 0.99996 [377] & 0.99995 [410] \\
\hline
2Q gate duration & 10-100 \(\mu\)s typical & 10-100 \(\mu\)s typical \\
2Q gate fidelity & 0.9991 [377] & 0.9994 [379] \\
\hline
Maximally entangled qubits & \multicolumn{2}{c}{24 [381]} \\
\hline
Environment & \multicolumn{2}{c}{Ultra-high vacuum \(<10^{-11}\) Torr} \\
\hline
\end{tabular}

Figure 4: (a) Three-dimensional Paul trap and a captured long ion chain. Compared to the conventional four-rod trap, the electrodes shown here are blade-shaped to enhance optical access. A one-dimensional chain of ions is trapped along the null line of the radio-frequency field, and a tightly focused laser-beam array individually controls the ion qubits. (b) QCCD architecture, adapted from Ref. [405]. Large numbers of ions are distributed in large chip-type traps with multiple trapping zones. Ions can be manipulated independently in different functional zones for logic operations, storage, or readout, and quantum information can be interchanged by transporting ions between zones. (c) Remote-ion entanglement, adapted from Ref. [405]. Ions in different traps can be heraldedly entangled by generating ion-photon entanglement and then applying a Bell measurement to the photons, paving the way for large-scale distributed systems. (d) Surface trap fabricated by Sandia National Laboratories, adapted from Ref. [405]. (e) Integrated photonic system delivering laser beams to the ion positions; shown is a surface trap with multi-wavelength integration by the MIT group, adapted with permission from Ref. [406], Springer Nature Limited. (f) On-chip detection of an ion qubit; several groups have demonstrated integrated single-photon detectors on fabricated surface traps [407, 408, 409].
Moreover, a recent experiment has shown that an almost perfectly symmetric Bell state can be generated with a microwave-driven, laser-free gate [393]. These advances promise a scalable route to ion-based quantum computing with full microwave control [422]. However, most experimentally implemented entangling operations so far are relatively slow, usually on the order of tens to hundreds of microseconds, limiting the clock speed of ion-based quantum processors. Fast gate implementation has therefore become an important topic in recent research. The straightforward way to speed up entangling gates is to increase the laser power and thereby the laser-ion coupling strength. Along this route, an entangling gate with a duration of 1.6 \(\mu\)s has been achieved with the fidelity maintained at 99.8% [423]. However, the gate fidelity drops drastically to around 60% when the gate duration is further reduced to 480 ns, due to the breakdown of the Lamb-Dicke (L-D) approximation (the L-D parameter quantifying this regime is evaluated in the short sketch at the end of this passage); this might be remedied by accounting for higher-order qubit-motion couplings [424]. Another way to achieve fast gates is to employ a sequence of ultrafast laser pulses that impose state-dependent kicks on the ion qubits [425, 426, 427], which does not require the ions to remain in the L-D regime. A Bell state with 76% fidelity was prepared within 1.96 \(\mu\)s in a recent experimental demonstration, with the main infidelities coming from imperfect kick control and off-resonant coupling to undesired energy levels [428]. High-repetition-rate pulsed lasers can further improve the gate speed [429]. Although recent implementations of fast gates still have limited fidelities, these schemes all show good scalability.

_Scalability.--_ A straightforward way to scale up ion-based quantum processors is to trap multiple ions in a linear array, as illustrated in Fig. 4(a). By exploiting the collective motional modes of the entire ion chain, entangling gates can be applied to any two ion qubits by coupling to one or more motional modes. In the latter case, time-modulated state-dependent forces are required to simultaneously decouple the multiple motional modes from the ion qubits, guaranteeing high-fidelity operations [430, 431, 432, 433, 434, 435, 436]. Along this route, up to 14 \({}^{40}\)Ca\({}^{+}\) ion qubits were first used to generate Greenberger-Horne-Zeilinger (GHZ) states [437], and the qubit number was recently increased to 24 [381]. Meanwhile, a programmable trapped-ion quantum processor consisting of 5 individually controlled \({}^{171}\)Yb\({}^{+}\) ion qubits was implemented in 2016 [438], and this system was extended to 11 qubits in 2019 [380]. By now, multiple research groups around the world have realized quantum processors with long ion chains [395, 399]. One distinct advantage of using an ion chain is the so-called full connectivity [439], which, as already mentioned, allows ion qubits confined in the same potential to be directly entangled even if they are not spatially adjacent. This feature makes the decomposition of quantum circuits more efficient and makes multi-qubit entangling gates possible. Several theoretical works have pointed out that multi-qubit gates might bring polynomial or even exponential speedups to certain quantum tasks [440, 441, 442, 443]. Therefore, researchers have been eagerly exploring scalable ways to achieve multi-qubit gates in recent years [444, 445, 399].
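For reference, the L-D parameter \(\eta=\Delta k\sqrt{\hbar/2m\omega}\) can be evaluated directly. The numbers below (a \({}^{40}\)Ca\({}^{+}\) ion, a 2 MHz trap frequency, a 729 nm laser) are assumed, typical values rather than those of any cited experiment:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
amu = 1.66053906660e-27  # kg

def lamb_dicke(mass_amu, trap_freq_hz, wavelength_m, counterprop=False):
    """eta = Delta-k * sqrt(hbar / (2 m omega)); counter-propagating
    Raman beams give Delta-k ~ 2k, a single beam gives Delta-k = k."""
    m = mass_amu * amu
    omega = 2 * np.pi * trap_freq_hz
    dk = (2 if counterprop else 1) * 2 * np.pi / wavelength_m
    return dk * np.sqrt(hbar / (2 * m * omega))

# Assumed, typical numbers: 40Ca+ ion, 2 MHz trap, 729 nm quadrupole laser
print(lamb_dicke(40, 2e6, 729e-9))  # ~0.07, i.e. deep in the L-D regime
```

When \(\eta\ll 1\), the ion's motional wavepacket is small compared with the driving wavelength and the usual gate schemes apply; the fast-gate experiments discussed above push outside this regime.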
However, this linear-chain architecture also has several drawbacks that make it hard to reach a large scale. For example, the laser power required to entangle ion qubits in a chain increases with the size of the chain; cooling of a long chain also becomes imperfect, while gate operations become more sensitive to external noise. To scale further, multiple ion chains can be trapped in independent potentials, with link channels constructed for interconnection. One representative architecture is the quantum charge-coupled device (QCCD) proposed in 2002 [445] (see Fig. 4(b)). Links between chains are achieved by modifying local electric potentials to physically redistribute ions between trap regions. To this end, shuttling operations [446], such as linear transport, splitting or merging of ion chains, and position swaps, must be included, together with quantum gates on local chains, as basic operations of the quantum processor. These operations must be performed fast enough that they do not become bottlenecks for the processing speed. Several fast shuttling methods have been investigated and demonstrated to satisfy these two requirements simultaneously [447, 448], promising a reliable highway for ion qubits in a large-scale QCCD architecture. However, the complexity of the ion trap is significantly increased compared to that used for a single linear chain: a trap with numerous independent control electrodes is required to realize multiple trap regions and precise control of the ion shuttle. Microfabricated chip traps are a satisfactory solution to these complexities [449, 450]. Recently, a series of 1D chip traps and 2D traps with X-type [451], Y-type [452], or T-type [453] junctions have been presented. Utilizing such well-controlled traps, a four-qubit GHZ state has been prepared in a shuttling-based manner [394], and quantum gate teleportation has also been demonstrated [454]. Moreover, high-quality quantum processors based on the QCCD architecture have shown excellent performance according to quantum volume measurements [455]. Towards large-scale surface traps, however, several challenging issues remain. One critical problem is anomalous heating of the motional modes of ion chains [456, 457, 458], which destabilizes ion chains and becomes one of the main error sources for quantum gates and ion shuttling. Although many efforts have been made to reveal the origins of this heating effect, the problem is still not well solved. Other issues, such as the radio-frequency (RF) potential barrier in junction transport [459] and the relatively low trap depth, also hinder the development of scalable QCCD-based quantum processors. Nevertheless, the QCCD architecture remains an outstanding approach for large-scale trapped-ion quantum computing. Another natural choice to link ion qubits in different regions is to use photons [460]. By exciting an ion qubit to a short-lived ancilla atomic level, the polarization of the spontaneously emitted photon becomes entangled with the final ion state. By applying a Bell measurement to the photon pair from two different ions, heralded entanglement can be generated between the ion qubits, conditioned on the measurement outcomes, as shown in Fig. 4(c).
This remote-entangling method makes it feasible to entangle ion qubits in different vacuum systems even when they are far apart, leading to the paradigm of distributed quantum computing. The generation rate of remote entanglement is fundamentally determined by the scattering rate of the ancilla level; in practice, however, it is mainly limited by the collection rate of the emitted photons. In recent experimental demonstrations, a generation rate of 4.5 Hz [461] was first realized and then improved to 182 Hz [462], with the best fidelity of the heralded entangled state being 94%. This rate is still much lower than the speed of gates that directly entangle qubits in the same trap. Several methods have been proposed to further increase the generation rate, such as enlarging the numerical aperture of the photon-collection optics and increasing the quantum efficiency of the single-photon counters. Significant improvement might be achieved by placing ion qubits in a high-finesse micro-cavity, enhancing the spontaneous emission through the Purcell effect [463]. Moreover, the conversion of single photons from the visible regime to telecom wavelengths has been demonstrated recently [464], although with quite low conversion efficiency, making it possible to build distributed quantum systems with ultra-low optical loss. Practical remote-ion entanglement would facilitate the construction of large-scale quantum computing platforms as well as quantum networks. Here we briefly discuss hybrid-ion systems. When generating remote entanglement between two traps or applying mid-circuit measurement to certain ion qubits of an ion register, we want to resonantly excite only the targeted ion qubits without disturbing the others. One solution is to let the measured ion and the others belong to different species, so that the resonance frequencies are quite different and crosstalk is strongly suppressed. Consequently, entangling gates between ion qubits of mixed species are required in hybrid-ion systems. High-fidelity entangling operations on mixed-species ion qubits have been demonstrated with \({}^{9}\)Be\({}^{+}\)-\({}^{25}\)Mg\({}^{+}\) [454], \({}^{40}\)Ca\({}^{+}\)-\({}^{43}\)Ca\({}^{+}\) [465], \({}^{43}\)Ca\({}^{+}\)-\({}^{88}\)Sr\({}^{+}\) [466], \({}^{171}\)Yb\({}^{+}\)-\({}^{138}\)Ba\({}^{+}\) [467, 468], and even a long chain of \({}^{9}\)Be\({}^{+}\)-\({}^{9}\)Be\({}^{+}\)-\({}^{40}\)Ca\({}^{+}\) [469]. Meanwhile, mixed-species systems make sympathetic cooling feasible, allowing the ion register to be cooled without destroying the stored quantum information [470, 471, 472]; this is valuable for suppressing motional excitation during ion shuttling in the QCCD architecture. Meanwhile, proposals for hybrid encoding, which use multiple energy levels of a single ion to encode different qubit types, have been made recently to construct hybrid systems with a single ion species [473]. The interconversion between different qubit encodings on a single ion has been demonstrated experimentally [474], which may open a new path toward scalable trapped-ion quantum computing.

_System integration.--_ System integration is indispensable for building large-scale trapped-ion quantum computers. One typical example is the microfabricated surface traps mentioned above. In the past decade, researchers have carried out several important works to further advance the integration of trapped-ion systems, on-chip integrated optics being one of them.
Laser beams can be delivered and tightly focused at the positions of the ions by embedding optical waveguides beneath the surface trap and fabricating properly designed grating couplers at the end of each waveguide. Optical integration from single to multiple wavelengths has been well demonstrated [475, 406]. Single-qubit rotations and two-qubit entangling gates have been implemented with laser beams delivered through waveguides [475, 476], showing extreme robustness against vibrational noise. Furthermore, the integration of single-photon detectors on chip traps has been demonstrated recently [407, 408, 409], providing a scalable route to high-fidelity readout of multiple ion qubits on large-scale quantum processors. Conventional analog voltage sources have also been integrated on-chip [477], enabling an expandable approach to controlling the numerous ion-trap electrodes and laying the technical foundation for circuit integration in large-scale QCCD architectures.

_Outlook.--_ This section has briefly reviewed the significant advances in trapped-ion quantum computing over the past decades, from excellent control of several ion qubits to demonstrations of scalable architectures. With fully controllable trapped-ion processors, several important advances in QEC have been made recently [481], and further progress toward fault tolerance is anticipated in the following decades. Meanwhile, by leveraging the techniques developed for trapped-ion quantum computing, we might also gain better performance in ion-based precision measurement. With continuous development, trapped-ion systems will remain an important platform and tool for future quantum information applications.

## V Semiconductor spin qubits

_Introduction.--_ Spin qubits in semiconductors have made tremendous progress over the past few decades. Although most of the toolbox was built on GaAs quantum dots [491], the field experienced a revival after the host material shifted to silicon. Further momentum was gained from recent vital breakthroughs such as fault-tolerant quantum gates [492, 493, 494], rf-reflectometry spin readout [495, 496, 497, 498], spin-photon strong coupling [499, 500, 501, 502, 503], hot qubits [504, 505, 506], and cryo-CMOS control chips [507, 508]. Combined with the inherent scalability offered by the semiconductor industry and their small footprint, spin qubits are now well poised for the following milestones: quantum advantage over classical supercomputers, prototype machines for fault-tolerant quantum computing and QEC, and the hybridization of classical and quantum electronics.

_Qubit construction.--_ Semiconductor qubits are defined on the charge and spin degrees of freedom of carriers trapped in quantum dots or dopants. There are several types of qubits, such as spin qubits [10, 509, 510], charge qubits [511, 512], exchange-only qubits [513, 514, 44], hybrid qubits [515], and singlet-triplet qubits [516, 517]. A spin qubit is usually defined by the spin states of a single electron or hole trapped in a semiconductor quantum dot or a dopant in silicon (see Fig. 5) [10, 11], as \(|0\rangle=|\downarrow\rangle\) and \(|1\rangle=|\uparrow\rangle\), where \(|\downarrow\rangle\) and \(|\uparrow\rangle\) denote spin down and spin up, respectively.
Chosen states of an interacting multi-spin system can also be defined as a qubit, such as the singlet-triplet qubit (i.e., the singlet state \(|S\rangle=(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle)/\sqrt{2}\) and the unpolarized triplet state \(|T_{0}\rangle=(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)/\sqrt{2}\) of two exchange-coupled spins). In this review, we will focus on spin qubits, as they have become the central topic of the field in recent years, and only briefly introduce the other qubit types. Interested readers can find more information in the relevant references [508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521]. _Decoherence.--_ The development of semiconductor host materials and related fabrication technologies underpins progress in this field. The GaAs/AlGaAs heterostructure quantum well has been the key substrate for gate-defined quantum dots [529], on which this field accumulated many of its foundational building blocks, such as single-charge sensing, single-shot spin readout, and qubit operations and interactions, to mention only a few. Nevertheless, the nuclei of the host GaAs form a fluctuating magnetic field, namely the Overhauser field, which limits the spin dephasing time \(T_{2}^{*}\) in GaAs to the range of tens of nanoseconds [529, 510, 491]. Although GaAs quantum dots still stand out nicely as a demonstration platform for quantum simulation [530, 531] and quantum physics research [532, 533], the limited spin dephasing time hinders the development of high-fidelity quantum gates for quantum computing. A host material consisting of zero-nuclear-spin isotopes is necessary to obtain a long dephasing time. In his seminal paper 20 years ago, Bruce Kane emphasized that group IV materials are ideal options as they feature stable zero-nuclear-spin isotopes with high natural abundance [10]. The nuclear-spin-free isotope \({}^{28}\)Si, for example, can be accessed by purifying natural silicon. Another advantage of the silicon substrate is the long spin \(T_{1}\) times, which come from the different spin relaxation behavior compared with the group III-V materials [510]. Since \(T_{1}\) sets an upper bound on \(T_{2}\) for a spin system via the relation \(T_{2}\leq 2T_{1}\), a long \(T_{1}\) is the prerequisite for a long \(T_{2}\). Therefore, silicon spin qubits have become the workhorse of this field, and the mainly explored platforms include silicon metal-oxide-semiconductor (MOS) [523, 489, 525], silicon-on-insulator (SOI) [534, 488], dopants in silicon [535, 490, 525, 536], and the silicon-germanium heterostructures Si/SiGe [537, 492, 524, 536] and Ge/SiGe [538, 539, 541]. After extensive studies, \(T_{2}\) and \(T_{2}^{*}\) values competitive with the other major quantum computing platforms have been demonstrated in silicon spin qubits. Up to now, \(T_{1}\) values ranging from 160 milliseconds [522] to 30 seconds [542] and \(T_{2}^{\text{Hahn}}\) values ranging from 99 microseconds [524] to \(\sim\)1 second [489] have been observed in silicon dopant devices, Si/SiGe gate-defined systems, and Si-MOS systems (for a detailed comparison, please refer to Table 3). Other silicon spin systems also exhibit typically long coherence times, such as hole spins in SOI [488] and Ge/SiGe gate-defined quantum dots [539]. Long coherence persists even at 1.1 K\(\sim\)4.2 K in MOS [504, 505] and SOI devices [506].
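As a concrete reading of the relaxation bound mentioned above, using the numbers just quoted (a back-of-the-envelope check added here for illustration, not a claim from the cited works): for a donor electron spin with \(T_{1}=30\) s [542],
\[
T_{2}\;\leq\;2T_{1}\;=\;60\ \mathrm{s},
\]
whereas the best reported \(T_{2}^{\text{Hahn}}\) is only \(\sim\)1 s [489]; the large gap indicates that pure dephasing, rather than relaxation, is the practical limit on \(T_{2}\).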
_Gates.--_ Over the past years, prominent progress has been made in the quantum gate fidelities of semiconductor qubits. A single-qubit operation can be performed utilizing a few different mechanisms for different qubit types. For singlet-triplet qubits [517, 543], exchange-only qubits [513, 44, 514], and hybrid qubits [515], exchange coupling plays the key role. In comparison, electron spin resonance (ESR) [544, 545, 525] and electric dipole spin resonance (EDSR) [546, 547, 524, 538] are the main driving mechanisms for a spin qubit; they control the spin rotation by oscillating magnetic or electric fields, respectively. After years of persistent quest for fault-tolerant operations, single-qubit gate fidelities well beyond 99% have been demonstrated in donors [489], Si-MOS [485], and Si/SiGe [514, 524, 548] (see Table 3). Compared to single-qubit gate operations, two-qubit gates are more daunting. Exchange coupling, capacitive coupling, or hyperfine coupling can be utilized to realize a two-qubit gate. In singlet-triplet qubits, capacitively coupled two-qubit gates have been shown [550, 549]. Capacitively coupled two-qubit gates have been realized in charge qubits as well [512]. Figure 5: Representative semiconductor qubit systems. All the devices are presented with two panels, where a top panel shows the top view of the device, and a bottom panel shows the lateral structure corresponding to a white line cut of the active region in the top panel. Heterostructure quantum dots include (a) Si-MOS [485], (b) Si/SiGe [486], and (c) Ge/SiGe [487] systems, where Si-MOS and Si/SiGe are mainly used for electron qubits, and Ge/SiGe is a hole-qubit platform. Systems (a-c) belong to the gate-defined quantum dot category. In (a) Si-MOS, quantum dots are formed close to the silicon-oxide interface, with fabricated top gates providing lateral Coulomb confining potentials. On this device, an electron-spin-driving ESR antenna and a spin-readout single-electron transistor are integrated as well. (b) Si/SiGe quantum dots are formed in the middle silicon quantum well layer and sandwiched between SiGe layers on both sides. (c) Ge/SiGe is a hole-spin platform, using a Ge well to form a two-dimensional hole gas, combined with top gates to form hole quantum dots. (d) CMOS nanowire field-effect transistor [488], where quantum dots are formed in the silicon nanowire sitting on a buried silicon oxide layer (BOX), and the surrounding gates are fabricated using industrial microelectronics technology. It is a hole-spin system, and the dot potential well is formed at the valence band top. (e) and (f) are the donor qubits in silicon. They are electron-spin systems, where the host donor nuclear spin is also a key resource to encode qubits. (e) represents a donor in a MOS [489] system, where a phosphorus atom is implanted in a fabricated MOS device with a \({}^{28}\)Si layer to prolong the electron spin coherence time. (f) Donor device made with the STM lithography technique [490], where the donors can be placed with atomic precision and in-plane gates are formed by dense conducting phosphorus atoms in a lithographical manner. Top panel of (a) is adapted with permission from Ref. [486], Springer Nature Limited. Top panel of (b) is adapted with permission from Ref. [486], American Association for the Advancement of Science. Top panel of (c) is adapted with permission from Ref. [487], Springer Nature Limited. Top panel of (e) is adapted with permission from Ref. [489], Springer Nature Limited.
In spin qubits, the well-established two-qubit gate protocols, such as SWAP [516, 490], C-Phase [523], and C-ROT [486], all require exchange coupling. Since both capacitive and exchange coupling hinge on the charge degrees of freedom, charge noise couples into the system and poses a challenge for high-fidelity gate operations. Different methods have been pursued to realize high-fidelity two-qubit gates. A fixed exchange coupling was used for Si/SiGe quantum dots [492], where the unwanted rotation of the off-resonant states was removed by carefully matching the Rabi frequency \(f_{R}\) to the exchange coupling \(J\), and a 99.8% fidelity two-qubit gate was realized. The other two teams both utilized the tunability of the exchange coupling to perform CZ gates in the Si/SiGe platform. After detailed calibration and pulse optimization, two-qubit gate fidelities of \(F_{\rm CZ}=99.65\%\) [494] and \(F_{\rm CZ}=99.81\%\) [551] were demonstrated. In the silicon donor system, a two-qubit CZ gate with \(F_{\rm CZ}=99.37\%\) was shown on two donor nuclei with a shared electron [493] using a geometric gate and hyperfine coupling. Despite these milestone breakthroughs in fault-tolerant qubit gates, semiconductor qubits still have much room to improve their operation fidelities. Especially regarding the charge noise issue [552, 553, 524, 525], qubit host material engineering is necessary to obtain a purer environment with fewer nuclear spins, charge traps, and defects. Moreover, more sophisticated control methods or encodings could be combined, such as dressed qubits [554], global control [555], and profile-optimized pulses [556]. Besides, sweet spots in the qubit energy [552, 557] and composite pulse sequences [558] could also help against the charge noise. We stay optimistic about further improvements in gate fidelities from optimized device fabrication and control-level engineering. _Readout and initialization.--_ Reducing SPAM errors is as important as improving gate fidelities for pushing toward fault-tolerant quantum computing. Single-shot spin readout is vital, as some state initialization protocols can be performed by simply performing a readout. The key is to find a suitable state-to-charge conversion process, for which the Pauli spin blockade [559, 534], the related latching mechanism [560], and the Elzerman-type spin-dependent tunneling process [561] have been explored. Hyperfine coupling is a key anchor for nuclear spin readout in the dopant system, where the state information can be converted to a spin ESR signal [562]. The corresponding charge signal is then correlated with the capacitance difference, picked up by a single-electron transistor (SET) or quantum point contact (QPC), and amplified further [563]. In singlet-triplet qubits [559], a readout fidelity of 98.4% has been reported. For single spins, the readout fidelity has been pushed to \(F_{\rm M}=99.8\%\), beyond the fault-tolerant level [542]. Single-lead rf-reflectometry spin readout [564] is a pivotal technology to reduce the gate density for spin readout and is compatible with surface-code scalable architectures regarding the fan-out issue. It was demonstrated nearly simultaneously by four groups [495, 496, 497, 498]. Using this technique, readout fidelities above 98% were shown [497, 498], and a readout time of 6 \(\mu\)s was demonstrated [498], comparable to the gate operation speed.
To remedy the broadening of the Fermi surface and the related obstacles for the Elzerman protocol at high temperatures (1 K-4 K), the Pauli spin blockade, the latching mechanism [539, 560], and double-SET readout [565] are valuable approaches. Also, to improve the signal-to-noise ratio (SNR), the quantum-noise-limited JPA [566] and other amplification methods have been employed. Multiple-spin readout has been shown with a single-electron box [567] and by frequency multiplexing [568]. Several teams have already demonstrated enhanced readout fidelities for S-T qubits and spin qubits in a non-demolition manner [569, 570], which will find importance in fault-tolerant computing and in studying spin-state collapse problems in fundamental quantum mechanics. Moreover, cascade readout [571], dispersive spin readout [499, 534], triple-dot cavity dispersive readout [572], and ramped spin measurement [573] have been shown. In general, the spin readout techniques are relatively mature and ready for the future scaling-up stage. Further progress will focus on high-level multiplexing and integration with the qubit design in a scalable manner. Potential readout signal crosstalk also needs further investigation and engineering design. To facilitate spin initialization, besides the usual readout-assisted state initialization, a hot spot in the energy levels due to valley mixing [574, 521] or spin-orbit coupling can also be used to enhance the \(T_{1}\) relaxation rate. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Qubit type** & **Si-MOS** & **Si-SiGe** & **P donor n** & **P donor e** \\ \hline \(T_{1}\) & 2.6 s [521] & 160 ms [522] & 39 min [489] & 30 s [489] \\ \(T_{2}^{*}\) & 120 \(\mu\)s [523] & 20 \(\mu\)s [524] & 600 ms [489] & 268 \(\mu\)s [489] \\ \(T_{2}^{\rm Hahn}\) & 1.2 ms [523] & 100 \(\mu\)s [524] & 1.75 s [489] & 0.95 ms [489] \\ \hline \(T_{\rm single}\) & 2.4 \(\mu\)s [523] & 20 ns [524] & 24 \(\mu\)s [493] & 150 ns [525] \\ \(T_{\rm two}\) & 1.4 \(\mu\)s [485] & 103 ns [492] & 1.89 \(\mu\)s [493] & 0.8 ns [490] \\ \(F_{\rm 1RB}\) (\%) & 99.957(4) [526] & 99.861(5) [524] & 99.99 [527] & 99.95 [527] \\ \(F_{\rm 2RB}\) (\%) & 98.0(3) [485] & 99.51(2) [492] & 99.37(11)\({}^{\rm a}\) [493] & 86.7(2)\({}^{\rm b}\) [490] \\ \hline \(Q_{1}\)\({}^{\rm c}\) & 50 & 1000 & 25000 & 1800 \\ \(Q_{2}\)\({}^{\rm c}\) & 86 & 194 & 302\({}^{\rm d}\) & \(3.4\times 10^{5}\) \\ \(N_{\rm Q}\)\({}^{\rm e}\) & 2 [485] & 6 [528] & 2 [493] & 2 [490] \\ \(N_{\rm E}\)\({}^{\rm f}\) & 2 [485] & 3 [528] & 2 [493] & 2 [490] \\ \hline Env & \(B\sim 1.4\) T & \(B\sim 0.5\) T & \(B\sim 1\) T & \(B\sim 1\) T \\ & \(T<1.5\) K & \(T<1.5\) K & \(T<1.5\) K & \(T<1.5\) K \\ \hline Flying qubit & N/A & N/A & N/A & N/A \\ Footprint size & \(\sim 100\) nm & \(\sim 100\) nm & \(\sim 3\) nm & \(\sim 100\) nm \\ \hline \hline \end{tabular} \({}^{\rm a}\) CZ gate. \({}^{\rm b}\) \(\sqrt{\rm SWAP}\) gate. \({}^{\rm c}\) \(Q_{1}\equiv T_{2}^{*}/T_{\rm single}\) and \(Q_{2}\equiv T_{2}^{*}/T_{\rm two}\), where \(T_{\rm single}\) and \(T_{\rm two}\) are the times for single-qubit and two-qubit operations. \({}^{\rm d}\) \(T_{2}^{*}=570\) \(\mu\)s [489] is used here for a P nuclear spin with a bound electron. \({}^{\rm e}\) \(N_{\rm Q}\) is the demonstrated number of qubits with individual control. \({}^{\rm f}\) \(N_{\rm E}\) is the number of entangled qubits. \end{table} Table 3: Comparison of reported values of different qubits in silicon.
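To make the quality factors defined beneath Table 3 concrete (a simple reading of the tabulated values, added here for illustration): for the Si-SiGe column,
\[
Q_{1}=\frac{T_{2}^{*}}{T_{\rm single}}=\frac{20\ \mu\mathrm{s}}{20\ \mathrm{ns}}=1000,\qquad
Q_{2}=\frac{T_{2}^{*}}{T_{\rm two}}=\frac{20\ \mu\mathrm{s}}{103\ \mathrm{ns}}\approx 194,
\]
i.e., roughly a thousand single-qubit and two hundred two-qubit operations fit within one inhomogeneous dephasing time.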
_Scalability.--_ A unique advantage of silicon qubits comes from their industry backbone, very-large-scale integration (VLSI) technology. This advantage becomes more significant as integrated cryo-CMOS and hot-qubit techniques emerge. An industry-level CMOS-compatible hybrid quantum computing chip with a classical control unit and a quantum processing unit is becoming possible. In this spirit, progress has been made in recent years, such as Intel's Horse Ridge cryo-CMOS chip demonstrating qubit control fidelities rivaling room-temperature bulky control instruments [507]. A proof-of-principle experiment shows a low-temperature classical control unit bonded to a quantum unit [508]. Also, on the quantum side, CMOS-compatible "hot" qubits operated at increased temperature (\(T>1.5\) K) have been realized nearly simultaneously for electron spin [505, 504] and hole spin systems [506]. On the industry side, advanced fabrication technologies are touching down on qubit physics through the industrial foundries' mass production processes [575]. CEA-Leti [576], imec [577], and Intel [578] have all processed silicon qubits with 300 mm technology, and Intel has shown promising quantum dot uniformity and basic spin qubit operations. In the quest for fault-tolerant computing, scaling up will be an inevitable technological hurdle to overcome [579]. Several scaling architectures have been designed for phosphorus donor qubits [557, 580], Si/SiGe gate-defined dots [581], and MOS quantum dots [582, 583]. Also, research teams have realized multi-qubit devices and few-qubit algorithms, such as three qubits [584] and six qubits in Si/SiGe systems [528], where entangled states were shown. Similarly, a four-qubit device in the Ge/SiGe hole spin system has also made its debut, where a four-qubit GHZ state was demonstrated [539]. Meanwhile, methods for tuning multiple quantum dots using virtual gates [585] and automated machine learning [586, 587] have been developed, and multiplexed quantum dot readout has been realized [568, 588]. Toward QEC, a three-qubit phase error correction algorithm was carried out by two groups [25, 26]. Moreover, qubit networks could be a remedy for the dense packaging problem and could reduce the fan-out overhead [579]. Therefore, coupling spin qubits at a distance is necessary. Effective spin-spin coupling could be realized by using microwave cavities [502, 503], a mediating large quantum dot [589], spin-array state transfer [532], or surface acoustic waves [590]. For the cavity approach, strong coupling [591] and photon-mediated spin-spin interactions have been demonstrated [502, 503]. The next step would be to realize cavity-mediated two-qubit gates for spins at a distance and hence spin networks in a distributed manner. _Conclusion.--_ High-quality materials and advanced fabrication technologies are the cornerstones of semiconductor qubits. To realize higher gate fidelities, interfaces with low charge noise are critical, which could be achieved by importing industrial integrated-circuit techniques and encouraging a transfer from lab-level engineering to foundry-level fabrication. The typical overheads for scalable quantum computing, such as crosstalk, gate heating, and frequency crowding, should also be carefully considered. Subsequently, semiconductor spin qubits will join the other sophisticated quantum computing platforms for next-level applications, such as fault-tolerant operations and quantum simulations on intermediate-scale multi-qubit devices.
In summary, semiconductor qubits, especially silicon spin qubits, are well positioned for scaling up with all the above-mentioned breakthroughs and technological improvements. We are confident that by embracing their industrial advantages, silicon spin systems will speedily scale up their qubit numbers and join the next-level quantum computing endeavor together with the superconducting and ion trap platforms. ## VI NV Centers _Introduction.--_ The NV center is a point defect in diamond, where a nitrogen atom and a vacancy substitute for two adjacent carbon atoms along the quantization axis (assumed to be the \(\hat{z}\) axis), as shown in Fig. 6 (a). The negative charge state NV\({}^{-}\) is of greatest interest: it hosts five unpaired electrons originating from the nitrogen atom and the three carbon atoms, together with an additional electron captured from the environment. This six-electron system is equivalent to a spin \(S=1\) system, whose spin state can thus be employed as a qutrit, or as a qubit if only the \(|m_{\text{s}}=0\rangle\) and \(|m_{\text{s}}=-1\rangle\) energy levels are considered. The NV center is a promising candidate for quantum computing by virtue of the following merits. First, a single NV center can be optically resolved and located, and its polarization and measurement can be achieved with laser pulses. Second, the NV center has excellent coherence properties even at room temperature; at low temperature, it can be resonantly excited to enable efficient single-shot readout. Third, the nuclear spins near NV centers serve well as abundant memory qubits for solid-state quantum information processors. _Qubits and coherence.--_ The exceptional lifetime of NV electron spins even under ambient conditions is experimentally favorable for quantum computation. The inhomogeneous magnetic fluctuations due to the \({}^{13}\)C spin bath are the main noise source responsible for the dephasing of NV electron spins. However, with the widely used DD technique, the dephasing time can be extended from the order of microseconds (\(T_{2}^{*}\)) to milliseconds (\(T_{2}\)), where the quasi-static noise is mostly suppressed. In addition to the central electron spin, nearby nuclear spins are a rich resource for memory qubits. The relatively low gyromagnetic ratio of nuclear spins (around three orders of magnitude smaller than that of the electron spin) is mainly responsible for their extraordinarily long coherence time, up to \(T_{2}=2\) s at room temperature [592]. In addition to the \({}^{14}\)N and \({}^{13}\)C nuclear spins [593] (up to 27 spins nowadays [594]), researchers have been endeavoring to explore more available qubits in diamond, including P1 centers [595, 596, 597] and long-lived carbon nuclear-spin pairs [598] (\(T_{2}=1\) min and \(T_{1}>6\) min at 4 K [599]). Further improving the coherence time of NV electron spins relative to the timescale of these memory qubits is desired. Since \(T_{2}\sim\) ms at room temperature is limited by the spin relaxation time \(T_{1}\), a straightforward solution is to lower the temperature; \(T_{2}\) has been extended to 0.6 s at 77 K [600]. At 4 K, \(T_{2}\) has reached 1.5 s with carefully designed sequences decoupling unwanted interactions, and \(T_{1}\) has exceeded 1 h [598]. _Initialization and readout.--_ The NV center can be optically initialized and read out owing to the spin-dependent inter-system crossing (ISC) [601, 602] (see Fig. 6 (a), illustrated by grey dashed lines).
Thus, the electron spin can be initialized to \(|m_{\mathrm{s}}=0\rangle\) under continuous optical pumping. On the other hand, the ISC leads to a nonradiative transition through the singlet states, which enables discrimination of the spin states according to the fluorescence difference. Furthermore, at low temperature, resonant optical excitation allows high-fidelity single-shot readout of NV electron spins [603]. The preparation and readout fidelities have reached 99.9% [604] and 98% [605], respectively. The approach to initializing nuclear spins is less straightforward and varies with the coupling strength. Specifically, for \({}^{14}\)N and some strongly coupled \({}^{13}\)C nuclear spins (\(\geq 1/T_{2}^{*}\sim 10^{2}\) kHz) [606, 607], applying a well-aligned magnetic field of \(\sim 500\) Gauss leads to the excited-state level anti-crossing (esLAC) and the polarization of nuclear spins [608]. However, the esLAC fails to enable efficient polarization if the quantization axis of the hyperfine interaction in the excited state differs from that in the ground state, or if the nuclei-electron quantization axis differs from that of the NV itself [607]. On the other hand, strongly coupled nuclear spins are less abundant than weakly coupled ones, and hence researchers have focused more on weakly coupled \({}^{13}\)C nuclear spins. The initialization of weakly coupled \({}^{13}\)C nuclear spins employs a swap-like gate constructed from DD sequences (see below) [28]. In addition to the approaches discussed above, several strategies exist for different purposes. For example, dynamic nuclear spin polarization (DNP) is designed to polarize the whole \({}^{13}\)C spin bath by imposing the Hartmann-Hahn double resonance [609], and has been improved to be more robust [610]. Projective-measurement-based initialization is also preferred, especially in the case of simultaneous multiqubit or multidegree-of-freedom initialization with high fidelity [611, 597]. The single-shot readout associated with projective measurements not only provides an efficient way to polarize nuclear spins, but also enables direct tests of non-classical correlations and active feedback in QEC protocols. Single-shot readout of \({}^{14}\)N [612], weakly coupled nuclear spins [613], nuclear-spin pairs [599], and P1 centers [597] has been realized. _Gates.--_ Control techniques have been well developed to implement quantum logic gates with high precision as well as narrow pulse widths, where quantum optimization algorithms have been exploited. Combining composite pulses with a modified gradient ascent pulse engineering (GRAPE) algorithm has yielded record fidelities of 0.99995 for the single-qubit gate and 0.992 for the two-qubit gate [617], which almost hit the threshold required by QEC. Remarkably, this highly accurate two-qubit gate has a duration of 700 ns, three orders of magnitude shorter than the coherence time. Meanwhile, typical single-qubit gates are on the order of \(\sim 10\) ns, and gigahertz Rabi oscillations are also possible with proper design [618, 619]. Moreover, optimized ultra-fast single-qubit gates beyond the rotating-wave approximation have been realized with the chopped random basis (CRAB) quantum optimization algorithm, with fidelities for the \(\pi/2\) and \(\pi\) pulses being \(0.95\pm 0.01\) and \(0.99\pm 0.016\), respectively [620].
Manipulating multiple qubits while maintaining coherence is a crucial task for quantum computing. In particular, in hybrid systems, where the timescales of the components may differ, it is desirable to implement all control sequences before any component decoheres. Instead of relying on isotopically purified samples [621], an active way to extend the coherence time is the well-known DD technique, during which the quasi-static noise is flipped and canceled; DD can hence also be construed as a frequency filter [622] (a minimal numerical illustration of this filter picture is sketched below). In this sense, a DD sequence applied to NV electron spins enables the detection of the resonant frequencies corresponding to surrounding interactions [623, 624, 625]. Consequently, conditional two-qubit gates have been designed based on DD sequences to control \({}^{14}\)N [626], where an RF pulse acts equivalently to a transverse hyperfine coupling to drive the nuclear spin flip conditioned on the state of the electron spin. Similarly and subsequently, universal DD-based gates on weakly coupled nuclear spins have been achieved [627, 28], through which the nuclear spins can be initialized and measured via swap-like gates. Recently, an active phase compensation scheme named DDrf has freed the interpulse delay from its dependence on the hyperfine parameters and enabled optimizing the interpulse delay to protect electron coherence, eventually entangling up to seven nuclear spins in a ten-qubit register [628]. An alternative is to utilize decoherence-protected subspaces, where the evolution of quantum states is purely unitary [629, 630]. Moreover, geometric gates are intrinsically noise-resilient, as the dynamical phase vanishes. Both non-adiabatic [631, 632, 633] and adiabatic [634] universal (non-Abelian) [633] geometric gates have been demonstrated. Nevertheless, toward large-scale quantum technologies, QEC is expected to be a more essential strategy, encoding physical qubits subjected to deleterious environmental noise into reliable logical qubits. Armed with state-of-the-art multi-qubit control techniques, QEC protocols [27, 28] and a related work deploying robust coherent feedback control [635] have been realized. Most recently, fault-tolerant operations at the logical-qubit level have been achieved on a seven-qubit NV quantum processor, indicating a major step toward fault-tolerant quantum information processing [636]. _Quantum simulation and quantum algorithm.--_ Sophisticated control of spins in diamond promises rich applications in diverse fields. Various exotic physical phenomena have been simulated, such as emulations of tensor monopoles [638] and quantum heat engines [639], opening avenues for the exploration of fundamental physics. The NV center quantum simulator also expands the scope of experimental investigations of quantum topological phases [640, 641]. Ref. [642] proposed a feasible and universal approach to investigate non-Hermitian Hamiltonians in Hermitian quantum systems and observed parity-time symmetry breaking in an NV quantum simulator [637, 643]. Besides, simulations of non-Markovian dynamics of open systems [644], many-body localized discrete time crystals [614], and emergent hydrodynamics [645] are distinguished from other artificial platforms due to their real quantum nature and shed light on condensed-matter physics.
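As promised above, the frequency-filter picture of DD can be made concrete with a short numerical sketch (our own illustration using the standard filter-function construction, not code from the cited works). For ideal \(\pi\) pulses at times \(0<t_{1}<\dots<t_{N}<T\), the dephasing filter \(F(\omega)=\big|\sum_{j=0}^{N}(-1)^{j}(e^{i\omega t_{j+1}}-e^{i\omega t_{j}})\big|^{2}\) (with \(t_{0}=0\), \(t_{N+1}=T\)) is strongly suppressed at low frequencies, which is precisely how quasi-static noise is canceled:

```python
import numpy as np

def cpmg_pulse_times(n_pulses, T):
    """Ideal CPMG: pi pulses at t_k = (k - 1/2) * T / N."""
    return (np.arange(1, n_pulses + 1) - 0.5) * T / n_pulses

def filter_function(omega, pulse_times, T):
    """F(w) = |sum_j (-1)^j (exp(i w t_{j+1}) - exp(i w t_j))|^2, with t_0 = 0, t_{N+1} = T."""
    t = np.concatenate(([0.0], pulse_times, [T]))
    signs = (-1.0) ** np.arange(len(t) - 1)   # toggling-frame sign between pulses
    terms = signs[:, None] * (np.exp(1j * np.outer(t[1:], omega))
                              - np.exp(1j * np.outer(t[:-1], omega)))
    return np.abs(terms.sum(axis=0)) ** 2

T = 1.0
omega = np.linspace(0.1, 300.0, 3000)
for n in (1, 4, 16):                          # spin echo, CPMG-4, CPMG-16
    F = filter_function(omega, cpmg_pulse_times(n, T), T)
    print(f"N={n:2d}: F(0.1) = {F[0]:.2e}, passband peak near w = {omega[np.argmax(F)]:.1f}")
```

Increasing the pulse number \(N\) pushes the filter passband to higher frequencies, which is why longer DD sequences both prolong \(T_{2}\) and turn the electron spin into a narrow-band probe of the resonances mentioned above.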
On the other hand, many quantum algorithms have also been demonstrated, including the Deutsch-Jozsa algorithm [646], adiabatic quantum factorization [647], Grover's search algorithm with very high efficiency [648], quantum-enhanced machine learning [649], and resonant quantum principal component analysis [650]. Figure 6: (a) Physical system and level structure of the NV center. (b) The NV center and the nearby nuclear spins form a multi-qubit quantum information processor [614]. (c) An NV center with nuclear spins as memory qubits forms a node in a quantum network [615]. (d) The NV center as a quantum sensor [616]. _Quantum network.--_ With the help of flying photons, two remote NV nodes (\(>1\) km apart) with memory qubits can be entangled [605, 651, 652]. Combining entanglement distillation [653] and deterministic entanglement delivery with the more experimentally favorable single-photon scheme [654], a three-node quantum network with more than one long-lived memory qubit has been realized [615, 655]. _Quantum sensing.--_ Owing to the robustness and micro- or nanoscale features of diamond, NV centers have demonstrated the potential for high-sensitivity magnetic sensing in condensed-matter physics [656, 657, 658, 659, 660, 661] and biophysics [662, 663, 664]. Moreover, NV sensors are also capable of detecting electric fields [665, 666, 667, 668], temperature [669, 670], and pressure [671]. Recently, the boundaries of NV quantum sensing have been pushed into special or extreme-condition regimes, such as zero or low magnetic field [672, 673, 674], high pressure [674, 675, 676, 677, 678], and high temperature [679]. In parallel, quantum optimization algorithms [680] and QEC [681, 682, 683] have been incorporated into quantum metrology so as to improve the sensitivity in the presence of noise. _Outlook.--_ Scalability is an inescapable question that every candidate physical system for quantum computers should answer. There are two main challenges for NV-center-based quantum computing, namely device fabrication and multi-qubit control techniques. On the one hand, deterministic schemes yielding NV centers with satisfactory precision are desired to produce large-scale functional devices [684, 685]. Meanwhile, since the damage and noise induced by implantation may affect the coherence of the spin qubits [686], a trade-off solution for controllable production of NVs while preserving the coherence time should be developed. Accordingly, the demands on fabricating arrays of nanostructures, such as nanopillars [687] and parabolic reflectors [688], which significantly enhance the collection efficiency, are also stringent. Additionally, it is also inspiring to explore the photocurrent-based electric readout of NV signals [689, 690]. On the other hand, with the growth of qubit number, the mechanisms of noise in the system become more and more complicated. Techniques are needed that integrate high-precision multi-qubit control with decoupling methods that suppress errors and crosstalk between multiple qubits. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline **Property** & **Parameter** & **Qubit** & **Value** & **Condition** & **Reference** \\ \hline \hline \multirow{8}{*}{**Coherence**} & & e & 36 \(\mu\)s & 506 G, 300 K & Ref. [637] \\ \cline{3-6} & \multirow{2}{*}{\(T_{2}^{*}\)} & N & 25.1 ms & & Ref. [628] \\ \cline{3-6} & & \({}^{13}\)C & 17.2 ms & 403 G, 3.7 K & Ref. [628] \\ \cline{3-6} & & \({}^{13}\)C-\({}^{13}\)C pair & 1.9 min & & Ref. [599] \\ \cline{2-6} & \multirow{2}{*}{\(T_{2}\) (echo)} & e & 1.8 ms & 690 G, 300 K & Ref. [621] \\ \cline{3-6} & & N & 2.3 s & 403 G, 3.7 K & Ref. [628] \\ \cline{3-6} & & \({}^{13}\)C & 770 ms & & Ref. [628] \\ \cline{2-6} & \multirow{2}{*}{\(T_{1}\)} & e & \(>1\) h & 403 G, 3.7 K & Ref. [598] \\ \cline{3-6} & & \({}^{13}\)C & \(>6\) min & & Ref. [628] \\ \hline **Gate time** & Single-qubit & e & \(<10\) ns & 850 G, 300 K & Ref. [618] \\ \cline{2-6} & Two-qubit & e-N & 700 ns & 513 G, 300 K & Ref. [617] \\ \hline **Gate fidelity** & Single-qubit & e & 99.995\% & \multirow{2}{*}{513 G, 300 K} & Ref. [617] \\ \cline{2-6} & Two-qubit & e-N & 99.2\% & & \\ \hline \end{tabular} \end{table} Table 4: Reported values on the NV center quantum platform. ## VII NMR system _Introduction.--_ NMR spectroscopy is a powerful and widely used analytical tool for the structural characterization of various organic matter. For nearly eighty years, it has spawned numerous scientific and technological applications in diverse areas of physics, chemistry, and life science.
At the end of the twentieth century, motivated by strong interest in quantum information science, the idea arose of using liquid-state NMR to construct a quantum computer [7, 8, 9]. It was found that NMR is capable of emulating many of the capabilities of quantum computers, including unitary evolution and coherent superposition. Indeed, NMR quantum computing soon became one of the most mature technologies for implementing quantum computation [691, 692]. For instance, as early as 2001, researchers at IBM reported the first successful implementation of Shor's algorithm on a 7-qubit liquid-state NMR quantum computer [693]. Based on its well-established experimental technologies, NMR has now achieved universal control of up to 12 qubits [694, 695, 696], and allows investigation of a wide range of quantum information processing tasks, such as quantum simulation, quantum control, quantum tomography, and quantum machine learning. In the following, we briefly introduce the basics of NMR quantum computation and its impressive achievements. _Basic Principle.--_ In order to physically realize quantum information processing, it is necessary to find ways of representing, manipulating, and coupling qubits to implement non-trivial quantum gates, prepare a useful initial state, and read out the answer. _Qubit.--_ NMR quantum computation uses spin-1/2 nuclei in molecules to encode qubits. Due to the Zeeman effect, a spin-1/2 placed in an external magnetic field has two possible orientations, spin up \(\ket{\uparrow}\) and spin down \(\ket{\downarrow}\), which naturally offers a two-level system, i.e., a qubit. In choosing a sample to be a quantum register, one property must be satisfied: the spins should be distinguishable in frequency to allow individual qubit addressability. Heteronuclear molecules are straightforward, because different types of nuclei have different gyromagnetic ratios and can be easily distinguished. In homonuclear molecules, although the nuclei have the same Zeeman splitting, they may sit in different electronic environments, and the resulting nuclear shielding effect induces different frequency shifts. In practice, the precession frequencies of such nuclei can still differ substantially, so it is best to choose such nuclei to form the quantum register.
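In standard NMR notation (implicit in the discussion above, written out here for concreteness): a nucleus with gyromagnetic ratio \(\gamma\) in a static field \(B_{0}\) precesses at the Larmor frequency, and nuclear shielding by the electronic environment shifts it,
\[
\omega_{0}=\gamma B_{0},\qquad \omega_{i}=\gamma\,(1-\sigma_{i})\,B_{0},
\]
so two homonuclear spins are individually addressable whenever their shielding constants \(\sigma_{i}\) differ by more than the relevant linewidths.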
Fig. 7 shows the schematic of an NMR spectrometer and some commonly used molecules for encoding qubits. _Initialization.--_ Conventionally, quantum computation starts from a pure state with all qubits initialized to the computational basis vector \(\ket{0}\). However, due to the low polarization of the NMR spin ensemble at room temperature, it is practically rather difficult to obtain a genuine pure state. Alternatively, one can use the concept of the pseudo-pure state (PPS) as a substitute [7]. A PPS is a mixture of the maximally mixed state and a pure state, and thus behaves like that pure state under quantum gates and quantum measurements. Preparing a PPS from the thermal equilibrium state necessarily involves non-unitary operations, which can be realized by applying gradient field pulses or utilizing relaxation effects. Currently, there exist a number of methods for PPS preparation, such as spatial averaging [698], line-selective excitation [699], and labeled PPS [694, 700]. Ref. [701] analyzed and compared the efficiencies of these methods based on the theory of optimal bounds on state transfer under quantum channels. Overall, the PPS has proven to be a convenient and useful tool for small-scale NMR quantum computation; yet as the number of qubits grows, there is a significant scalability challenge, i.e., the achievable purity of the PPS scales very unfavorably. Approaches that attempt to address this scalability issue include algorithmic cooling [702] and parahydrogen-induced polarization [703], which have demonstrated the ability to prepare NMR spin systems with purities even above the entanglement threshold. _Operation.--_ One-qubit gates are simply rotations on the Bloch sphere, which can be easily implemented in NMR with soft radio-frequency pulses. Soft pulses usually have predefined shapes, such as the Gaussian waveform [704]. They contain energy only within a limited frequency range, and thus can selectively excite the spins located in that range. Therefore, a natural way to implement a single-qubit gate is to use a resonant, rotating Gaussian pulse with sufficient selectivity. One should be careful, however, that when going back to the lab frame there may be phase errors that must be compensated to obtain the genuine target gate [705]. In NMR, a two-qubit gate is realized by making use of the natural \(J\)-coupling between the nuclei. For multi-qubit gates, since all the \(J\)-couplings between the spins evolve simultaneously, one has to design refocusing schemes, composed of sequences of \(\pi\) pulses, to effectively turn off the unwanted \(J\)-coupling terms [706]. The usual way to quantify the level of coherent control is the randomized benchmarking protocol. Using randomized benchmarking, an average error rate for one- and two-qubit gates of \((4.7\pm 0.3)\times 10^{-3}\) on a three-qubit system was reported [707]. Another work used a unitary 2-design and a twirling protocol to estimate the average fidelity of Clifford gates on a seven-qubit NMR processor, finding an average experimental fidelity of 55.1% [708]. NMR has also explored other types of quantum gate implementations, such as geometric quantum computation [709, 710]. Finally, we remark that NMR also provides non-unitary control means, such as gradient field pulses and phase cycling, which are modeled as random unitary channels and can be used to destroy unwanted coherences.
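Two of the ingredients above can be written out explicitly (standard textbook forms, added here for concreteness rather than quoted from the cited works): an \(n\)-qubit PPS has the form
\[
\rho_{\rm PPS}=\frac{1-\epsilon}{2^{n}}\,\mathbb{1}+\epsilon\,|\psi\rangle\langle\psi|,
\]
which transforms exactly like \(|\psi\rangle\) under unitaries because the identity part is invariant; and, in the weak-coupling regime, free evolution under the \(J\)-coupling term
\[
H_{J}=\frac{\pi J}{2}\,\sigma_{z}^{(1)}\sigma_{z}^{(2)}\ (\hbar=1)\quad\text{for a time }t=\frac{1}{2J}
\]
yields \(e^{-i(\pi/4)\,\sigma_{z}^{(1)}\sigma_{z}^{(2)}}\), a controlled-phase gate up to single-qubit \(z\) rotations.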
_Measurement.--_ NMR measurement is implemented by observing the free induction decay (FID) of the transverse magnetization of the spins with a detection coil wound around the sample. The recorded time-domain FID signal is Fourier transformed to obtain a frequency-domain spectrum, which is then fitted to extract information about the spin state. Unlike projective measurements in other quantum systems, in NMR, which is in fact an ensemble system, one directly measures the expectation value of a single coherent Pauli observable. In order to measure Pauli operators other than the directly observable single-quantum coherences, an appropriate readout pulse must be applied to the spin state before acquiring the FID signal. To estimate an unknown quantum state, that is, to perform quantum state tomography, one needs to measure a complete set of basis operators. However, this is generally a challenging task, since the number of degrees of freedom to be determined grows exponentially with system size. _NMR quantum control.--_ Over 50 years of development, researchers have built up abundant pulse control techniques for NMR spin systems, such as frequency-selective pulses, composite pulses, refocusing schemes, and multiple-pulse sequences, to name a few. These pulse techniques originated from the demand for precise spectroscopy of complex molecules, and continue to be useful for NMR quantum information processing experiments [706]. Despite this, it is still desirable to further improve NMR control techniques to realize gates of sufficiently high fidelity to fulfill the fault-tolerant quantum computation requirement. The interdisciplinary field of NMR and quantum control theory therefore naturally arose, resulting in novel and more efficient pulse design and optimization techniques. For small systems, one can employ time-optimal control theory to shorten gate times and thereby reduce decoherence effects [711, 712]. For relatively larger systems, it is usually hard to derive analytical control solutions, and one needs to resort to numerical means. One of the most successful approaches in this regard is the GRAPE technique developed in Ref. [713], which is flexible, easy to use, and can produce smooth, optimal, and robust shaped pulses. GRAPE and its many variants have found broad applications not just in NMR but also on other experimental platforms. However, these numerical approaches are intrinsically unscalable: it is a rather resource-consuming task to simulate controlled quantum evolution with a classical computer, even for a system with over tens of qubits. One possible approach to overcome this problem is subsystem-based quantum optimal control [705]. Another promising strategy is the hybrid quantum-classical version of GRAPE, which employs a quantum simulator to efficiently simulate the controlled evolution [714]. This is essentially a closed-loop strategy, and it has been experimentally tested first on a seven-spin system [714] and later on a twelve-spin system [696] to create multiple-correlated spin states. Besides scalability, noise is another major obstacle to high-fidelity quantum control, which can be addressed by robust control or open-quantum-system control. For example, more advanced DD sequences have been put forward in solid-state NMR, resulting in much-improved robustness against different types of experimental errors while retaining good decoupling efficiency [715, 716].
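To convey the flavor of GRAPE-style pulse optimization, here is a self-contained toy sketch (our own illustration with made-up Hamiltonians, segment counts, and step sizes; it uses a finite-difference gradient for simplicity rather than the analytic gradient that makes the actual GRAPE algorithm of Ref. [713] efficient):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a target gate: a pi rotation about x (an X gate up to global phase)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
U_target = expm(-1j * np.pi * sx / 2)

N, dt = 20, 0.05               # piecewise-constant segments and segment duration
controls = [sx / 2, sy / 2]    # two control Hamiltonians (x and y drives)

def propagate(u):
    """Total propagator for control amplitudes u[j, k] (control j, segment k)."""
    U = np.eye(2, dtype=complex)
    for k in range(N):
        H = sum(u[j, k] * controls[j] for j in range(len(controls)))
        U = expm(-1j * H * dt) @ U
    return U

def fidelity(u):
    """Global-phase-insensitive gate fidelity |Tr(U_target^dag U)|^2 / d^2."""
    return abs(np.trace(U_target.conj().T @ propagate(u))) ** 2 / 4

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((2, N))   # random initial pulse amplitudes
eps, lr = 1e-6, 5.0
for _ in range(200):                    # gradient ascent on the gate fidelity
    grad = np.zeros_like(u)
    for idx in np.ndindex(*u.shape):    # finite-difference gradient, entry by entry
        du = np.zeros_like(u)
        du[idx] = eps
        grad[idx] = (fidelity(u + du) - fidelity(u - du)) / (2 * eps)
    u += lr * grad
print(f"optimized gate fidelity: {fidelity(u):.6f}")
```

In GRAPE proper, the gradients with respect to all segments are obtained analytically from one forward and one backward propagation, which is what keeps the cost manageable as the spin system grows.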
It is worth mentioning that the control methods discussed above, such as composite pulses, GRAPE, spin echo, and DD, though first developed in NMR, are by no means restricted to it. Indeed, many of these methods have already been successfully applied to other physical systems. It is therefore fair to say that NMR is an excellent platform and testbed for developing quantum control methods [705]. _NMR quantum processor.--_ The NMR field has well-established quantum control methods and experimental technologies, enabling a series of influential fundamental and applied studies in quantum computing, quantum simulation, quantum cloning [717, 718, 719], QEC [720, 721], quantum thermodynamics [722, 723, 724, 725], quantum contextuality [726], etc. In the following, for brevity, we only review a few developments related to quantum algorithms, quantum simulation, and quantum learning. _Quantum algorithm.--_ Since the early stage of NMR-based quantum computing, experimental realizations of some of the well-known quantum algorithms have been reported, such as the Deutsch-Jozsa algorithm [727] on a two-qubit carbon-13-labeled chloroform molecule, Grover's search algorithm [728] on another two-qubit sample, partially deuterated cytosine, the QFT algorithm on a three-qubit sample [729], and Shor's quantum factoring algorithm [693] on a seven-qubit system. Figure 7: (a) Schematic of a high-field liquid-state NMR spectrometer [697], which can be used for quantum computation. (b) A list of some commonly used molecules for nuclear spin quantum registers, ranging from two-qubit to twelve-qubit samples. The labels indicate the nuclei \({}^{13}\)C, \({}^{1}\)H, or \({}^{19}\)F (all having spin number 1/2) that are chosen as qubit candidates. _Quantum simulation.--_ NMR has been used as a quantum simulator to explore a variety of interesting quantum phenomena, ranging from quantum many-body physics and quantum chemistry to biology and even cosmology. Simulating the equilibrium and non-equilibrium dynamics of quantum many-body systems is one of the most fascinating topics in the field of quantum simulation, and NMR is well suited for this task. For example, a three-spin frustrated magnet was simulated with NMR, in which the phase of the system as a function of magnetic field and temperature was explored [730]. The phase diagram of the ground state of a Hamiltonian with three-body interactions was simulated [731], and the phase transition of a long-range coupling model was first observed by monitoring Lee-Yang zeros [732]. In other work, the authors employed a four-qubit NMR simulator to explore the use of out-of-time-order correlators to probe quantum information scrambling [733] and equilibrium or dynamical quantum phase transitions [734] in a chaotic Ising chain model. NMR has also found applications in various chemistry problems by directly simulating molecules or chemical reactions, such as computing the ground-state energy of a hydrogen molecule [735], finding the energy spectrum of a water molecule [736], and exploring prototype laser-driven isomerization reaction dynamics [737]. Besides, the NMR quantum simulator can also be used to investigate topological orders by simulating the ground state of a topological Hamiltonian [738, 739, 740, 741].
_Quantum machine learning.--_ NMR has been one of the experimental platforms on which quantum machine learning algorithms can be demonstrated, and it is in the initial stages of exploring the use of quantum machine learning to directly process classical image information. For instance, a hand-written image recognition task discriminating between the digits 9 and 6 was realized by implementing a quantum support vector machine on a four-qubit NMR processor [742]. The boundary that separates different regions of an image was detected experimentally by implementing a quantum image processing algorithm [743]. Quantum principal component analysis, an important tool for pre-processing data in machine learning, has also been experimentally implemented on NMR for the first time for small-scale human face recognition tasks [744]. _Outlook.--_ Primary challenges for liquid-state NMR quantum computation include the lack of appropriate molecules to serve as quantum registers, the unavailability of high-purity quantum states and quantum resources such as entanglement, and the difficulty of achieving scalable, high-fidelity control of large spin systems. One approach that may overcome some of these limitations is to shift to solid-state NMR. Solid-state NMR has already been used to demonstrate a quantum heat engine [745], explore many-body localization [746, 747], and observe prethermalization [748, 749]. Other promising approaches closely related to NMR are the silicon-based nuclear spin quantum computer, a hybrid between the quantum dot and NMR [10], and the recent technology of nuclear electric resonance [750]. Finally, while NMR has intrinsic difficulty in becoming a scalable route to large-scale quantum computation, the many lessons learned in the past decades' research are very likely to be relevant for advancing the development of other quantum technologies. ## VIII Neutral atom arrays Over the past two decades, deterministically prepared neutral atom arrays have emerged as a promising platform for quantum computing and quantum simulation [751, 752, 753, 754, 755]. Controlled interactions between atomic qubits are mediated by the long-range dipole-dipole interactions via Rydberg states. These long-range Rydberg interactions allow engineering specific quantum Hamiltonians and performing analog quantum simulations straightforwardly. They are also the workhorse for constructing digital gates and realizing arbitrary physical models. Most experiments to date focus on the alkali atoms Rb and Cs, which have a single valence electron and can be easily laser-cooled and manipulated. In recent years, the alkaline-earth(-like) elements Sr and Yb, with two valence electrons, have attracted growing attention due to appealing features such as narrow and ultra-narrow optical transitions and magic-wavelength optical traps for Rydberg states [756, 757, 758, 759]. The schematic diagram of atomic qubits and their pros and cons are summarized in Fig. 2(e). _Scalability.--_ A neutral atom quantum computer is based on an array of single atoms localized in optical tweezers, as depicted in Fig. 8. Quantum information is encoded in the electronic spin states of alkali atoms or the nuclear spin states of alkaline-earth(-like) atoms. Neutral atom platforms have a notable advantage in scalability, as it is relatively easy to expand the number of atomic qubits. In a microscopic optical tweezer, either one atom or zero atoms are trapped, each with a probability of roughly 50%, due to light-assisted collisions [760, 761].
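A quick estimate shows why the deterministic rearrangement discussed next is essential (simple arithmetic added here for illustration): with an independent loading probability \(p\) per tweezer, the chance that all \(N\) sites are filled simultaneously is
\[
P_{\text{defect-free}}=p^{N},\qquad\text{e.g.}\quad 0.5^{100}\approx 8\times 10^{-31},
\]
and even an improved \(p\approx 0.9\) still gives only \(0.9^{100}\approx 3\times 10^{-5}\) for a 100-site array.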
The stochastic loading efficiency has been enhanced to 80%-90% for alkali species and 96% for alkaline-earth(-like) species by accurately tuning parameters under blue-detuned lasers and using artful cooling techniques [762, 763, 764, 765, 766]. After probabilistic loading into tweezers, single atoms can be rearranged into defect-free arbitrary patterns using a real-time control system and dynamically moving tweezers [767, 768, 769]. To date, large-scale platforms consisting of more than 100 neutral atoms have been created, such as an array with an average number of 110 \({}^{133}\)Cs atoms [770], defect-free square and triangular arrays of 196 and 147 \({}^{87}\)Rb atoms [771], and a defect-free programmable array of up to 256 \({}^{87}\)Rb atoms [772]. The rearrangement process takes a total time of hundreds of milliseconds and results in a high filling fraction of up to 99% [771, 772]. In the rearrangement of larger arrays, atom losses from the tweezers will limit the filling fraction, as the rearrangement time increases with the system size. The typical trap lifetime is about tens of seconds due to collisions with the background gas in a vacuum chamber. In order to reduce the residual gas pressure, optical tweezers can be placed in a cryogenic environment at a temperature of a few kelvins. The trapping of single Rb atoms in cryogenic arrays of optical tweezers has been demonstrated with a measured lifetime of up to 6000 s, a 300-fold improvement compared to the room-temperature setup [773]. In this cryogenic experimental setup, large arrays consisting of more than 300 \({}^{87}\)Rb atoms have been realized with an unprecedented probability of \(\sim 37\%\) of preparing defect-free arrays [774]. We anticipate that the number of atomic qubits in a neutral atom processor will be increased from hundreds to thousands, and that several individual processors will eventually be coupled together with atom-photon interconnects. _Initialization and readout.--_ Atomic qubits encoded in internal states can be initialized using optical pumping techniques. The estimated preparation fidelity for single alkali atoms is \(F>99.5\%\) [775]. A widely used technique for qubit readout relies on the state-selective ejection of neutral atoms: when illuminated with a resonant laser pulse, atoms in one state are pushed out of the optical tweezers, whereas atoms in the other state are not affected and remain trapped. Subsequently, the trapped atoms are detected by collecting laser-induced fluorescence, which is not state-selective. The typical measurement fidelity is \(F>98\%\) [775]. Atom loss in this technique prevents measuring qubits in the middle of a quantum circuit execution. As an alternative, for lossless fluorescent state detection, only a small number of photons should be scattered, to minimize atom heating. This approach has been demonstrated for one qubit and for multiple qubits in optical tweezers [776, 777, 778, 779]. In the future, atom losses due to heating should be reduced to a level that allows implementing repetitive QEC for quantum computation. _Gates.--_ We summarize the state-of-the-art gate performance in Table 5. Single-qubit operations are performed through microwave or optical spectroscopy. In large atom arrays, a universal approach for single-site addressing is focused-laser-beam scanning or the application of static field gradients. The gate operation time is \(t_{1}=0.1-10\;\mu\)s. Single-qubit gate errors are caused by fluctuations of the pulse amplitude and detuning [775, 780].
An average fidelity of \(F_{1}=99.83\%\) was measured in the \({}^{133}\)Cs experiment using randomized benchmarking [781]. Recently, the gate fidelities for \({}^{87}\)Rb atoms have been enhanced by using composite pulse sequences, which make gate errors highly insensitive to pulse errors [782]. The estimated gate errors are about \(3\times 10^{-4}\). For alkaline-earth(-like) atoms, an average single-qubit gate error of \(5.2\times 10^{-3}\) has been extracted [766]. Controlled interactions between neutral atoms are a fundamental requirement for entangling particles. One strategy is to implement local entangling operations via ultracold spin-exchange interactions, which has been demonstrated with two individual atoms in movable tweezers [783]. However, it is a great challenge to overlap the atomic wavefunctions in neutral atom arrays. Another strategy is based on the strong and long-range dipole-dipole interactions between Rydberg atoms: when two atoms are in close proximity, two-qubit gates and entanglement have been realized via the Rydberg blockade effect [784, 785]. Most researchers favor the latter strategy owing to its feasibility. The corresponding gate time is \(t_{2}=0.4\)-\(2\;\mu\)s. For two-qubit gates via Rydberg interactions, the dominant sources of gate error are the ground-Rydberg Doppler dephasing, the spontaneous emission from the intermediate state in the two-photon excitation process, and the excitation laser phase noise [775]. In \({}^{87}\)Rb atom arrays, two-qubit entanglement fidelities of \(F_{2}\geq 97.4\%\) have been extracted by suppressing Rydberg laser phase noise via a reference cavity [786, 787]. In an array of \({}^{171}\)Yb atoms, a two-qubit gate with a fidelity of \(F_{2}=83\%\) was first demonstrated in [788]; there, the gate error is attributed to Raman scattering from the gate beam and autoionization from a small Rydberg population. For two individually trapped \({}^{88}\)Sr atoms, a Bell state has been created with a high fidelity of \(>99.5\%\), with qubits encoded in a metastable state and a Rydberg state [789]. In the Rydberg excitation process, one must consider the effect of the different trapping potentials for the ground and Rydberg levels. In experiments with single \({}^{87}\)Rb or \({}^{133}\)Cs atoms, the tweezers are turned off for a short duration to mitigate anti-trapping of the Rydberg states. Atom losses and heating limit the Rydberg excitation time and the number of excitation loops; when using \(0.5\;\mu\)s drops for each two-qubit gate, hundreds of drops can be made before atom loss becomes significant [782]. It should be noted that the ion-core polarizability of alkaline-earth(-like) atoms can be used to trap Rydberg states in conventional, red-detuned optical tweezers. Figure 8: Schematic of a neutral atom quantum computer. In a microscopic tweezer array, single atoms are rearranged into defect-free arbitrary patterns. Atomic qubits can be encoded in electronic spin states or nuclear spin states. Single-qubit operations are performed through microwave or optical spectroscopy. Two-qubit gates and entanglement are realized based on long-range Rydberg interactions. EMCCD, Electron Multiplying Charge-Coupled Device. SLM, Spatial Light Modulator. 2D AOD, Two-Dimensional Acousto-Optic Deflector.
The Rydberg states of single \({}^{174}\)Yb atoms have been stably trapped by the same red-detuned optical tweezer that also confines the ground state [790]. The interaction time of Rydberg states can therefore be extended for alkaline-earth(-like) atoms. _Coherence.--_ Neutral atoms are well isolated from the environment and exhibit long coherence times. For Rb and Cs atoms [782, 791], the hyperfine qubit relaxation time is about \(T_{1}\sim 4\;\mathrm{s}\), limited by spontaneous Raman scattering of photons from the trapping laser. The inhomogeneous dephasing originates from the energy distribution of the trapped atoms; the typical dephasing time is \(T_{2}^{*}\sim 4\;\mathrm{ms}\). For the homogeneous dephasing, common mechanisms are intensity fluctuations of the trapping laser, magnetic field fluctuations, and heating of the atoms. A homogeneous dephasing time of \(T_{2}^{\prime}\sim 1\;\mathrm{s}\) has been observed using XY8 and XY16 DD sequences [792, 716]. Analyzing these mechanisms shows that the differential light shift of the qubit states is at the root of the dephasing. A magic-intensity trapping technique allows mitigating the differential light shift: the coherence time has been enhanced to \(225\;\mathrm{ms}\), with an extracted inhomogeneous dephasing time of \(T_{2}^{*}\sim 1.5\;\mathrm{s}\) [793]. In comparison with electronic spin qubits, nuclear spin qubits in alkaline-earth(-like) atoms are robust to perturbation by the optical tweezers. The estimated coherence times of single \({}^{87}\)Sr atoms are \(T_{1}\gg 10\;\mathrm{s}\), \(T_{2}^{*}=21\;\mathrm{s}\), and \(T_{2}^{\prime}=40\;\mathrm{s}\) in the spin-echo process [794]. In addition, the coherence properties of single \({}^{171}\)Yb atoms have been measured [766, 788], as listed in Table 5. _Digital quantum operations.--_ Digital gate-based circuits on programmable neutral atom processors have been demonstrated by two experimental groups. In Ref. [791], researchers at Wisconsin employed an architecture based on individual addressing of single atoms with tightly focused beams. Quantum circuits were decomposed into global microwave rotations, local phase rotations, and local two-qubit CZ gates. Scanning Rydberg excitation beams enabled coherent and simultaneous addressing of pairs of atoms. On this platform, researchers demonstrated the preparation of GHZ states with up to 6 qubits, a quantum phase estimation algorithm for a chemistry problem, and QAOA for the maximum-cut graph problem. In Ref. [782], researchers at Harvard employed another architecture, based on the coherent transport of entangled neutral atoms. Two-qubit CZ gates were implemented in parallel by two global Rydberg laser beams; entangled qubits were then coherently transported to change the connectivity and perform the next layer of quantum operations. This architecture was used to generate a 12-qubit cluster state, a 7-qubit Steane code state, and topological surface- and toric-code states. Finally, the researchers realized a hybrid analog-digital evolution and measured the entanglement entropy. These results represent a key step toward realizing a quantum computer with neutral atoms. _Analog quantum operations.--_ Combined with the wide tunability of the array geometry, Rydberg atom arrays are suitable for implementing various Hamiltonians [753, 754, 755]. When the spin states are encoded in the ground level and the Rydberg level, quantum Ising-like models are obtained.
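Explicitly, with spin-up identified with the Rydberg state \(|r\rangle\) and spin-down with the ground state \(|g\rangle\), the resulting Hamiltonian takes the standard Rydberg Ising-type form (written out here for reference; this is the common convention used in the experiments cited below):
\[
\frac{H}{\hbar}=\frac{\Omega}{2}\sum_{i}\sigma_{x}^{i}-\Delta\sum_{i}n_{i}+\sum_{i<j}\frac{C_{6}}{r_{ij}^{6}}\,n_{i}n_{j},
\]
with Rabi frequency \(\Omega\), laser detuning \(\Delta\), Rydberg projector \(n_{i}=|r\rangle\langle r|_{i}\), and van der Waals interactions decaying as \(r_{ij}^{-6}\); tuning \(\Omega\), \(\Delta\), and the array geometry realizes the quench and sweep experiments described next.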
In a one-dimensional chain with up to 30 atoms and a \(7\times 7\) atom array, the excitation dynamics and the pair correlation functions of quantum Ising models were explored after suddenly switching on the Rydberg excitation pulse [796, 797]. Similar quench dynamics were also studied in linear and zigzag chains [798]. For three-dimensional arrangements of Rydberg atoms, quantum Ising Hamiltonians mapped onto various connected graphs were constructed with tens of spins [799, 800]. Sweeping the Rydberg excitation detunings allows probing richer many-body dynamics. Quantum phase transitions into \(\mathbb{Z}_{n}\) ordered phases and the critical dynamics were demonstrated in a one-dimensional chain with tunable interactions [801, 802]. Antiferromagnetically ordered states were further explored in two-dimensional arrays with up to hundreds of atoms [771, 772, 803]. Although Rydberg interactions generally lead to thermalization in many-body systems, it was found that quantum many-body scars avoid rapid thermalization when the two-dimensional atom array is prepared in the antiferromagnetic initial state [804]. Besides the Ising-like models mentioned above, recent works include observing topological phases in a quantum dimer model and a Su-Schrieffer-Heeger model [805, 806], engineering the XXZ spin model using a periodic external microwave field [807], and investigating quantum optimization algorithms for solving the maximum independent set problem [795].

_Outlook.--_Recent breakthroughs in Rydberg atom arrays exhibit the ability to study many-body physics and realize highly programmable and scalable quantum computing. The primary challenges for this platform are achieving higher two-qubit gate fidelity, quantum nondemolition measurements, and low crosstalk between individual qubits in a large array. The two-qubit gate fidelity can be improved by further cooling alkali atoms via Raman sideband cooling [808, 809] or by using alkaline-earth(-like) elements, which have exhibited excellent gate performance. Combined with lossless fluorescent state detection, introducing a second atomic element allows monitoring quantum processors via quantum nondemolition couplings to auxiliary qubits [810]. In addition, a dual-element platform enables low-crosstalk manipulation of the homonuclear and heteronuclear interactions when increasing the system size [811].

## IX Photonic quantum computing

_Introduction.--_Amongst all platforms for quantum computing, the photon has several unparalleled advantages: i) it is one of the best candidates for room-temperature quantum computing, owing to its inherently weak coupling to the surrounding environment; ii) it is a natural interface for distributed quantum computing, acting as a flying qubit to connect many quantum nodes; and iii) it is compatible with CMOS technologies, bringing optical quantum computing to a new cutting-edge stage. Optical quantum computing dates back to 2001, when Knill, Laflamme and Milburn (KLM) pointed out that it is possible to create universal quantum computing solely with linear optical elements [812]. This landmark work opened a way for linear optical quantum computing. However, the daunting resource overhead makes the KLM scheme extremely hard to implement. In 2010, a more feasible, intermediate model--boson sampling--was proposed and analyzed by Aaronson and Arkhipov [813].
Compared to the KLM scheme, boson sampling is a much easier linear optical quantum computing model that can beat all classical computers with only 50-100 photons, but at the cost that it is no longer universal. In 2017, a variant called Gaussian boson sampling (GBS) was developed by Hamilton _et al._, in which the input single photons are replaced by single-mode squeezed states [814]. It is a new paradigm that not only provides a highly efficient approach to large-scale implementations but also offers potential applications in graph-based problems and quantum chemistry. In the past two decades, we have witnessed great progress in linear optical quantum computing [815, 816], especially on single-photon sources, linear optical networks, and single-photon detectors. These achievements have enabled a series of essential experimental results in the preparation of large-scale entangled states [817, 818, 819] and quantum computational advantage through boson sampling [18, 16, 820].

Table 5: Reported state-of-the-art performance of neutral-atom qubits. \(T_{1}\) is the spin relaxation time. \(T_{2}^{*}\) and \(T_{2}^{\prime}\) refer to the inhomogeneous and homogeneous dephasing times. \(F_{1}\) (\(F_{2}\)) and \(t_{1}\) (\(t_{2}\)) are the gate fidelity and the operation time of single(two)-qubit manipulation. \(N_{\rm d}\) and \(N_{\rm a}\) refer to qubit numbers in digital quantum processors and analog quantum simulators, respectively.

| Property | | \({}^{85,87}\)Rb/\({}^{133}\)Cs (alkali, electronic spin) | \({}^{87}\)Sr (nuclear spin) | \({}^{171}\)Yb (nuclear spin) |
| --- | --- | --- | --- | --- |
| **Coherence** | \(T_{1}\) | 4 s [782, 791] | \(\gg 10\) s [794] | 10-100 s [766] |
| | \(T_{2}^{*}\) | 4 ms [782, 791] | 21 s [794] | 3.7 s [766] |
| | \(T_{2}^{\prime}\) | \(\sim 1\) s [782, 791] | 40 s [794] | 7.9 s [766] |
| **Gate time** | \(t_{1}\) | 0.1-10 \(\mu\)s | | 0.7 \(\mu\)s [766] |
| | \(t_{2}\) | 0.4-2 \(\mu\)s | | 0.9 \(\mu\)s [788] |
| **Gate fidelity** | \(F_{1}\) | \(\sim 99.97\%\) | | 99.48% |
| | \(F_{2}\) | 97.4% | | 83% |
| **Qubit number** | \(N_{\rm d}\) | 6, 24 | | |
| | \(N_{\rm a}\) | 289 [795] | | |
| **Environment** | | Ultra-high vacuum \(P\sim 10^{-11}\) Torr, magnetic field \(B\sim 10\) G (all platforms) | | |

_Photon qubit.--_The photon has the richest set of degrees of freedom to encode as a qubit. In the following, we summarize several frequently used bases for photonic qubits.

\(\bullet\)_Polarization_: The qubit can be encoded in the two orthogonal polarization orientations of the electromagnetic field. It is widely used in linear optical quantum computing.

\(\bullet\)_Path_: Two transmission paths of single photons can form a qubit. The phase of a path in free space is usually unstable, while path encoding is well suited for integrated photonics.

\(\bullet\)_Time bin_: The early and late arrival times of a single photon encode a qubit.

\(\bullet\)_Frequency bin_: A frequency-bin qubit refers to the superposition state of a photon with two different frequencies (colors).
\(\bullet\)_Photon number_: The vacuum and single-photon states encode the qubit's 0 and 1, respectively.

\(\bullet\)_Orbital angular momentum (OAM)_: OAM describes the spatial distribution of light. In quantum theory, the OAM of a photon takes values \(L_{z}=m\hbar\). Any two OAM states with different \(m\) values form a photonic qubit.

_Quantum light sources.--_A single-photon source is a quantum light source that emits one and only one photon at a time, in a well-defined polarization and spatial-temporal mode. Specifically, the single photons should possess the same polarization, spatial-temporal mode, and transform-limited spectral profile for high-visibility Hong-Ou-Mandel-type quantum interference [821]. Spontaneous parametric down-conversion (SPDC) sources [822, 823] play a vital role in many fundamental quantum optics experiments; notably, the 2022 physics Nobel prize was awarded partially "for experiments with entangled photons". However, SPDC is intrinsically probabilistic and unavoidably mixed with multiphoton components. The single-photon efficiency is typically kept as low as \(\sim\)1% to suppress unwanted two-photon emission. To overcome this issue, one way is to multiplex many SPDC sources to boost the efficiency of single-photon sources [824]. Another approach is to directly generate high-quality single photons from a two-level system. Amongst all platforms [825, 826, 827, 828, 829, 830, 831, 832], semiconductor quantum dots [833] provide state-of-the-art single-photon sources with an overall efficiency of 57% [834]. This mainly benefits from a polarized microcavity developed by Wang _et al._ [835], whose polarization-dependent Purcell enhancement of single-photon emission allows the overall efficiency to surpass 50%. In the near future, the single-photon efficiency may be improved beyond 70% by better sample growth and boosted collection efficiency, which should surpass the efficiency required for universal quantum computing [836].

In quantum optics, another quantum light source is the squeezed state, a quantum state in which the uncertainty of the electric field strength for some phases is smaller than that of a coherent state. Such a state is commonly generated by strongly pumping nonlinear media [837]. It was shown that continuous-variable (CV) quantum computing can be constructed [838] using squeezed states and simple linear optical elements, such as beam splitters and phase shifters. Then, Gottesman, Kitaev, and Preskill (GKP) proposed a robust QEC code over CVs to protect against diffusive errors [839]. To date, the record squeezing is 15 dB, from a type-I optical parametric amplifier [840], and many experiments have been performed toward large-scale CV quantum computing [841, 842, 843]. Compared to discrete-variable (DV) quantum computing, CV has the valuable feature that entanglement emerges deterministically by mixing two squeezed states on a simple beam splitter; in the DV case, entanglement is hard to obtain, since the required nonlinear interaction is so weak that entanglement must be generated in a conditional fashion, namely by post-selection. Nevertheless, CV quantum information processing can never be perfect, because the quality of entanglement strongly depends on the amount of squeezing, which is extremely sensitive to loss.

_Linear optical networks.--_The interferometer acts as a unitary transformation on the single-photon Fock state or the single-mode squeezed state.
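This statement is easy to make concrete: an \(n\)-mode interferometer implements an \(n\times n\) unitary built from \(2\times 2\) beam-splitter-plus-phase-shifter blocks acting on pairs of adjacent modes. The minimal numpy sketch below simply composes such blocks in a rectangular layering and checks unitarity; the mode count, angles, and layering are illustrative assumptions, not a specific experimental design.

```python
import numpy as np

def bs_block(n, m, theta, phi):
    """Beam splitter + phase shifter acting on modes (m, m+1) of n modes."""
    T = np.eye(n, dtype=complex)
    T[m, m] = np.exp(1j * phi) * np.cos(theta)
    T[m, m + 1] = -np.sin(theta)
    T[m + 1, m] = np.exp(1j * phi) * np.sin(theta)
    T[m + 1, m + 1] = np.cos(theta)
    return T

rng = np.random.default_rng(0)
n = 5
U = np.eye(n, dtype=complex)
for layer in range(n):                    # rectangular layering of the mesh
    for m in range(layer % 2, n - 1, 2):  # alternate even/odd mode pairs
        U = bs_block(n, m, rng.uniform(0, np.pi / 2),
                     rng.uniform(0, 2 * np.pi)) @ U

print(np.allclose(U @ U.conj().T, np.eye(n)))  # composed mesh is unitary -> True
```

The decompositions discussed next (Reck and Clements) fix the number and ordering of such blocks so that any target unitary can be reached.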
In 1994, Reck _et al._ showed that a universal unitary transformation can be realized by beam splitters and phase shifters arranged in a triangular configuration [844]. In this scheme, the optical depth is \(2(n-1)-1\) and the number of beam splitters is \(\frac{n(n-1)}{2}\), where \(n\) is the number of modes. In 2016, Clements _et al._ demonstrated that an interferometer with a rectangular configuration is equivalent to a triangular one [845]. The optical depth is reduced to \(n\), while the number of beam splitters remains \(\frac{n(n-1)}{2}\); this is a more compact and robust design with a symmetric configuration. For boson sampling, the linear optical network should simultaneously combine high transmission, Haar randomness, and high spatial and temporal overlap. There are many different implementation approaches, such as micro-optics [846, 847, 16, 820], time-bin loops [848, 18], and integrated photonic circuits [849, 850, 851]. For universal quantum computing, it should further be programmable. Micro-optics possesses the highest transmission efficiency but lacks a demonstration of programmability. Time-bin loops and integrated on-chip circuits are programmable but suffer from serious losses. Reducing losses while maintaining programmability is a long-sought goal.

_Boson sampling.--_In 2011, Aaronson and Arkhipov argued that a passive linear optics interferometer with single-photon state inputs cannot be efficiently simulated [813]. This model is the so-called boson sampling, a non-universal quantum computing model much easier to build than a universal quantum computer. In boson sampling, \(n\) identical bosons are sent into an \(m\)-mode (\(m\gg n\)) Haar-random interferometer, and the output distribution is sampled in the photon number basis. Because of the bosonic statistics, the probability amplitudes of the final state are related to permanents of submatrices, whose computation is #P-complete. It is strongly believed that a moderate-size boson sampling machine, even an approximate one with a multiplicative error, will be intractable to simulate with state-of-the-art classical computers [813]. More importantly, boson sampling is a strong candidate to demonstrate quantum computational supremacy [852], an important milestone in the quantum computing field. In 2013, four groups simultaneously reported small-scale proof-of-principle boson sampling experiments [849, 853, 848, 851]. Indeed, the final photon distribution is proportional to the modulus squared of the permanent. However, all these experiments were based on SPDC sources, which are intrinsically probabilistic and mixed with multi-photon components. In an attempt to solve the intrinsic probabilistic problem of SPDC, scattershot boson sampling was proposed in 2014 by Lund _et al._ [854] and was first demonstrated in 2015 by Bentivegna _et al._ [855]. Then, Zhong _et al._ improved the photon number to five [817]. Though scattershot boson sampling is theoretically elegant, realizing quantum advantage with it experimentally is hard due to the challenges of ultra-high heralding efficiency, fast optical switches, and the large number of SPDC sources required. A more direct way to solve the problems of SPDC is to use on-demand single-photon sources based on coherently driving a quantum two-level system.
In 2017, Wang _et al._ successfully performed the first five-photon boson sampling experiment using an actively demultiplexed quantum-dot single-photon source and an ultra-low-loss photonic circuit, showing a sampling rate 24,000 times faster than all previous experiments and beating early classical computers -- ENIAC and TRADIC [846]. In 2019, Wang _et al._ demonstrated boson sampling with 20 input photons and a 60-mode interferometer [847]. Up to 14 photons were detected at the output, and the output state Hilbert space reached \(3.7\times 10^{14}\) dimensions, over 10 orders of magnitude larger than in previous works.

A more efficient way to demonstrate quantum computational advantage is through GBS: its single-mode squeezed-state inputs allow multi-photon components, while its computational complexity remains as hard as that of original boson sampling. In 2020, a landmark experiment was performed by Zhong _et al._ [16], successfully demonstrating quantum computational advantage. GBS was then improved to 50 single-mode squeezed-state inputs and a 144-mode interferometer, with up to 113-click coincidences detected [820] (see Fig. 9). These rudimentary photonic quantum computers, named _Jiuzhang_ in honor of an ancient Chinese mathematical classic -- "The Nine Chapters on the Mathematical Art" -- yield an output state space dimension of \(10^{43}\) and a sampling rate \(10^{10}\) times faster than the state-of-the-art simulation strategy running on supercomputers. Technically, _Jiuzhang_ is partially programmable owing to precise control of the phases of the input two-mode squeezed states (TMSSs).

Figure 9: Experimental setup of _Jiuzhang_. It is mainly composed of five parts. In the upper-left region, high-intensity transform-limited laser pulses with a wavelength of 775 nm are prepared to pump 25 two-mode squeezed state (TMSS) sources (left region, labeled in orange). Meanwhile, a continuous-wave 1450-nm laser is guided to co-propagate with the 25 TMSS sources. The 1550-nm two-mode squeezed light is collected into temperature-insensitive single-mode fiber, of which 5 m of bare fiber is wound around a piezo-electric cylinder to control the source phase (center region). In the center-right region, using optical collimators and mirrors, the 25 TMSSs are injected into a photonic network, and 25 corresponding light beams (colored in yellow) with a wavelength of 1450 nm and a power of about 0.5 \(\mu\)W are collected for phase locking. The 144 output modes are distributed into four parts using arrays of tunable periscopes and mirrors. Finally, the output modes are detected by 144 superconducting nanowire single-photon detectors and registered by a 144-channel ultra-fast electronic coincidence unit.

To make boson sampling fully programmable, a notable way is to encode photonic modes into time bins [856]. In this approach, the splitting ratio and phase of every time bin can be changed at will in real time. In 2017, He _et al._ reported the first time-bin-encoded boson sampling combining an on-demand quantum-dot single-photon source [848]. It is worth noting that this time-bin multimode network is fully electrically programmable and fully connected (without zero elements).
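The classical hardness underlying all of these experiments traces back to matrix permanents (and, for GBS, to the hafnian). Even Ryser's formula, the standard exact algorithm for the permanent, runs in exponential time; at the \(n\sim 50\)-\(100\) photons of the experiments above it is far out of reach. A minimal sketch for small matrices, purely as an illustration:

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
    of (-1)^|S| * prod_i sum_{j in S} A[i, j]. Exponential-time by nature."""
    n = A.shape[0]
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

print(permanent(np.ones((3, 3))))  # 6.0, i.e., 3!, as expected for all-ones
```

In boson sampling, each output probability is proportional to \(|\mathrm{perm}(A_S)|^2\) for a submatrix \(A_S\) of the interferometer unitary, which is why sampling from the output distribution is believed to be classically intractable.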
Recently, combining the aforementioned GBS approach, Madsen _et al._ reported quantum advantage by building a GBS machine with time-bin loops, using 216 single-mode squeezed-state inputs and 16 photon-number-resolving detectors, with up to 219 photons finally detected [18]. Note that this machine is partially programmable, and most of the matrix elements are zero, i.e., it is only partially connected. Because the output probability of GBS is related to the hafnian, which counts the perfect matchings of a graph, GBS links to several potentially practical applications [857, 858, 859, 860]. Next, GBS will naturally be developed as a special-purpose photonic platform to investigate these real-world applications, as a step toward NISQ processing [861].

_Scalability.--_For large-scale quantum information processing, the photonic platform faces two major hurdles at the current stage -- loss and the ultra-weak interaction between independent photons. In principle, loss is both locatable and detectable, so it should be much easier to handle than computational errors (such as X and Z errors). For DV quantum computing, theoretical analysis [862, 863] suggests that at most 50% loss is allowed for scalable quantum computing, which is far less stringent than the \(\sim\)1% threshold of the surface code for computational errors. In one-way quantum computing, some works pointed out that for an \(m\)-photon cluster state, a fusion success probability of at least \(1/(m-1)\) is required for universal quantum computation [864], while tight upper bounds require further work; for instance, the upper percolation threshold of 3-photon clusters is bounded by 0.5898 [864]. For CV quantum computing, loss causes the squeezed states to move closer to the vacuum state while losing their quantum features. Encouragingly, fault-tolerant quantum computing with GKP qubits requires a squeezing level of only \(\sim\)10 dB [865], which allows a high loss threshold for scalable quantum computing. In summary, losses in both DV and CV photonic quantum computing can be handled with relatively large thresholds.

Photon-photon interaction at the single-photon level is a fundamental question both in photonic quantum computing and in quantum optics. It is strongly believed that nonlinear interactions are needed to deterministically generate entanglement between photons [866]. Over the last two decades, several approaches have been developed to address this issue, such as electromagnetically induced transparency [867], atom-cavity interaction [868], atom-atom interaction [869], and atoms in chiral waveguides [870]. In 2016, Hacker _et al._ reported a photon-photon gate with an efficiency of 4.8% and a fidelity of 76.2% [871], which suffers from inefficient photon storage and retrieval during the whole process; the gate fidelity is limited by the precision of spin characterization. The same issues arise in several recent experiments utilizing the Rydberg blockade [872, 873, 874]. By storing single photons in a long-lived Rydberg state, the efficiency of single-photon storage and retrieval has been improved to 39% [873]. In the future, by harnessing the strong nonlinearity in \(\chi^{(2)}\) media and cQED systems, photon-photon gates may be realized with both high fidelity and high efficiency, surpassing the thresholds required for fault-tolerant quantum computing.
In this case, the photonic platform will provide a perfect stage, and potentially a leading room-temperature platform, for fault-tolerant quantum computing.

## X Outlook and Conclusion

We have reviewed prominent quantum computing platforms that have seen significant advances over the last decade. Currently, these quantum platforms are at different stages of maturity, and each system exhibits both advantages and limitations. To achieve better control fidelity and scalability of the various quantum platforms, challenges must be addressed to match the requirements for large-scale quantum computing on the different platforms. For solid-state quantum systems, high-quality materials and advanced fabrication technologies are essential for the quality of qubits. For example, low-charge-noise interfaces are critical for semiconductor qubits, which could be improved by importing industrial IC techniques and encouraging a transfer from laboratory-level engineering to foundry-level fabrication. For photonic and atom-based qubit systems, fundamental and advanced techniques for the precise control of individual atoms and for atom-photon interconnects between multiple processors could be developed. In the case of NMR systems, one approach that may overcome the difficulty of achieving scalable, high-fidelity control of large spin systems in liquid-state NMR quantum computation is to shift to solid-state NMR. For qubits based on NV centers, the deterministic and controllable production of NV centers while preserving the coherence time needs to be further developed.

For all quantum computing platforms, the typical overheads for scalable quantum computing, such as crosstalk, gate heating, and frequency crowding, should also be considered carefully. Techniques that integrate high-fidelity control, such as DD schemes that suppress errors and crosstalk between multiple qubits, are necessary as the number of qubits grows, so that system size can increase without compromising control quality. By merging the techniques developed for quantum computing, we might also gain better performance in quantum metrology and quantum simulation. In addition to technological advances addressing these challenges, efforts in integrating multiple quantum platforms may also be essential for exploiting the advantages of different platforms in future applications. For instance, tasks for computation, communication, and storage may be allocated to different units. Moreover, the development of hybrid quantum-classical algorithms is also critical for developing "killer applications" for near-term quantum computers. Proper integration of different quantum platforms and hybrid classical-quantum computing may provide significant advantages in real-world applications.

In the future, significant breakthroughs, such as fault-tolerant quantum operations and new quantum algorithms, will be achieved in medium-sized quantum systems. It is also desirable to achieve quantum advantage for applications in quantum chemistry, quantum machine learning, etc. Further developments could include specialized quantum machines, quantum clouds, and applications of quantum computing systems in quantum sensing and simulation. Finally, as quantum platforms develop toward fully scalable fault-tolerant quantum computing, we anticipate the emergence of broad real-world applications of quantum computing.

###### Acknowledgements.
We thank Fei Yan for his contributions to the superconducting qubits section, Xiaodong He for valuable discussions, Chao-Yang Lu, Andrea Morello, and Lieven M. K. Vandersypen for valuable comments, and Jiasheng Mai for figure polishing. This work was supported by the National Natural Science Foundation of China (Grants No. U1801661, 12174178, 11905098, 12204228, 12004165, 11875159, 12075110, 92065111, 12275117, 11905099, 11975117, 12004164, 62174076, 92165210, 11904157, 1166116108, 11927811, 12004371), National Key Research and Development Program of China (Grant No. 2019YFA0308100 and No. 2018YFA0306600), the Key-Area Research and Development Program of Guangdong Province (2018B030326001), the Guangdong Innovative and Entrepreneurial Research Team Program (2016ZT06D348, 2019ZT08C044), the Guangdong Provincial Key Laboratory (2019B121203002), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021B1515020070 and No. 2022B1515020074), the Natural Science Foundation of Guangdong Province (2017B030308003), the Science, Technology and Innovation Commission of Shenzhen, Municipality (Grants No. KYT-DPT20181011104202253, KQTD20210811090049034, K21547502, ZDSYS20190902092905285, KQTD20190929173815000, KQTD20200820113010023, JCYJ20200109140803865, JCYJ20170412152620376), Shenzhen Science and Technology Program (RCBS20200714114820298, BCYX2020071414522109), the Shenzhen-Hong Kong Cooperation Zone for Technology and Innovation (HZQB-KCZYB-2020050), the Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0303205), Research Grants Council of Hong Kong (GRF No. 14308019), the Research Strategic Funding Scheme of The Chinese University of Hong Kong (No. 3133234). F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.

**Author Contributions** M.-H.Y., Y.H., X.-H.D., J.L., D.L., B.-C.L., P.H., and Y.L. wrote the abstract and introduction. M.-H.Y. and B.C. wrote the quantum algorithms section. X.G., Y.Z., and F.N. wrote the superconducting qubits section. Y.L. wrote the trapped-ion qubits section. Y.H., P.H., and G.H. wrote the semiconductor spin qubits section. D.L. and C.Q. wrote the NV centers section. J.L., T.X., and X.-H.P. wrote the NMR system section. S.Y. wrote the neutral atom arrays section. H.W. wrote the photonic quantum computing section. X.-H.D., P.H., Y.H., and M.-H.Y. wrote the outlook and conclusion. The manuscript was revised by X.-H.D., P.H., J.Z., S.Z., F.N. and D.Y. with input from all other authors. D.Y. supervised the review project.
2308.06504
Accelerating Relaxation Dynamics in Open Quantum System with Liouvillian Skin Effect
We investigate a non-Hermitian model featuring non-reciprocal gradient hoppings. Through an in-depth analysis of the Liouvillian spectrum and dynamics, we confirm the emergence of the Liouvillian skin effect resulting from the non-reciprocal nature of hoppings in this model. Furthermore, we observe that the presence of gradient hopping strength leads to an accelerated relaxation time for the system. Through numerical investigations of the Liouvillian gap, relaxation time, and steady-state localization length, we discover that the relaxation time in this model cannot be explained by the currently established relationship associated with the Liouvillian skin effect. This discrepancy highlights the need for further exploration and theoretical advancements to fully comprehend the intricate mechanisms underlying quantum relaxation processes. Motivated by these findings, we propose a theoretical approach to realize this non-Hermitian model in an atomic system with a sideband structure by employing adiabatic elimination technique. These results contribute to our deeper comprehension of quantum relaxation dynamics and provide theoretical backing for the development of techniques aimed at controlling quantum relaxation processes.
Zeqing Wang, Yao Lu, Yi Peng, Ran Qi, Yucheng Wang, Jianwen Jie
2023-08-12T08:41:48Z
http://arxiv.org/abs/2308.06504v1
# Accelerating Relaxation Dynamics in Open Quantum System with Liouvillian Skin Effect

###### Abstract

We investigate a non-Hermitian model featuring non-reciprocal gradient hoppings. Through an in-depth analysis of the Liouvillian spectrum and dynamics, we confirm the emergence of the Liouvillian skin effect resulting from the non-reciprocal nature of hoppings in this model. Furthermore, we observe that the presence of gradient hopping strength leads to an accelerated relaxation time for the system. Through numerical investigations of the Liouvillian gap, relaxation time, and steady-state localization length, we discover that the relaxation time in this model cannot be explained by the currently established relationship associated with the Liouvillian skin effect. This discrepancy highlights the need for further exploration and theoretical advancements to fully comprehend the intricate mechanisms underlying quantum relaxation processes. Motivated by these findings, we propose a theoretical approach to realize this non-Hermitian model in an atomic system with a sideband structure by employing the adiabatic elimination technique. These results contribute to our deeper comprehension of quantum relaxation dynamics and provide theoretical backing for the development of techniques aimed at controlling quantum relaxation processes.

## I Introduction

The study of open quantum systems, which takes into account the interactions with the surrounding environment, is a fundamental and captivating research field [1; 2]. Many open quantum systems can be effectively described by non-Hermitian Hamiltonians, which have attracted widespread attention in the past two decades [3; 4; 5; 6; 7; 8; 9]. Unlike closed quantum systems, open quantum systems experience a breakdown of time reversibility due to the stochastic coupling with the environment. This breakdown leads the open quantum system to eventually reach a steady state, in which it remains for the rest of the evolution. This evolution is referred to as the relaxation process and occurs on a characteristic timescale known as the relaxation time, denoted as \(\tau\). The relaxation time serves as a significant intrinsic timescale for understanding open quantum systems. In a specific class of open quantum systems characterized by the Markovian Lindblad master equation, the relaxation time \(\tau\) is typically inversely proportional to the Liouvillian gap \(\Delta\) of the system [10; 11].

Furthermore, considerable attention has been given to the skin effect in Markov process-based open quantum systems. The existence of the skin effect in such systems has recently been confirmed and named the Liouvillian skin effect (LSE) [12; 13]. Compared with the non-Hermitian skin effect (NHSE), which describes the localization of the eigenstates of non-Hermitian Hamiltonians [3; 4; 5; 6; 7; 8; 9], the LSE denotes the localization of Liouvillian eigenmodes. In the LSE, the system tends to relax towards the boundaries of the system. Interestingly, it has been discovered that relaxation processes are slowed down in the presence of the LSE, even without the closing of the Liouvillian gap. The relationship between the relaxation time \(\tau\) and the Liouvillian gap \(\Delta\) is modified by the ratio of the system size \(N\) to the localization length \(\xi\) of the Liouvillian skin mode [12]:

\[\tau\sim\frac{1}{\Delta}\left(1+\frac{N}{\xi}\right). \tag{1}\]

This relationship significantly advances our understanding of relaxation physics in open quantum systems.
It raises the question of its universality across all open quantum systems with the LSE. Moreover, if the relationship is not universal, it prompts further investigation into whether systems deviating from it exhibit even more intriguing Liouvillian dynamics. To address these inquiries, we investigate a non-Hermitian model with non-reciprocal gradient hopping. Firstly, we establish the existence of the LSE by examining the Liouvillian eigenmodes and dynamics of the model. Additionally, we observe a significant acceleration of the relaxation process towards the steady state due to the presence of gradient non-Hermitian hopping, which modifies the relaxation relation stated in Eq. (1). Furthermore, we propose a method to implement this non-Hermitian model in atomic systems based on the sideband structure, utilizing the adiabatic elimination technique. In the following, we introduce the non-Hermitian model and confirm the presence of the LSE through an analysis of the Liouvillian spectrum and dynamics in Section II. We then delve into the investigation of the relaxation time of the non-Hermitian model in Section III. Next, we discuss our proposal for realizing the non-Hermitian model in atomic systems in Section IV. Finally, we present our concluding remarks in Section V.

## II Liouvillian skin effect

Here we study the non-Hermitian model described by the Lindblad master equation as follows,

\[\dot{\hat{\rho}}=-i\left[\sum_{n=0}^{N-1}E_{n}|n\rangle\langle n|,\hat{\rho}\right]+\sum_{n=1}\sum_{j=L,R}\mathcal{D}[\hat{L}_{n,j}]\hat{\rho}, \tag{2}\]

where the Lindblad super-operator is defined as \(\mathcal{D}[\hat{\mathcal{A}}]\hat{\rho}=\hat{\mathcal{A}}\hat{\rho}\hat{\mathcal{A}}^{\dagger}-\frac{1}{2}\{\hat{\mathcal{A}}^{\dagger}\hat{\mathcal{A}},\hat{\rho}\}\), and the Lindblad jump operators \(\hat{L}_{n,L(R)}\) are given by

\[\hat{L}_{n,L}=\sqrt{J_{n,L}}|n-1\rangle\langle n|,\qquad\hat{L}_{n,R}=\sqrt{J_{n,R}}|n\rangle\langle n-1|, \tag{3}\]

with left hopping strength \(J_{n,L}\) and right hopping strength \(J_{n,R}\). \(N\) is the number of sites \(\{|n\rangle\}\), i.e., the system size. In this model, both the on-site energies \(E_{n}\) and the incoherent hoppings \(J_{n,R(L)}\) can be gradient. The Lindblad master equation we consider in Eq. (2) can be rewritten as

\[\dot{\hat{\rho}}=\mathcal{L}[\hat{\rho}], \tag{4}\]

where \(\mathcal{L}\) is the Liouville super-operator defined in an \(N^{2}\)-dimensional Hilbert space [1]. The right and left eigenmodes of \(\mathcal{L}\) are then defined as

\[\mathcal{L}[\hat{\rho}_{k}^{r}]=\lambda_{k}\hat{\rho}_{k}^{r},\qquad\mathcal{L}^{\dagger}[\hat{\rho}_{k}^{l}]=\lambda_{k}^{*}\hat{\rho}_{k}^{l}, \tag{5}\]

with \(k=0,1,2,\cdots,N^{2}-1\). Here \(r\) (\(l\)) denotes the normalized right (left) eigenmode, with \(\mathrm{Tr}[\sqrt{(\hat{\rho}_{k}^{r(l)})^{\dagger}\hat{\rho}_{k}^{r(l)}}]=1\). Thus, any initial state of the system \(\hat{\rho}_{\mathrm{ini}}\) can be expanded in terms of the eigenmodes as

\[\hat{\rho}_{\mathrm{ini}}=\sum_{k=0}^{N^{2}-1}c_{k}\hat{\rho}_{k}^{r}, \tag{6}\]

where the coefficients \(c_{k}\) are given by \(c_{k}=\mathrm{Tr}[(\hat{\rho}_{k}^{l})^{\dagger}\hat{\rho}_{\mathrm{ini}}]/\mathrm{Tr}[(\hat{\rho}_{k}^{l})^{\dagger}\hat{\rho}_{k}^{r}]\). As a result, the system evolves to the state

\[\hat{\rho}(t)=\sum_{k=0}^{N^{2}-1}c_{k}e^{\lambda_{k}t}\hat{\rho}_{k}^{r}, \tag{7}\]

where \(\lambda_{k}\) represents the decay rate associated with the eigenmode \(\hat{\rho}_{k}^{r}\).
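For readers who want to reproduce spectra of this model numerically, a minimal numpy sketch of the vectorized Liouvillian of Eqs. (2)-(3) for the fully gradient case \(E_{n}=nE\), \(J_{n,j}=nJ_{j}\) is given below; it uses the column-stacking identity \(\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\,\mathrm{vec}(\rho)\). This is an illustrative sketch, not the authors' code, and the parameter values quoted for Fig. 1(a) are interpreted here as angular frequencies (an assumption of the sketch).

```python
import numpy as np

def liouvillian(N, E, JL, JR):
    """Vectorized Liouvillian of Eqs. (2)-(3) with E_n = n E, J_{n,j} = n J_j."""
    H = np.diag([n * E for n in range(N)]).astype(complex)
    I = np.eye(N, dtype=complex)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))  # coherent part -i [H, rho]
    for n in range(1, N):
        for A in (np.sqrt(n * JL) * np.outer(I[n - 1], I[n]),   # |n-1><n|
                  np.sqrt(n * JR) * np.outer(I[n], I[n - 1])):  # |n><n-1|
            AdA = A.conj().T @ A
            L += (np.kron(A.conj(), A)          # A rho A^dag
                  - 0.5 * np.kron(I, AdA)       # -(1/2) A^dag A rho
                  - 0.5 * np.kron(AdA.T, I))    # -(1/2) rho A^dag A
    return L

lam = np.linalg.eigvals(liouvillian(N=20, E=1.0e6, JL=184.3, JR=118.0))
lam = lam[np.argsort(-lam.real)]
print(lam[0].real)   # ~ 0: the steady-state eigenvalue
print(-lam[1].real)  # Liouvillian gap Delta = |Re lambda_1|
```

Diagonalizing this \(N^{2}\times N^{2}\) matrix yields the eigenvalues \(\lambda_{k}\) and eigenmodes discussed next.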
Figure 1: (Color online) Liouvillian spectrum. The eigenvalues of the Liouvillian operator: (a) with both the on-site potential and the nonreciprocal hoppings being gradient [\(E_{n}=nE,J_{n,R(L)}=nJ_{R(L)}\)]; (b) with only the on-site potential being gradient [\(E_{n}=nE,J_{n,R(L)}=J_{R(L)}\)]; (c) with only the nonreciprocal hoppings being gradient [\(E_{n}=E,J_{n,R(L)}=nJ_{R(L)}\)]. (d) The eigenmodes of the Liouvillian operator as marked in (a). The number of sites is \(N=20\). Other parameters: \(E=1.0\) MHz, \(J_{L}=184.3\) Hz, \(J_{R}=118.0\) Hz.

The time evolution of an open quantum system is characterized by quantum dynamical semigroups, which implies that the fate of the system is determined by the steady state \(\hat{\rho}_{s}\), while the contributions from all other eigenmodes decay completely. The steady state \(\hat{\rho}_{s}\) corresponds to the eigenmode of the Liouvillian superoperator \(\mathcal{L}\) with zero eigenvalue (excluding purely imaginary eigenvalues, which would lead to non-stationary steady states [14; 15]). In other words, \(\mathcal{L}[\hat{\rho}_{s}]=0\). This implies that the real parts of all other eigenvalues are negative, allowing us to order the eigenvalues \(\lambda_{k}\) in descending order of their real parts as \(0=\lambda_{0}>\mathrm{Re}[\lambda_{1}]\geq\cdots\geq\mathrm{Re}[\lambda_{N^{2}-1}]\). The Liouvillian gap, denoted as \(\Delta=|\mathrm{Re}[\lambda_{1}]|\), is defined by the eigenvalue of the Liouvillian superoperator with the largest nonzero real part. This gap is typically associated with the asymptotic decay rate [16]. The time-dependent density matrix can be expressed as [12]

\[\hat{\rho}(t)=\hat{\rho}_{s}^{r}+\sum_{k=1}^{N^{2}-1}c_{k}e^{\lambda_{k}t}\hat{\rho}_{k}^{r}. \tag{8}\]

To investigate the LSE in our system, we first numerically solve Eq. (5) to obtain the Liouvillian spectrum shown in Fig. 1, considering a system with 20 sites. As depicted in Fig. 1(a), eigenvalues with non-zero imaginary parts appear within a central region, indicating their contribution to periodic oscillations in the relaxation dynamics. The inset plot confirms the uniqueness of the steady state, which primarily results from the breaking of all symmetries of the system by the Lindblad jump operators in Eq. (3) [17]. In contrast, Fig. 1(b) displays the spectrum when the gradient of the hoppings is turned off, resulting in reduced absolute values of the real parts of the eigenvalues. Furthermore, the distribution of the real parts of the eigenmodes with non-zero imaginary parts in the middle section of the spectrum becomes more uniform, indicating a more consistent decay rate for these modes. Figure 1(c) demonstrates that when the on-site potential gradient is eliminated, eigenmodes with non-zero imaginary parts disappear. In all cases shown in Figs. 1(a-c), the steady state remains unique, as depicted in the inset plots, indicating that the non-reciprocal hoppings do not alter the symmetry of the system. Additionally, Figs. 1(a-b) show that the Liouvillian gap is larger when gradient hopping is present, suggesting a faster relaxation rate towards the steady state. In Fig. 1(d), we present the density matrices of the eigenmodes labeled in Fig. 1(a). The steady state is localized at the left boundary of the system [see Fig. 1(d-i)], and as the real part of the eigenvalues increases, the corresponding eigenmodes tend to occupy sites near the right boundary [see Fig. 1(d-iv)].
For eigenmodes whose eigenvalues have non-zero imaginary parts, the density matrices exhibit non-zero off-diagonal elements, leading to oscillatory decaying behavior during the relaxation towards the steady state.

Taking the system size as \(N=101\) and setting the initial state of the system to \(|n=50\rangle\), we make noteworthy observations. When the system undergoes reciprocal hoppings [Figs. 2(a-b)], it displays symmetric dynamical evolution across the system. However, in the case of non-reciprocal hoppings [Figs. 2(c-d)], the symmetric dynamical evolution breaks down, and the system evolves towards the boundaries, remaining there indefinitely. This intriguing phenomenon is known as the LSE. Furthermore, consistent with the findings in Fig. 1, when the hoppings in the system are gradient [Figs. 2(a,c)], the system relaxes to the boundaries at a faster rate. As the hoppings in our model exhibit a gradient nature, we quantitatively study the key factors influencing the relaxation process under such special hoppings: the Liouvillian gap, the relaxation time, and the localization length.

Figure 2: (Color online) Liouville dynamics. The hopping strengths are gradient in (a,c) [\(J_{n,R(L)}=nJ_{R(L)}\)] and homogeneous in (b,d) [\(J_{n,R(L)}=J_{R(L)}\)]. (a,b) show the absence of the LSE with hopping strengths \(J_{R}=J_{L}=184.3\) Hz, and (c,d) confirm the LSE with nonreciprocal hopping strengths \(J_{R}=100J_{L}=184.3\) Hz. The number of sites is \(N=100\). Other parameters: \(E_{n}=nE\), \(E=1.0\) MHz. The panels in the same row share the same \(y\)-axis.

## III Relaxation time

The relaxation time in an open quantum system refers to the characteristic duration it takes for the system to reach its equilibrium or steady state [1]. This timescale is influenced by several factors, including the strength of the system's interaction with the environment, the properties of the environment itself, and the specific dynamics governing the system. Experimental determination of the relaxation time involves observing the temporal evolution of relevant observables or analyzing the decay rates of specific quantities. Understanding the relaxation time is crucial in the study of open quantum systems, as it provides valuable insights into system behavior, stability, and the timescales associated with achieving a steady state. Moreover, it holds significant importance in practical applications like quantum information processing, where effective control and mitigation of relaxation processes are essential for preserving the coherence and reliability of quantum states and operations [18].

To discuss the variation of the Liouvillian gap and the relaxation time with the system size and the hopping strength, we consider, without loss of generality, the case in which the steady state of the system \(\rho_{s}\) localizes on the left boundary site \(|0\rangle\), namely \(J_{n,L}>J_{n,R}\). Therefore, we initialize the system in the right boundary site \(|N-1\rangle\) and define the relaxation time \(\tau\) as the time when the decay of the population of the right boundary site \(|N-1\rangle\) reaches \(1/e\) of \(\rho_{s,N-1}\). The relaxation time \(\tau\) is then given by

\[\rho_{s,N-1}-\rho_{N-1}(\tau)=\frac{\rho_{s,N-1}}{e}, \tag{9}\]

where \(\rho_{s,n}\) represents the occupation probability of the steady state on the site \(|n\rangle\). In Fig. 3, we consider two cases: homogeneous hopping strength (a,c) and gradient hopping strength (b,d).
When the system undergoes reciprocal and homogeneous hoppings [double-dotted dashed purple lines in Figs. 3(a) and (c)], we observe that the relaxation time \(\tau\) (proportional to \(N^{2}\)) and the Liouvillian gap \(\Delta\) (proportional to \(N^{-2}\)) follow the relationship \(\tau\propto 1/\Delta\), reflecting diffusive relaxation [19; 20; 21; 12]. However, when the hopping strength is gradient and reciprocal, the system relaxes to the steady state at an accelerated rate (\(\tau\propto N\)), as indicated by the double-dotted dashed purple lines in Figs. 3(b,d). Interestingly, when the hopping strength is non-reciprocal and homogeneous, the simple relationship \(\tau\propto 1/\Delta\) is broken. As the non-reciprocity increases, the value of \(\Delta\) tends to remain invariant with the system size [see the double-dotted double-dashed blue lines in Figs. 3(a-b)], while the relaxation time \(\tau\) still scales with the system size [see the double-dotted double-dashed blue lines in Figs. 3(c-d)]. In this case, the relationship between the Liouvillian gap and the relaxation time needs to be described by Eq. (1), where the localization length \(\xi\) is size-independent. However, as shown in Figs. 3(b) and 3(d), when the hopping is gradient, the value of \(\Delta\) still does not change with the system size as the non-reciprocity increases, but the relaxation time \(\tau\) scales as \(N^{1/5}\). This result indicates that the gradient significantly accelerates the relaxation process. Furthermore, if the system still followed Eq. (1), i.e., \(\tau\sim\Delta^{-1}(1+N/\xi)\propto N^{1/5}\), the localization length \(\xi\) would have to be size-dependent and scale as \(\xi\sim N^{4/5}\).

Figure 3: (Color online) Liouvillian gap and relaxation time. The hopping strengths are homogeneous in (a,c) [\(J_{n,R(L)}=J_{R(L)}\)] and gradient in (b,d) [\(J_{n,R(L)}=nJ_{R(L)}\)]. The solid gray lines give the reference size scaling. Other parameters: \(E_{n}=nE\), \(E=1.0\) MHz, \(J_{L}=184.3\) Hz. The panels in the same row share the same \(y\)-axis.

Figure 4: (Color online) Localization length. The hopping strengths are homogeneous in (a,c) [\(J_{n,R(L)}=J_{R(L)}\)] and gradient in (b,d) [\(J_{n,R(L)}=nJ_{R(L)}\)]. (a-b) show the profiles of the steady states for different system sizes with \(\sqrt{J_{R}/J_{L}}=0.8\). (c-d) show the localization length of the steady states. Other parameters are the same as in Fig. 3. The panels in the same row share the same \(y\)-axis.

To verify whether the localization length of the system follows the above analysis, we plot the localization length as a function of the system size in Figs. 4(a-b). We observe that when the system exhibits the LSE, regardless of whether the hopping strength is homogeneous [Fig. 4(a)] or gradient [Fig. 4(b)], the profile of the steady state for smaller systems is overlapped by that for larger systems, indicating that the localization length of the steady state is independent of the system size. This size-independent behavior is further supported by Figs. 4(c-d), which show that the localization length of the steady state is solely determined by the nonreciprocal hopping ratio \(J_{R}/J_{L}\), regardless of whether the hopping strength is homogeneous [Fig. 4(c)] or gradient [Fig. 4(d)]. Therefore, the relaxation behavior of our model cannot be described by Eq. (1).
This indicates that the general relationship governing the relaxation time for open quantum systems with the LSE has not yet been discovered.

To gain further insight into the physics underlying our results, we closely follow the analysis presented in [12]. The acceleration of the relaxation process can be understood using Eq. (8). Among all the eigenmodes, \(\rho_{1}^{r}\) has the eigenvalue whose real part has the smallest nonzero absolute value, leading to the slowest relaxation to the steady state. Consequently, the entire relaxation timescale is determined by the coefficient corresponding to \(\rho_{1}^{r}\), denoted as \(c_{1}\). It is always possible to prepare the initial state \(\hat{\rho}_{\rm ini}\) with an overlap of \(O(1)\) with \(\rho_{1}^{l}\). When the system exhibits the Liouvillian skin effect (LSE) under homogeneous hopping, all the eigenmodes are localized near the boundary and decay exponentially, resulting in \(c_{1}=O(1)/{\rm Tr}[(\hat{\rho}_{1}^{l})^{\dagger}\hat{\rho}_{1}^{r}]\sim e^{O(N/\xi)}\) (without the LSE, \(c_{1}\) is independent of the system size \(N\), and the relaxation law is \(\tau\sim 1/\Delta\)). The relaxation time \(\tau\) is then determined by \(e^{O(N/\xi)}e^{-\tau\Delta}\sim e^{-1}\), leading to the relation given in Eq. (1). In Fig. 5(a), we numerically calculate \(-\ln|{\rm Tr}[(\hat{\rho}_{1}^{l})^{\dagger}\hat{\rho}_{1}^{r}]|\) and verify the size scaling \(N\) for homogeneous hopping. Furthermore, we observe that for gradient hopping, \(-\ln|{\rm Tr}[(\hat{\rho}_{1}^{l})^{\dagger}\hat{\rho}_{1}^{r}]|\) shows a tendency to converge to \(N^{1/5}\) as the system size increases. Following the analysis above, we infer that \(c_{1}=O(1)/{\rm Tr}[(\hat{\rho}_{1}^{l})^{\dagger}\hat{\rho}_{1}^{r}]\sim e^{O(N^{1/5}/\xi)}\) for gradient hopping. Consequently, based on the results in Figs. 3-5, we deduce the relaxation time for our model as \(\tau\sim\Delta^{-1}(1+N^{1/5}/\xi)\) in the thermodynamic limit. We numerically check the size scaling \(N^{1/5}\) in Fig. 5(b), where the relaxation time shows a tendency to converge to the \(N^{1/5}\) scaling at larger system sizes. This can be understood as the system size \(N\) being effectively shortened to \(N^{1/5}\) by the gradient hopping, which breaks the translation symmetry of the system by inducing an additional effective force. As a result, regardless of whether the system evolves from the left to the right or from the right to the left, the overall dynamics are accelerated, with the only difference being the stage of the evolution at which the acceleration occurs. Precisely verifying and proving these results requires new methods to analytically solve Eq. (2); this remains our primary focus for future research.

## IV Proposal of the non-Hermitian model with gradient hopping

We use the trapped-ion system as an example to illustrate the effective non-Hermitian model. The method employed here is applicable to other atomic systems that possess a motional sideband structure [22; 23; 24]. As depicted in Fig. 6(a), the trapped-ion system consists of two internal electronic energy levels, the ground state \(|g\rangle\) and the excited state \(|e\rangle\), described by the Hamiltonian (we set \(\hbar=1\) throughout this work)

\[\hat{H}_{i}=\frac{\omega_{0}}{2}\left(|e\rangle\langle e|-|g\rangle\langle g|\right). \tag{10}\]

The trap employed here provides dynamical confinement in the \(y\)-\(z\) plane and static confinement in the \(x\) direction [25].
The motional sidebands of the internal states are constructed using the energy levels \(\{|n\rangle,n=0,1,2,\cdots\}\) of the harmonic trap in the \(x\) direction, which are separated by the frequency \(\nu\) and described by

\[\hat{H}_{e}=\sum_{n=0}^{N-1}\left(\frac{1}{2}+n\right)\nu|n\rangle\langle n|, \tag{11}\]

with the system size \(N\) (the number of motional sideband levels). We introduce two independent lasers to couple the internal states to the external motional sideband states. The couplings are described by

\[\hat{V}_{j}=\Omega_{j}\left(|g\rangle\langle e|+|e\rangle\langle g|\right)\cos\left(k_{j}\hat{x}_{S}-\omega_{j}t+\phi_{j}\right), \tag{12}\]

where \(k_{j}\), \(\omega_{j}\), \(\phi_{j}\), and \(\Omega_{j}\) correspond to the wave vector, frequency, initial phase, and Rabi frequency of laser \(j\), respectively. Here, the subscripts \(j=r,b\) refer to the red-detuned laser (\(\omega_{r}<\omega_{0}\)) and the blue-detuned laser (\(\omega_{b}>\omega_{0}\)), respectively. We apply the rotating wave approximation to the system in the rotating frame \(\hat{U}_{R}=e^{-i(H_{i}+H_{e})t}\), resulting in the Hamiltonian of the trapped ion

\[\hat{H}_{R}^{\rm RWA}=\sum_{j=r,b}\frac{\Omega_{j}}{2}e^{-i[k_{j}\hat{x}_{R}-(\omega_{j}-\omega_{0})t+\phi_{j}]}|g\rangle\langle e|+{\rm h.c.}, \tag{13}\]

with \(\hat{x}_{R}=\hat{U}_{R}^{\dagger}\hat{x}_{S}\hat{U}_{R}\).

Figure 5: (Color online) (a) Overlap of the left eigenmode and the right eigenmode for different system sizes with \(\sqrt{J_{R}/J_{L}}=0.8\). (b) Relaxation time for different system sizes. The solid gray lines give the reference size scaling \(N^{1/5}\). Other parameters are the same as in Fig. 3.

The system enters the Lamb-Dicke regime when the spatial extension of the ion, \(x_{0}=\sqrt{1/2M\nu}\) (\(M\) is the mass of the ion), is small compared to the wavelengths of all the applied lasers. In this regime, the recoil energies of the lasers have a negligible impact on the trap frequency \(\nu\), and the Lamb-Dicke parameters satisfy \(\eta_{j}=k_{j}x_{0}\ll 1\) (\(j=r,b\)). We can then expand Eq. (13) in terms of \(\eta_{j}\) to obtain the Hamiltonian

\[\hat{H}_{R}^{\mathrm{LD}}=\sum_{n=0}\frac{\sqrt{n+1}}{2}\left[\eta_{r}\Omega_{r}e^{i(\delta_{r}t-\phi_{r})}|g,n+1\rangle\langle e,n|+\mathrm{h.c.}\right]+\sum_{n=0}\frac{\sqrt{n+1}}{2}\left[\eta_{b}\Omega_{b}e^{i(\delta_{b}t-\phi_{b})}|g,n\rangle\langle e,n+1|+\mathrm{h.c.}\right], \tag{14}\]

where \(\delta_{r}=\omega_{r}-(\omega_{0}-\nu)\) and \(\delta_{b}=\omega_{b}-(\omega_{0}+\nu)\) are the red and blue detunings of the lasers, respectively, and satisfy \(\delta_{r,b}\ll\nu\). The spontaneous decay of the excited sideband state \(|e,n\rangle\) to the ground sideband state \(|g,n\rangle\) with decay rate \(\gamma\) can be described by the Lindblad operators

\[\hat{L}_{n}=\sqrt{\gamma}|g,n\rangle\langle e,n|. \tag{15}\]

This leads to the Lindblad master equation for the system,

\[\dot{\hat{\rho}}_{t}=-i\left[\hat{H}_{R}^{\mathrm{LD}},\hat{\rho}_{t}\right]+\sum_{n=0}\mathcal{D}[\hat{L}_{n}]\hat{\rho}_{t}, \tag{16}\]

where the Lindblad super-operator is \(\mathcal{D}[\hat{\mathcal{A}}]\hat{\rho}_{t}=\hat{\mathcal{A}}\hat{\rho}_{t}\hat{\mathcal{A}}^{\dagger}-\frac{1}{2}\{\hat{\mathcal{A}}^{\dagger}\hat{\mathcal{A}},\hat{\rho}_{t}\}\). Our discussion is based on resolved motional sidebands, which requires the system to work in the Lamb-Dicke regime and to satisfy the energy scale relation \(\gamma\ll\nu\).
Furthermore, we are interested in the weak sideband-coupling regime given by \(\eta_{j}\Omega_{j}\ll\gamma\) [26]. Therefore, as shown in Fig. 6(b), the trapped ion immediately decays to \(|g,n-1\rangle\) following the red-sideband hopping from \(|g,n\rangle\) to \(|e,n-1\rangle\). Exploiting this fact, we can adiabatically eliminate the unstable excited sideband states \(|e,n-1\rangle\) and obtain an effective unidirectional hopping from \(|g,n\rangle\) to \(|g,n-1\rangle\) with strength \(nJ_{r}\). These processes form a dissipative cascade, cooling the system to the ground state \(|g,0\rangle\). Figure 6(c) illustrates the analogous processes for the blue-sideband hoppings, which result in an effective unidirectional hopping from \(|g,n-1\rangle\) to \(|g,n\rangle\) with strength \(nJ_{b}\) and construct a gain cascade, heating the system to ground sideband states with higher energy. Then, as depicted in Fig. 6(d), the effective non-Hermitian model for the ground sideband states \(|g,n\rangle\) comprises a semi-infinite ladder \(|n\rangle\) with non-reciprocal blue-detuned gradient hopping \(nJ_{b}\) and red-detuned gradient hopping \(nJ_{r}\).

Figure 6: (Color online) Illustration of the proposed non-Hermitian model. (a) The motional sideband structure of a trapped-ion system encoded with two internal states \(|g\rangle\) and \(|e\rangle\). \(\Delta\) and \(\nu\) are the level spacings of the internal states and the motional sideband states, respectively. \(\delta_{r,b}\), \(\eta_{r,b}\), and \(\Omega_{r,b}\) are respectively the detunings, Lamb-Dicke parameters, and Rabi frequencies of the independent red-detuned and blue-detuned lasers. \(\gamma\) is the decay rate of the excited internal state \(|e\rangle\). (b) and (c) show the adiabatic elimination of the excited state \(|e,n-1\rangle\) via the red-detuned laser and of the excited state \(|e,n\rangle\) via the blue-detuned laser. \(nJ_{r,b}\) is the effective hopping strength. (d) The effective non-Hermitian model based on the internal ground state \(|g\rangle\), with non-reciprocal gradient hoppings \(nJ_{r,b}\).

According to the spirit of the adiabatic elimination method [27], the effective master equation in the Schrödinger picture for this model can be written as (see Appendix A for details)

\[\dot{\hat{\rho}}=-i\left[\hat{H}_{\text{eff}},\hat{\rho}\right]+\sum_{n=1}\left(\mathcal{D}[\hat{L}_{n,r}]\hat{\rho}+\mathcal{D}[\hat{L}_{n,b}]\hat{\rho}\right), \tag{17}\]

where \(\hat{L}_{n,r(b)}\) is the effective Lindblad operator, given by

\[\hat{L}_{n,r}=\sqrt{nJ_{r}}|n-1\rangle\langle n|,\ \ \hat{L}_{n,b}=\sqrt{nJ_{b}}|n\rangle\langle n-1|, \tag{18}\]

with the effective hopping strengths

\[J_{r(b)}=\frac{\gamma|\Omega_{r(b)}|^{2}\eta_{r(b)}^{2}}{4\delta_{r(b)}^{2}+\gamma^{2}}. \tag{19}\]

The effective Hamiltonian reads (with the constant terms removed)

\[\hat{H}_{\text{eff}}=\hat{U}_{R}\hat{H}_{R}^{\text{LD}}\hat{U}_{R}^{\dagger}=\sum_{n=0}n(E_{r}+E_{b}+\nu)|n\rangle\langle n|, \tag{20}\]

where \(E_{r(b)}=\delta_{r(b)}|\Omega_{r(b)}|^{2}\eta_{r(b)}^{2}/(4\delta_{r(b)}^{2}+\gamma^{2})\) is the energy shift induced by the lasers. We have thus derived, in the trapped-ion system, the non-Hermitian model described by Eq. (17). In this model, the hoppings between different sideband levels are governed by non-Hermitian quantum jump operators, and the non-reciprocal hopping strengths are achieved by adjusting parameters such as the Rabi frequencies of the lasers.
This non-Hermitian model is useful for describing sideband cooling and related sideband phonon excitation effects [25; 26; 28; 29; 30].

## Appendix A Derivation of the effective master equation in Eq. (17)

In this section, we derive the effective master equation shown in Eq. (17) of the main text using the effective operator formalism for open quantum systems [27]. Our system consists of two distinct subspaces, i.e., \(|g,n\rangle\) and \(|e,n\rangle\). In the rotating frame \(\hat{U}_{R}\), the Hamiltonian only contains the perturbative coupling between these two subspaces. We first rewrite Eq. (14) as

\[\hat{H}_{R}^{\text{LD}}=\sum_{j=r,b}\sum_{n=0}\hat{V}_{+}^{(n,j)}(t)+\text{h.c.}, \tag{S1}\]

where \(\hat{V}_{+}^{(n,j)}(t)=\hat{v}_{+}^{(n,j)}e^{-i\delta_{j}t}\) is a time-dependent perturbative field that couples the ground-state subspace to the excited-state subspace. Each oscillator state \(|n\rangle\) is coupled by two laser fields, i.e., a red-detuned laser and a blue-detuned laser, labeled as \(j=r,b\):

\[\hat{v}_{+,r}^{(n)}=\sqrt{n+1}\,\frac{\Omega_{r}}{2}\eta_{r}e^{i\phi_{r}}|e,n\rangle\langle g,n+1|, \tag{S2}\]
\[\hat{v}_{+,b}^{(n)}=\sqrt{n+1}\,\frac{\Omega_{b}}{2}\eta_{b}e^{i\phi_{b}}|e,n+1\rangle\langle g,n|. \tag{S3}\]

Here we consider the Lindblad operators in the rotating frame \(\hat{U}_{R}\) as

\[\hat{L}_{n,R}=\sqrt{\gamma}e^{-i\omega_{0}t}|g,n\rangle\langle e,n|. \tag{S4}\]

Then we can perform the adiabatic elimination to arrive at an effective master equation for the subspace \(\{|g,n\rangle\}\),

\[\dot{\hat{\rho}}_{R}=-i\left[\hat{H}_{R,\text{eff}},\hat{\rho}_{R}\right]+\sum_{n=1}\mathcal{D}[\hat{L}_{n,R,\text{eff}}]\hat{\rho}_{R}, \tag{S5}\]

where the effective Hamiltonian in the rotating frame \(\hat{U}_{R}\) is given by

\[\hat{H}_{R,\text{eff}}=-\frac{1}{2}\left[\hat{V}_{-}(t)\sum_{j=r,b}\sum_{n=0}(\hat{H}_{\text{NH}}^{(j)})^{-1}\hat{V}_{+}^{(n,j)}(t)+\text{h.c.}\right], \tag{S6}\]

with the transition operator \(\hat{V}_{-}(t)=\sum_{j=r,b}\sum_{n=0}\hat{V}_{-}^{(n,j)}(t)\), which describes the effective transition process from \(|g,n\rangle\) to \(|e,n\rangle\) and back to \(|g,n\rangle\). The strength of this effective transition is determined by the propagator

\[(\hat{H}_{\text{NH}}^{(j)})^{-1}=\sum_{m=0}|e,m\rangle\frac{1}{-\frac{i}{2}\hat{L}_{m}^{\dagger}\hat{L}_{m}-\delta_{j}}\langle e,m|=\sum_{m=0}|e,m\rangle\frac{1}{-\frac{i}{2}\gamma-\delta_{j}}\langle e,m|. \tag{S7}\]

We can then straightforwardly obtain the following effective Hamiltonian

\[\hat{H}_{R,\text{eff}}=\sum_{n=0}\left[nE_{r}+(n+1)E_{b}\right]|g,n\rangle\langle g,n|+\sum_{n=0}\left[J_{n,n+2}e^{-\text{i}(\delta_{b}-\delta_{r})t}|g,n+2\rangle\langle g,n|+\text{h.c.}\right], \tag{S8}\]

where

\[E_{r(b)}=|\Omega_{r(b)}|^{2}\eta_{r(b)}^{2}\frac{\delta_{r(b)}}{4\delta_{r(b)}^{2}+\gamma^{2}}, \tag{S9}\]
\[J_{n,n+2}=e^{\text{i}(\phi_{b}-\phi_{r})}\sqrt{(n+1)(n+2)}\,\Omega_{r}\Omega_{b}\eta_{r}\eta_{b}\times\frac{(\delta_{r}+\delta_{b})/2}{4\delta_{r}\delta_{b}+\gamma^{2}+2\text{i}\gamma(\delta_{r}-\delta_{b})}. \tag{S10}\]

The first term in Eq. (S8) is the on-site energy shift induced by the two lasers. The second term corresponds to long-range coherent hoppings comprising one blue-detuned and one red-detuned transition process. The dissipative part of Eq.
(S5) is determined by the effective Lindblad operators in the rotating frame \(\hat{U}_{R}\), which are given by \[\hat{L}_{n,R,\text{eff}}= \hat{L}_{n,R}\sum_{j=r,b}\sum_{m=0}(\hat{H}_{\text{NH}}^{(j)})^{- 1}\hat{V}_{+}^{(m,j)}(t),\] (S11) \[= \sqrt{(n+1)}j_{r}e^{-i(\delta_{r}+\omega_{0})t}|g,n\rangle \langle g,n+1|\] \[+\sqrt{n}j_{b}e^{-i(\delta_{b}+\omega_{0})t}|g,n\rangle\langle g,n-1|,\] where \[j_{r(b)}= \frac{\sqrt{\gamma}}{-i\gamma/2-\delta_{r(b)}}\frac{\eta_{r(b)} \Omega_{r(b)}e^{i\phi_{r(b)}}}{2}.\] (S12) When we expand the Lindblad super-operator \(\mathcal{D}[\hat{L}_{n,R,\text{eff}}]\hat{\rho}_{R}=\hat{L}_{n,R,\text{eff}}\hat{\rho}_{R}\hat{L}_{n,R,\text{eff}}^{\dagger}-\frac{1}{2}\{\hat{L}_{n,R,\text{eff}}^{\dagger}\hat{L}_{n,R,\text{eff}},\hat{\rho}_{R}\}\) using Eq. (S11), we obtain time-independent terms proportional to \(|j_{b(r)}|^{2}\) and time-dependent cross terms (\(\sim e^{-i(\delta_{r}-\delta_{b})t}\)) of \(j_{r}\) and \(j_{b}\). Here we can neglect those fast-oscillating cross terms. The long-range coherent hoppings in Eq. (S8) can also be neglected by the same argument. Then we define the hopping strength as \[J_{r(b)}=|j_{r(b)}|^{2}=\frac{\gamma|\Omega_{r(b)}|^{2}\eta_{r(b)}^{2}}{4\delta _{r(b)}^{2}+\gamma^{2}},\] (S13) and obtain the effective Lindblad operators in the Schrödinger picture as \[L_{n,r}= \sqrt{nJ_{r}}|g,n-1\rangle\langle g,n|,\] (S14) \[L_{n,b}= \sqrt{nJ_{b}}|g,n\rangle\langle g,n-1|.\] (S15)
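As a quick consistency check, the step from Eq. (S12) to Eq. (S13) can be verified symbolically; the SymPy snippet below uses our own symbol names and declares all parameters real.

```python
# Verify |j|^2 from Eq. (S12) against the rate in Eq. (S13) with SymPy.
import sympy as sp

gamma, delta, eta, Omega = sp.symbols("gamma delta eta Omega", positive=True)
phi = sp.symbols("phi", real=True)
j = sp.sqrt(gamma) / (-sp.I * gamma / 2 - delta) * eta * Omega * sp.exp(sp.I * phi) / 2
J = sp.simplify(j * sp.conjugate(j))
print(J)  # -> Omega**2*eta**2*gamma/(4*delta**2 + gamma**2), i.e. Eq. (S13)
```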
2301.05506
On the feasibility of attacking Thai LPR systems with adversarial examples
Recent advances in deep neural networks (DNNs) have significantly enhanced the capabilities of optical character recognition (OCR) technology, enabling its adoption in a wide range of real-world applications. Despite this success, DNN-based OCR is shown to be vulnerable to adversarial attacks, in which the adversary can influence the DNN model's prediction by carefully manipulating input to the model. Prior work has demonstrated the security impacts of adversarial attacks on various OCR languages. However, to date, no studies have been conducted and evaluated on an OCR system tailored specifically for the Thai language. To bridge this gap, this work presents a feasibility study of performing adversarial attacks on a specific Thai OCR application -- Thai License Plate Recognition (LPR). Moreover, we propose a new type of adversarial attack based on the \emph{semi-targeted} scenario and show that this scenario is highly realistic in LPR applications. Our experimental results show the feasibility of our attacks as they can be performed on a commodity desktop computer with over 90% attack success rate.
Chissanupong Jiamsuchon, Jakapan Suaboot, Norrathep Rattanavipanon
2023-01-13T12:17:01Z
http://arxiv.org/abs/2301.05506v1
# On the feasibility of attacking Thai LPR systems with adversarial examples ###### Abstract Recent advances in deep neural networks (DNNs) have significantly enhanced the capabilities of optical character recognition (OCR) technology, enabling its adoption in a wide range of real-world applications. Despite this success, DNN-based OCR is shown to be vulnerable to adversarial attacks, in which the adversary can influence the DNN model's prediction by carefully manipulating input to the model. Prior work has demonstrated the security impacts of adversarial attacks on various OCR languages. However, to date, no studies have been conducted and evaluated on an OCR system tailored specifically for the Thai language. To bridge this gap, this work presents a feasibility study of performing adversarial attacks on a specific Thai OCR application - Thai License Plate Recognition (LPR). Moreover, we propose a new type of adversarial attack based on the _semi-targeted_ scenario and show that this scenario is highly realistic in LPR applications. Our experimental results show the feasibility of our attacks as they can be performed on a commodity desktop computer with over \(90\)% attack success rate. adversarial attacks, Thai OCR systems, Thai LPR systems, machine learning security ## I Introduction Optical character recognition (OCR) is a technology to recognize characters from printed or handwritten images. In the last few decades, OCR has been adopted in many real-world applications mainly due to the rise of deep neural network (DNN) development. With DNNs, OCR can now perform the character recognition task at high speed, enabling its use in many mission-critical and time-sensitive applications. For instance, an OCR system can be deployed in an airport to recognize passport information automatically [1], and modern license plate recognition systems employed by law enforcement rely heavily on OCR in their core engine [9]. Besides the timing performance, the security of OCR is also paramount to the underlying application. Unfortunately, OCR inherits the same security weakness as DNNs since it is also vulnerable to an attack based on _adversarial examples_[8]. The aim of this attack is to confuse the DNN model, causing it to misclassify a specific input image. It is typically carried out by introducing subtle but deliberate changes to the input. These changes can be in the form of noise perturbation or small pixel images that are carefully crafted in such a way that they do not look suspicious to the human eye. As OCR has become widely adopted, it presents more incentives for an adversary to use this type of attack for his/her own benefit. This attack, for instance, can cause the OCR model to misinterpret passport data, license plate numbers, or financial documents, resulting in financial damages or crime detection avoidance. A number of prior works explore different techniques to generate adversarial examples in black-box [10] and white-box [7] environments, in targeted [15] and untargeted [11] scenarios, and with different OCR languages, e.g., English [14], Chinese [5], and Arabic [2]. Despite this rich literature, to the best of our knowledge, there has been no prior work to demonstrate the attack success on an OCR system based on the _Thai_ language. Due to the idiosyncratic features of the Thai alphabet (e.g., some letters carry an upper or lower symbol), it remains unclear whether these existing attack techniques are still effective for Thai OCR systems.
To this end, we set out to answer this question by investigating whether it is feasible to generate adversarial examples that can be used to fool the state-of-the-art Thai OCR system. To achieve this goal, we turn our attack focus to a specific but widely used OCR application - the License Plate Recognition (LPR) system. In particular, our attack targets an LPR system based on Google Tesseract [13] with Thai language support. Contrary to the previous works in [15] or [11], we consider our LPR attack scenario _semi-targeted_, in which a successful adversarial example can mislead the LPR model to output any element in _the set of adversary-chosen incorrect classes_ (e.g., a set of valid license numbers other than the true number). This is distinct from the targeted scenario, which aims to misguide the model to return a _particular_ adversary-chosen incorrect class (e.g., a specific fake license number), or the untargeted scenario, which tricks the model into predicting _any_ of the incorrect classes (e.g., any sequence of Thai characters/digits other than the true license number). We also propose a transformation that converts the existing targeted attack into the semi-targeted attack considered in this work. Finally, we perform implementation experiments to evaluate our proposed LPR attack. The results indicate the realism of our attack as it obtains a high attack success rate and requires only a reasonable amount of resources (i.e., runtime and RAM usage) that can feasibly be acquired from a regular desktop computer. Overall, we believe this work represents the first step towards raising awareness of the threats posed to Thai OCR systems and eventually towards securing these systems against adversarial examples. The contribution of our work can be summarized as follows: 1. We present a systematic approach to demonstrate the feasibility of constructing adversarial examples to fool the state-of-the-art Thai OCR-based LPR system. 2. We explore an alternative attack scenario, called semi-targeted, and show it is highly realistic for attacking LPR applications. 3. Our evaluation results show the feasibility of our attack; it can achieve up to 91% attack success rate and can be carried out realistically using only a commodity computer. ## II Background and Related Work ### _License Plate Recognition (LPR)_ LPR is the process that automatically reads and extracts vehicle license plate information from an image. It typically consists of three steps: localization, segmentation, and identification. In the first step, an LPR system scans through the entire image to detect and locate a license plate. Then, the segmentation step extracts the regions from the detected license plate where each region contains exactly a single character. Finally, LPR leverages OCR technology to classify and recognize each character and outputs the digitized license information in the identification step. While numerous OCR techniques have been proposed for LPR systems, the most common one used by modern LPR systems is based on DNNs. For example, Tesseract [13] is the state-of-the-art DNN-based OCR engine developed by Google and has been used in many LPR systems [12]. The current version of Tesseract uses LSTM DNNs and supports more than 50 languages, including Thai. Besides LPR, Tesseract has been adopted to recognize Thai characters in other settings, e.g., Thai document digitization [6]. ### _Adversarial Attacks_ An adversarial attack was first introduced and investigated by Szegedy et al. in 2013 [15].
They show that by optimizing the DNN's prediction error, an adversary can generate a small perturbation that can be applied to an input image in such a way that the resulting image (called _an adversarial example_) is misclassified by the DNN model. The work in [15] has inspired many subsequent studies to improve upon, and/or propose different settings for, adversarial attacks. Techniques in adversarial attacks can often be categorized using two orthogonal dimensions - adversarial knowledge and goal: 1. **Adversarial knowledge** can be further divided into white-box and black-box environments. White-box attacks assume a powerful adversary that has complete knowledge of the DNN model's architecture, including parameters, weight values, and/or its training dataset. Black-box attacks, on the other hand, consider a weaker adversary that can only query the DNN model but has no access to the model's internal information. 2. **Adversarial goal** is often classified as either targeted or untargeted scenarios. Targeted attacks aim to deceive the model into classifying an adversarial example as a targeted adversarial class, whereas an untargeted attack misleads the classification to an arbitrary class other than the correct one. Prior works have explored various techniques for adversarial example generation targeting OCR systems with: (i) black-box [3] and white-box [14] environments, (ii) targeted [5] and untargeted [16] scenarios, and (iii) English [14], Chinese [5], and Arabic [2] languages. In this work, we aim to assess the feasibility of performing an adversarial attack on Thai LPR systems in a realistic black-box and semi-targeted adversarial setting. ## III Adversary's Goal & Threat Model We consider a realistic adversary that aims to trick an automatic LPR system into misclassifying a specific potentially illegal license plate as a different but still valid (i.e., well-formed) license number. The adversary is assumed to have oracle access to the black-box LPR model, i.e., he/she can query for the model's prediction output on any given image input. However, as the model is usually proprietary and confidential, he/she has no access to the model's internal parameters. Figure 1 shows a scenario for performing an adversarial attack on a Thai LPR system. The attack is carried out by generating an adversarial example from an illegal license plate. Then, it is considered a successful attack if the following requirements hold: **[R1]** The generated adversarial example looks similar to the illegal license plate input to human eyes. This is to ensure that only a small change needs to be applied on the physical license plate, and as a result, the modified license plate can still fool the LPR system without being noticed by humans. **[R2]** The adversarial example's prediction class is different from its true class but still considered a _valid_ license number. The rationale behind this requirement is that to better evade detection, the adversary wants to avoid the DNN model returning an invalid and thus suspicious class, e.g., a malformed/unassigned license number, since it can easily be detected in software or by police officers. Without loss of generality, we simplify **[R2]** by considering a license number _valid_ if it consists of two Thai consonants followed by a four-digit number. For example, **u**n3456 is valid but **u**n1234 or **u**n123 are not. In practice, **[R2]** can be satisfied by using a database of legal license plate numbers.
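To make the simplified **[R2]** rule concrete, a minimal checker might look as follows. The Unicode range for Thai consonants (U+0E01 to U+0E2E) is our assumption about how "two Thai consonants" would be encoded; a real deployment would consult a database of issued plates instead.

```python
# A sketch of the simplified [R2] rule: two Thai consonants followed by
# exactly four digits; a deployed system would query a plate database.
import re

PLATE_PATTERN = re.compile(r"[\u0E01-\u0E2E]{2}[0-9]{4}")

def is_valid_plate(number: str) -> bool:
    # fullmatch() rejects strings with extra leading/trailing characters
    return PLATE_PATTERN.fullmatch(number) is not None
```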
Due to **[R2]**, it becomes clear that the traditional targeted and untargeted scenarios are not directly suitable in this attack setting. Specifically, the untargeted scenario could return an invalid number (e.g., **u**n123), violating **[R2]**; whereas the targeted scenario can be too restrictive. Hence, in this work, we introduce a relaxed concept of the targeted scenario, called **semi-targeted**, which accepts an adversarial example if its prediction class falls into a specific adversary-chosen set (as opposed to a specific class in the targeted scenario), e.g., a set of valid license numbers in the LPR application. ## IV Methodology ### _Overview_ Our methodology for attacking Thai OCR systems consists of two phases, as shown in Figure 2. The first phase performs the black-box semi-targeted adversarial attack on an input license plate image and outputs an adversarial example. The second phase takes the adversarial example as input and evaluates whether this adversarial example constitutes a successful attack or not. We now discuss each phase in detail. ### _Phase-1: Black-box Semi-targeted Adversarial Attack_ As illustrated in Figure 3, our black-box semi-targeted attack requires three input parameters: (1) an original image - _img_; (2) a set of valid classes - \(s\); and (3) the number of candidates to be considered in this attack - \(n\). In the context of LPR, _img_ represents a license plate image; \(s\) corresponds to a set of valid license numbers, where, in this work, \(s\) is set to common license patterns in Thailand with two Thai consonants followed by a four-digit number. The attack starts in Step ①, which generates \(n\) classes from the given input with a constraint that all of these \(n\) classes must: (1) be non-repetitive and (2) contain at least one Thai consonant different from the _img_ class. Then, in Step ②, we apply the state-of-the-art black-box targeted attack for each individual class, resulting in \(n\) candidates for adversarial examples. Finally, in Step ③, we display these \(n\) candidates to the user, ask the user to select the one that is most similar to _img_, and output it as the adversarial example. Note that this phase will always yield an adversarial example satisfying **[R2]**. This is because the targeted attack in Step ② guarantees to produce an adversarial example that will be classified as the targeted class \(class_{i}\), which, by construction in Step ①, is valid (i.e., \(class_{i}\in s\)) and different from the _img_ class. ### _Phase-2: Adversarial Example Assessment_ To assess the generated adversarial example, we recruit participants from our university, present them with the adversarial example image, and interview them with two questions: **Q1:** Are all characters legible in the presented image? **Q2:** What license number can you read from the image?

Figure 1: Adversarial attacks on Thai LPR systems

Figure 2: Methodology for attacking Thai OCR systems

The attack is considered successful if the participant responds "yes" to the first question and the answer from the second question matches the license number in _img_. If any of these conditions is not fulfilled, we return "Attack failure". As a result of these two carefully-crafted questions, the adversarial example can only pass this phase when still resembling _img_, thus satisfying **[R1]**.
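A minimal sketch of Phase-1 in code form is given below. It assumes some black-box targeted attack is available as a callable (the paper's experiments use HopSkipJumpAttack), simplifies the class generation in Step ① to uniform sampling, and replaces the human selection in Step ③ with an L2-distance proxy.

```python
# Phase-1 sketch: a semi-targeted attack built from a targeted attack oracle.
# `targeted_attack(img, target)` is an assumed callable returning an
# adversarial image that the black-box model classifies as `target`.
import random

import numpy as np

def semi_targeted_attack(img, true_label, valid_classes, n, targeted_attack):
    # Step 1: draw n distinct valid target classes, all different from the truth
    targets = random.sample([c for c in valid_classes if c != true_label], n)
    # Step 2: run the black-box targeted attack once per candidate class
    candidates = [targeted_attack(img, t) for t in targets]
    # Step 3: the paper asks a user to pick the candidate most similar to img;
    # an L2 distance to the original is a crude automated stand-in for that
    base = np.asarray(img, dtype=float)
    return min(candidates,
               key=lambda adv: np.linalg.norm(np.asarray(adv, dtype=float) - base))
```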
## V Feasibility Results ### _Experimental Setup_ All of our experiments were conducted on an Ubuntu 20.04 machine with an Intel i7-11700K CPU @ 3.60 GHz. To measure the attack success rate, we performed our attack on 100 unique software-generated Thai license plate images. The OCR system used in our attack was based on Tesseract v5.2.0 and ran with the following parameters: psm=10, oem=1. Lastly, we used HopSkipJumpAttack [4] as the underlying black-box targeted attack algorithm; for each sample, we ran this attack until it reached 300 iterations. **Ethics.** Our experiments were conducted using synthetic, instead of real, license plates for ethical reasons. This work was conducted solely for academic purposes and we do not condone using it for real-world attacks. Further, we did not gather any personally identifiable information during our interviews with participants. ### _Experimental Results_ **Attack Success Rate (ASR).** Figure 4 shows the ASR of our attack while varying \(n\). The ASR improved drastically as we moved from the targeted attack (\(n=1\)) to the semi-targeted attack (\(n>1\)), with \(ASR=91\%\) for \(n=10\), compared to \(ASR=70\%\) for \(n=1\). This highlights the effectiveness of the semi-targeted scenario for attacking Thai OCR systems. We present a selection of generated adversarial examples for various \(n\) values in Table I, where Suc. refers to "Attack success". **Attack Resource Consumption.** In terms of resource consumption, generating adversarial examples requires a moderate amount of RAM (\(\sim 1.8-2\) GB) on our machine, independent of the \(n\) value. On the other hand, the runtime for adversarial example generation depends linearly on \(n\), as shown in Figure 4. For \(n=10\), the attack takes less than 2 hours to complete, which we consider to be reasonable because it only needs to be done once for any given license plate. ## VI Conclusion This paper presents the first feasibility study of performing adversarial attacks on Thai OCR-based LPR systems. In addition, it proposes a new type of attack scenario, called _semi-targeted_, and argues that this scenario is more practical for attacking LPR systems than the traditional targeted and untargeted scenarios. Our experiments demonstrate the feasibility of our attack as it achieves a high success rate and can be carried out using only a commodity computer.

Figure 4: Attack success rate and execution time

Figure 3: Black-box semi-targeted attacks
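For readers reproducing the setup above, the Tesseract oracle can be queried through pytesseract, Tesseract's Python wrapper; the file path and the installed Thai language pack are assumptions about the local environment.

```python
# Query Tesseract as the experiments describe: --psm 10 treats the input
# as a single character, --oem 1 selects the LSTM engine.
import pytesseract
from PIL import Image

def tesseract_oracle(image_path: str) -> str:
    img = Image.open(image_path)
    return pytesseract.image_to_string(img, lang="tha",
                                       config="--psm 10 --oem 1").strip()
```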
2310.13425
An overview of optimization approaches for scheduling and rostering resources in public transportation
Public transport is vital for meeting people's mobility needs. Providers need to plan their services well to offer high quality and low cost. Optimized planning can benefit providers, customers, and municipalities. The planning process for public transport involves various decision problems, such as vehicle and crew planning. These problems are usually solved by providers. More and more studies suggest that integrated solution approaches for these problems are better than sequential and iterative ones. Integrated optimization of multiple planning phases allows more flexibility in planning, which can reduce operational costs and improve service quality. This paper reviews solution approaches for integrated optimization using operations research techniques for the vehicle scheduling, crew scheduling, and crew rostering problems. It also covers some relevant related approaches from other industries. The paper analyzes existing optimization approaches based on different aspects such as mathematical modeling, optimization objective and method, and data source and scope. Moreover, the paper examines the problem dimensions that are often required in practical applications. The paper identifies some directions for future research, such as focusing more on objectives other than cost-minimization like robustness, schedule regularity, or fairness.
Lucas Mertens, Lena-Antonia Wolbeck, David Rößler, Lin Xie, Natalia Kliewer
2023-10-20T11:27:52Z
http://arxiv.org/abs/2310.13425v1
# An overview of optimization approaches for scheduling and rostering resources in public transportation ###### Abstract Public transport is an essential component in satisfying people's growing need for mobility. Thus, providers are required to organize their services well in order to meet the high demand for service quality at low operational costs. In practice, optimized planning can lead to considerable improvements for providers, customers, and municipalities. The planning process related to public transport consists of various decision problems, of which the providers are usually responsible for vehicle and crew planning. There is a growing body of literature that recognizes the shift from sequential and iterative to integrated solution approaches for these problems. Integrated optimization of several planning phases enables higher degrees of freedom in planning, which allows for operational cost savings and increased service quality. This paper provides an overview of solution approaches for integrated optimization based on operations research techniques for the vehicle scheduling, crew scheduling, and crew rostering problems, extended by a selected number of relevant related approaches from other industries. Therefore, existing optimization approaches are analyzed with regard to different aspects such as mathematical modeling, optimization objective, and method, as well as the source and scope of the data used for evaluation. Additionally, we analyze the problem dimensions that are usually required in practical applications. In doing so, we are able to point out directions for future research, such as a stronger focus on objectives besides cost-minimization like robustness, schedule regularity, or fairness. ## 1 Introduction Urbanization in developed and developing countries leads to quickly growing needs for urban mobility. Cities and municipalities face severe challenges in providing the necessary infrastructure to satisfy these needs. Individual motorized traffic is part of the problem rather than of the solution: Traffic jams leading to adverse effects such as long commuting times, frequent accidents, and air pollution are just some of the issues that arise from cities overflowing with cars (Schrank et al., 2019). An efficient public mass transportation system can remedy these problems (Ibarra-Rojas et al., 2015). In addition to the advantage of accommodating the growing mobility demand at lower external effects and costs, public mass transport is safer as well as more resource-efficient than individual transport (Litman, 2016). Traditionally, public transport used to be provided solely by the public sector. However, it has been deregulated in many countries. Nowadays, the transport services offered by private companies are substantial, and competition in this area is increasing (Hrelja et al., 2018). For a public transport provider, an effective and efficient operation is crucial to face the trade-off between operating costs and service quality. Thus, in each phase of the planning process, this trade-off is considered. Moreover, further goals like schedule robustness, regularity, travel satisfaction, and fairness for employees have gained more attention in recent years (Abenoza et al., 2017; Borndorfer et al., 2015; Cats, 2016). Since the underlying decision problems are not trivial to solve (to optimality), public transport planning has been extensively studied in the literature.
Usually, the public transport planning process is divided into planning steps that have to be performed subsequently. However, recent advances in optimization methods allow a gradual integration of the optimization subproblems arising from subsequent planning steps. While better network design, line planning, and timetabling affect both customer satisfaction and cost structure, vehicle scheduling, crew scheduling, and crew rostering mainly influence the provider's profit as well as operational timeliness. Superior vehicle and crew schedules lead to lower investments due to fewer required vehicles and personnel and lower variable costs due to decreased deadheading distances and improved duty allocation. Furthermore, crew rostering impacts costs and employee satisfaction alike, as crew members desire a fair distribution of duties and workload. Integrating two or three subproblems increases the degree of freedom for these decision problems, and thus, schedule and roster quality may improve. Other industries like railway or aircraft face similar challenges; therefore, their solution approaches might be transferable to public transport planning. The last decade has witnessed an enormous increase in publications on integrated optimization approaches for public transport planning problems. In 2015, Ibarra-Rojas et al. conducted a literature review on solution approaches for bus transport systems. In order to extend and update this overview, we will analyze the state-of-the-art approaches that follow different variants of integration and objectives in this paper. We first introduce the operational problems in public transport and point out the contribution of the sequential approach in Section 2. Second, we point out the ongoing shift from sequential to an integrated approach in Section 3. ## 2 Decision problems within the operational public transport planning process The planning process in public transport comprises various decision problems, which can be grouped according to their planning horizons (see Figure 1). On a strategic level, public transport providers plan long-term, e.g., the network design and the planning of lines (routes, frequencies). Tactical decisions, however, aim to provide timetables and to reduce the operational costs in the medium term (Ibarra-Rojas et al., 2015). In practice, such decisions are most likely made by the principal, e.g., the municipality (Huisman, 2004). We consider strategic and tactical planning decisions regarding routes, frequencies, and timetables as input for the operational planning tasks of vehicle scheduling, crew scheduling, and crew rostering. Therefore, in the scope of this paper, we assume that public transport providers focus on minimizing costs concerning vehicles and staff in the short term when operationally deciding on their transport and employee schedules (Ibarra-Rojas et al., 2015). Following this, we look at three decision problems: the Vehicle Scheduling Problem (VSP), the Crew Scheduling Problem (CSP), and the Crew Rostering Problem (CRP), which are introduced in the following. ### Vehicle scheduling problem Given a timetable with specified service trips, the VSP relates to generating an optimal vehicle schedule that covers all service trips and achieves the lowest operational costs or optimizes further objectives (Daduna & Paixao, 1995). A service trip is defined by the line it belongs to, a departure and arrival time, as well as the corresponding locations.
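As a toy illustration of the fleet-minimization core of this problem, assume a single depot, trips that all share one terminal, and no deadheads: minimizing the number of vehicles is then a minimum path cover of the trip compatibility graph, which plain bipartite matching solves. All trip times below are invented.

```python
# Toy single-depot VSP: chain compatible trips to minimize the fleet size.
import networkx as nx
from networkx.algorithms import bipartite

trips = {"t1": (480, 520), "t2": (530, 600), "t3": (540, 610), "t4": (620, 700)}
TURNAROUND = 5  # minimal layover between two trips served by one bus [min]

G = nx.Graph()
outs = [f"{t}_out" for t in trips]
G.add_nodes_from(outs, bipartite=0)
G.add_nodes_from([f"{t}_in" for t in trips], bipartite=1)
for i, (_, arrival) in trips.items():
    for j, (departure, _) in trips.items():
        if i != j and arrival + TURNAROUND <= departure:  # same bus can serve j after i
            G.add_edge(f"{i}_out", f"{j}_in")

matching = bipartite.maximum_matching(G, top_nodes=outs)
chained = sum(1 for u in matching if u.endswith("_out"))  # matched left nodes
print("vehicles needed:", len(trips) - chained)  # -> 2 (e.g., t1-t2-t4 and t3)
```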
To achieve a sequence of compatible trips, additional deadhead trips can be added to connect subsequent service trips. These deadhead trips comprise all unloaded trips, including the departure from (pull-out) and arrival at (pull-in) the depot. A solution to the VSP corresponds to a vehicle schedule consisting of vehicle blocks, each representing a feasible sequence of trips for one vehicle (Bodin & Golden, 1981). Thus, a vehicle block comprises one or several vehicle rotations starting from a depot, executing one or more service trips, and returning to a depot. Solving a VSP is not a trivial task and can vary greatly depending on practical requirements and circumstances. The fundamental VSP is characterized by a single depot, a homogeneous fleet, and the objective to minimize costs only (Daduna & Paixao, 1995). One of the first optimal solutions of such a VSP originates from Saha, 1970. However, modern VSPs have evolved to cover more complex environments. As opposed to originating from one depot only, the Multi Depot Vehicle Scheduling Problem (MDVSP) considers multiple depots as well as multiple vehicle types and vehicle type groups. This extension significantly affects how the problem is solved. Whereas the single-depot VSP is described as a polynomially solvable minimum cost flow problem, the MDVSP is considered to be NP-hard (Bertossi et al., 1987).

Figure 1: The sequential planning process in public bus transit, as illustrated in Xie, 2014.

By utilizing a linear programming approach with column generation, Lobel, 1999 exactly solve the MDVSP. Considering multiple depots as well as a heterogeneous fleet, Kliewer et al., 2006 present a Time Space Network (TSN) to efficiently model a network associated with the MDVSP. By modeling the MDVSP as a TSN, the model size can be reduced significantly. As a result, optimally solving the multicommodity min-cost flow MIP-formulation of the MDVSP for real-world instances is made possible. However, not only the underlying problem shifted to a more complex model, but also the objective itself adapted. Whereas in the beginning, the focus was primarily on cost-related objectives, other goals like schedule robustness have increasingly been considered in recent years (Kramkowski et al., 2009; Naumann et al., 2011). Different dimensions regarding constraints, such as the limited ranges of electric buses (Adler, 2014; Reuer et al., 2015), and objectives shape each VSP individually. Several modeling approaches, as well as specialized solution strategies for the VSP and its extensions, have been developed in the last decades. For an overview on vehicle scheduling and corresponding solution approaches, we refer to Bunte and Kliewer, 2009 and Pepin et al., 2009. ### Crew scheduling problem In sequential planning, the decision problem of crew scheduling arises subsequent to vehicle scheduling. The CSP (also known as driver, duty, or shift scheduling) aims at finding a daily cost-optimal duty allocation that encompasses all trips of the vehicle blocks (Borndorfer et al., 2001). These duties are not assigned to specific drivers yet. Each anonymous duty is associated with a predefined generic duty type. These heterogeneous duty types are characterized by different lengths and attributes. A duty type, e.g., considers legal requirements on working and break times as well as company-specific regulations such as the kind of qualifications required (Freling et al., 2004).
Due to the vast number of possible solutions based on the predefined duty types covering the vehicle schedule's trips, solving the CSP is considered to be NP-hard (Fischetti et al., 1987). The complexity of finding a solution to the CSP correlates with the number of trips and especially with the quantity and diversity of the generic duty types. Depending on practical requirements, each duty type at least considers legal, union-related, and company-defined regulations. These characteristics vary significantly regarding each problem specification. In solving the CSP, it has become standard practice to split all vehicle blocks into segments according to predefined relief points (Desaulniers & Hickman, 2007). Relief points indicate locations at specific times, which allow an exchange of drivers. The tasks between two relief points represent the smallest unit of work that has to be covered by the same driver and are called a duty element. Combining consecutive duty elements and adding sign-on and sign-off tasks results in a possible shift, called a piece of work. Final duties are composed of one or more pieces of work, where usually two pieces of work are separated by a break (Desaulniers & Hickman, 2007). Since the emerging duties are not associated with specific drivers yet, cost criteria commonly shape the objective of solving the CSP (Ernst et al., 2004; Huisman, 2004). Depending on the practical application, both minimizing the total number of daily duties as well as minimizing the total required work time are achievable tasks. Whereas the former determines the minimum number of employees required on a daily basis, the latter aims at an optimal duty structure by avoiding unnecessary breaks or waiting times. Utilizing fixed costs for duties and an hourly rate for the working time, these objectives are usually transformed into one that minimizes the total costs (Desaulniers & Hickman, 2007). Commonly, the CSP is solved with a column generation approach combined with Lagrangian or LP relaxation of a set covering or set partitioning formulation. Solution approaches for crew scheduling are reviewed in detail in Huisman, 2004 and Ernst et al., 2004. ### Crew rostering problem Crew rostering (or driver rostering) is concerned with assigning anonymized duties to specific drivers. The results are individual schedules for every crew member, so-called crew rosters (Freling et al., 2004). As opposed to crew scheduling, where the foremost objective is to minimize operative costs, crew rostering takes into account crew welfare (such as balancing workload and additional individual characteristics of each crew member) as well as efficiency objectives (e.g., minimizing layovers and crew deadheading). Complementing the legal daily duty requirements, already respected within the CSP, further law and labor union rules have to be considered when solving a CRP. These additional requirements range from minimal break times between two consecutive shifts to a maximum weekly workload for a single driver. In constructing personalized schedules, two different kinds of crew rosters can be distinguished, namely cyclic and non-cyclic rosters (Xie & Suhl, 2015). The cyclic roster is the less sophisticated approach and is developed for a group of drivers with similar qualifications and preferences. A regular, repeating working pattern is established for the entirety of drivers. This pattern is constructed such that all legal requirements are met, and the workload is allocated evenly.
However, such rosters offer little individuality. Non-cyclic rosters, on the other hand, offer the possibility to develop personalized schedules for a medium to long period of time (Xie & Suhl, 2015). Depending on the extent to which individual preferences and shift requests are considered, constructing a non-cyclic pattern requires sophisticated techniques. A multicommodity network flow formulation is developed in Xie and Suhl, 2015 to deal with both cyclic and non-cyclic rostering, and in Mesquita et al., 2015 for non-cyclic rostering. In order to deal with the complexity, (meta-)heuristics are applied to solve non-cyclic rostering, such as in Xie et al., 2017 and Mesquita et al., 2015. Ernst et al., 2004 and van den Bergh et al., 2013 cover the crew rostering problem in their literature reviews and elaborate on approaches to solve the CRP utilizing both cyclic and non-cyclic rosters. As a first step towards more robustness in crew rostering, Xie et al., 2012 consider a simplified version of rostering but incorporate possible reserve shifts to cover the absences of drivers. ## 3 Partial Integration & Integrated approaches The three decision problems - more precisely VSP, CSP and CRP - have been extensively studied by scholars. Various methods have been proposed to find optimal or close-to-optimal solutions to each of these problems (Bunte & Kliewer, 2009; Ernst et al., 2004; van den Bergh et al., 2013). These problems constitute consecutive phases (Desaulniers & Hickman, 2007) within the operational public transport planning process. Thus, choosing a sequential approach to solving the entirety of these problems is straightforward. In such an approach, the output of the previous phase is used as an input for the subsequent planning problem. However, this traditionally utilized approach may not lead to a globally optimal solution. A slightly adjusted timetable, e.g., might lead to more freedom for solving the VSP and hence a lower demand for buses. Here, the gain in subsequent steps can outweigh the loss in the adjusted prior phase; when an indifferent solution of a previous step is chosen, a Pareto-efficient improvement may even be possible. As a result, iteratively solving the three sequential phases in order to leverage knowledge gained in every iteration can improve the overall solution. However, repeated executions of each phase might lead to prohibitively long run times or, due to a fixed number of iterations, to local optima. As opposed to sequential or iterative approaches, which solve each of the problems separately, integrated approaches solve the VSP, CSP, or CRP conjointly. As a result, superior solutions are attainable within acceptable computation time, even for problem instances of realistic size. We distinguish between the integration of the first two phases (VSP + CSP, in the following referred to as VCSP) and the last two phases (CSP + CRP, in the following referred to as CSRP). The highest level of integration is achieved by simultaneously considering all three decision problems (VSP + CSP + CRP, in the following referred to as VCSRP). The number of publications for integrated solution approaches is unevenly distributed. In contrast to the wide range of publications considering the VCSP, there are only three approaches for the VCSRP. The main objective in either integrated approach is usually minimizing costs - while in recent years, additional objectives such as robustness, regularity, and fairness have become increasingly important.
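Looking back at the rostering step from Section 2.3: stripped of all rules except a per-pair cost, the core of a single rostering day reduces to a linear assignment of anonymous duties to drivers, one of the standard building blocks noted again in the summary later in this paper. A toy sketch with SciPy's Hungarian solver, all costs invented:

```python
# Toy rostering core: duty-to-driver assignment via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: penalty of assigning duty j to driver i (workload, preferences, ...)
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
drivers, duties = linear_sum_assignment(cost)
for i, j in zip(drivers, duties):
    print(f"driver {i} -> duty {j}")
print("total cost:", cost[drivers, duties].sum())  # -> 5
```

Real rosters add feasibility constraints (rest times, weekly workload, days-off patterns) that take the problem well beyond a plain assignment, which is what motivates the network flow and (meta-)heuristic formulations above.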
Similar to publications covering the VSP only, there exists an evident trend toward modeling the underlying problem as a TSN instead of a Connection-Based Network (CBN). Due to the integration of the planning phases, many approaches use column generation and (meta-)heuristics (such as genetic algorithms, simulated annealing, ant colony algorithms) to solve the remaining complexity problem. More than two-thirds of the evaluated solution approaches use real data for evaluation and thus examine the applicability of the methods in practice. It is noteworthy that the majority of approaches employ combinations of solution approaches instead of individual exact or heuristic methods. Regarding methods, special attention is paid to column generation, which is prevalent in the sample, as well as to the non-exact heuristics that are used. The right choice of model and combination of solution algorithms facilitates solving problem instances of realistic size. However, within the regarded sample, only a few publications from the bus industry solve VSP instances of realistic urban size (e.g., Amberg et al., 2018; Kliewer et al., 2012; Steinzen et al., 2010). Many rely on evaluation using the random benchmark instances published in Huisman, 2004. Similar decision problems occur in several industries. Three major industries are identified: airline, railway, and public bus transport. Regarding vehicle scheduling, similarities, as well as differences, are evident. All three industries share the goal of minimizing operational costs and utilizing the fewest possible vehicles. However, the details of each industry differ greatly. Due to high initial costs for rolling stock and railroads, as well as long construction times for the latter, planning in the railway industry is highly constrained by its infrastructure. In contrast, vehicle scheduling for a public bus provider offers more decision-making possibilities. Various existing roads can be used, and different vehicle types offer higher degrees of freedom in planning. Given a fixed number of vehicles, scheduling for the airline industry is the least restricted one. Changing the route of an airplane is usually only restricted by costs, but not by infrastructural conditions. Depending on the preconditions of each unique industry, mathematical modeling might be more challenging given increased infrastructural requirements. The number of constraints correlates strongly with the model's degrees of freedom. More flexibility in planning leads to an increased solution space. Both the quantity of constraints and the size of the solution space enable different solution approaches and might lead to different expedient ways of solving the specific planning problem. Similar to vehicle scheduling, both crew scheduling and crew rostering share similarities across industries but differ in detail. As previously described, labor law and other legal provisions, as well as collective and individual agreements, restrict the CSP and CRP within the mentioned industries (Freling et al., 2004; Guo et al., 2006; Xie & Suhl, 2015). However, buses, e.g., only need one driver while airplanes and trains must have a crew. Crews typically consist of more than two members who have to fulfill specific tasks and functions, and are thus typically planned as teams (Freling et al., 2004). Compared to public bus transport, the railway and airline industries may cover huge distances.
Thus, the crew rostering has to consider individual home bases, take lodging into account, and return each crew member to his or her origin at some point (Wen et al., 2021). Most studies in our sample deal with the public bus transport industry (in either urban or rural environments). There are some important exceptions from other industries, especially concerning the Crew Scheduling and Rostering Problem (CSRP), such as Sandhu and Klabjan, 2007; Freling et al., 2004; Guo et al., 2006; Medard and Sawhney, 2007 and Souai and Teghem, 2009 in the airline industry and Freling et al., 2004 as well as Borndorfer et al., 2012 in railway. In the following sections, we discuss the solution approaches from the literature concerning the pairwise integrated problems (i.e., the VCSP and the CSRP) and the "fully" integrated problem (i.e., the VCSRP) in more detail. ## 4 Pairwise integrated optimization ### Integrated vehicle and crew scheduling The majority of solution approaches for the VCSP in our sample follow a column generation scheme to generate vehicle schedules and anonymous duties for a given timetable and corresponding service trips. The VCSP is the master problem, and duties are generated as columns by solving the pricing problem as a constrained shortest path problem. All approaches investigated have in common that minimizing costs is the central objective criterion. In recent years, further optimization objectives such as robustness (Amberg et al., 2011a, 2018; Huisman et al., 2004; Kliewer et al., 2012) and schedule regularity (Amberg et al., 2012; Amberg et al., 2011b; Steinzen et al., 2009) have been considered. For the corresponding VSP, the underlying network is usually explicitly modeled. Historically, integrated optimization approaches have focused on using a CBN with depots and stops as nodes, where all possible connections, including pull-ins and pull-outs, are enumerated as arcs, such as in Ball et al., 1983, Freling et al., 1999, Friberg and Haase, 1999, Gaffi and Nonato, 1999, Freling et al., 2001 and Freling et al., 2003. This approach might be the most intuitive and was used mostly in the last century. Recent network modeling approaches shift towards a TSN, where time-space nodes represent possible arrivals and departures at a location and where only feasible connections are modeled as arcs, such as in Gintner et al., 2008, Keri and Haase, 2008, Steinzen et al., 2010, Amberg et al., 2011b, Amberg et al., 2011a, Kliewer et al., 2012 and Amberg et al., 2018. The TSN method has the advantage that much fewer connections are included, which reduces the model complexity tremendously - especially for larger instances. Gintner et al., 2005 report that the number of arcs in the TSN amounts to 1-3% of all arcs in an equivalent CBN. Thus, the problem size could be reduced significantly without reducing the solution space because all compatible trips are implicitly connected. ### Integrated crew scheduling and rostering In the airline and railway industries, crew scheduling and crew rostering are usually considered sequentially (see Caprara et al., 1999; Lee and Chen, 2003; Medard and Sawhney, 2007; Yunes et al., 2005), since it is not yet possible to find an optimal solution for one of the two planning steps with current state-of-the-art technologies for realistically sized models. An overview of the developments until 1998 for air and rail transport was presented in Ernst et al., 2001b. Integrated planning has received increasing attention since the 2000s, with a focus on airlines and railways.
Due to the high combinatorial complexity of integrated planning, approaches to partial or iterative integration were published first. In Ernst et al., 2001a the number of paired personnel crews in crew scheduling is taken into account. Most integrated crew scheduling and crew rostering approaches deal with airline optimization (Freling et al., 2004; Guo et al., 2006; Medard and Sawhney, 2007; Souai and Teghem, 2009) and only a few tackle public bus transit (e.g. Xie and Suhl, 2015; Xie et al., 2012, 2013, 2017) and the railway industry (e.g. Borndorfer et al., 2014; Lin and Tsai, 2019). An iterative method with a feedback mechanism between the CSP and CRP is implemented in Caprara et al., 2001. All duties are generated in the first phase, and the number of duties is reduced by heuristics in the second phase, such that instances with various compositions and real-world characteristics can be solved. Guo et al., 2006 focus on partial integration based on the aggregated TSN. In the first step, instead of a single duty, a chain of duties is generated, taking into account the individual activities of the crew members planned in advance. In addition, the number of crews is also taken into account in this step (Guo et al., 2006). The approach can solve even instances of up to 1977 tasks, considering 188 crew members, in acceptable time (\(\sim\) 15.5 minutes). In Zeghal and Minoux, 2006 the integration problem is formulated as an integer linear program, and a new heuristic method is developed, which is used in a subtree search procedure based on a rounding strategy. A decision support system for integrated crew scheduling in the airline and railway sector is developed in Freling et al., 2004, where a general set partitioning model is formulated and a state-of-the-art branch-and-price solver is developed. In Saddoune et al., 2011 and Saddoune et al., 2012 a column generation approach is used to reduce the computing time of the sub-problem. In further research projects regarding the complete integration of the two planning phases, meta-heuristics, in particular specialized genetic algorithms, are successfully used to solve the integrated problem (see Chen et al., 2013; Souai and Teghem, 2009). Because of lower operational costs, optimization approaches for public transport were developed only about ten years later than in integrated planning for the airline and railway industries. A Benders decomposition approach is used in Borndorfer et al., 2014, where the crew rostering was simplified in such a way that the duty sequences are anonymous and shift and duty templates were used instead of services. In Xie, 2015 it is shown that in practice, it is often critical to underlay shifts with concrete duties. Lin and Tsai, 2019 propose a Branch-and-Price-and-Cut (BPC) algorithm for solving the CSRP for the Taiwanese railway system with regard to standby personnel. They compare the results with solution approaches using expert knowledge or rules of thumb, commercial standard solvers for the associated Mixed Integer Linear Problem (MILP), and a sequential Depth-First Search (DFS) based algorithm for several instances reaching real-world problem sizes regarding the number of tasks to be performed. The employed DFS first enumerates all potential duties, then identifies the minimum required duties to cover all tasks as a set partitioning problem, and finally solves the shift assignment to optimality.
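The "minimum required duties to cover all tasks" step just described is a classic set partitioning model; a toy PuLP sketch of the relaxed set-covering variant, with invented duties and costs, might look as follows.

```python
# Toy duty selection as a set-covering ILP (use == 1 for set partitioning).
import pulp

tasks = ["e1", "e2", "e3", "e4"]
duties = {  # duty id: (cost, set of tasks the duty covers)
    "d1": (8, {"e1", "e2"}),
    "d2": (6, {"e2", "e3"}),
    "d3": (7, {"e3", "e4"}),
    "d4": (13, {"e1", "e2", "e3", "e4"}),
}

prob = pulp.LpProblem("toy_duty_cover", pulp.LpMinimize)
pick = {d: pulp.LpVariable(d, cat="Binary") for d in duties}
prob += pulp.lpSum(duties[d][0] * pick[d] for d in duties)  # total duty cost
for t in tasks:
    prob += pulp.lpSum(pick[d] for d in duties if t in duties[d][1]) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([d for d in duties if pick[d].value() == 1])  # -> ['d4'] at cost 13
```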
Only the BPC algorithm was capable of solving all instances, whereas Gurobi and the DFS-based algorithm are only tractable for the smallest and second-smallest instance, respectively. For the smallest instances, the BPC can recreate the optimal solution in less time. In addition to cost minimization, younger approaches aim at optimizing further goals, such as maximizing the fairness of the drivers' shift allocation and the regularity of duty rosters, to increase satisfaction (e.g. Borndorfer et al., 2017; Quesnel et al., 2020). ## 5 Integrated vehicle and crew scheduling and rostering Few publications look into integrating all three phases, all of which address the bus industry. Shen and Xia, 2009 consider several data sets and circumstances. They use data from the Beijing Bus group to point out the practical constraints that derive from Chinese law and culture. These include built-in meal periods, multi-type bus scheduling, and restricting drivers to one or two particular buses. The authors develop an iterative sequential heuristic algorithm that consists of three steps: Firstly, the VSP is solved with a local search based on \(n\)-opt operators. Then, the CSP is solved using a tabu-search heuristic. Finally, driver rosters are proposed to the user and can be modified through an interface. According to the authors, it is possible to find feasible solutions for instances of up to 107 buses and 164 duties within an appropriate time frame of a few minutes. The authors report savings of close to 4.5% in vehicle costs and approximately 9.9% in driver wages when comparing with manually built solutions. Mesquita et al., 2011 use data from a bus company in Lisbon to demonstrate their preemptive goal programming-based heuristic approach that prioritizes the Vehicle Crew Scheduling Problem (VCSP) over the CRP part. Their approach is able to generate optimal solutions within a short computing time for most instances. When considering all costs, however, some instances could not be solved within a reasonable time limit. Their integer formulation consists of a preemptive goal programming framework that prioritizes the integrated vehicle-crew-scheduling goals over the driver rostering goals. The problem is first decomposed to solve one VSP + CSP per day and then establish a roster for a longer time horizon. Two years later, Mesquita et al., 2013 manage to outperform the traditional sequential approach by integrating the VSP, CSP, and CRP with a Benders decomposition formulation using multicommodity network flow, set covering, and covering-assignment elements. They tackle the integrated problem by dividing it into a master problem that contains the VSP and CSP and a sub-problem for the CRP. Information from the sub-problem and its dual solution is used to find better duties for the CSP. The authors minimize vehicle and driver costs and take into account constraints regarding roster balancing and coverage of all daily duties. Using data from two bus companies in Portugal, they require rosters to match predefined days-off patterns based on the requirements of these companies. The planning horizon for rosters is seven weeks long. All three papers, Shen and Xia, 2009, Mesquita et al., 2011, and Mesquita et al., 2013, evaluate their proposed algorithms on real-world instances. However, these instances are too small (108 to 238 timetable trips) to represent a larger, realistic urban bus system.
In summary, public transport providers recognize the necessity to organize their services efficiently. Because of increasing urbanization, the demand for public transport is rising, and thus competition and the need for efficiency in the public transport planning process rise as well. Integrating the operational phases of VSP, CSP, and CRP gives public transport providers more degrees of freedom and can lead to better schedules and rosters. Our findings show that three forms of integrated problems are solved in the literature. While most authors focus on the VCSP, some approaches consider the CSRP, and a few recent publications tackle the challenge of solving the VCSRP. All approaches to solving the VCSRP deal with the bus industry. This might be because crew rostering is much easier when considering only one driver, as compared to multiple-person crews in airline or railway planning, which would complicate the integration even more. Moreover, the majority of scholars focus on minimizing costs in their approaches. However, a few authors have considered other objectives such as robustness (see an overview of different robustness approaches in public transit in Ge, Voss, and Xie, 2022), regularity, and fairness of schedules in recent years. This indicates that diverse objective functions are becoming more common over time. In a recent study, Ge, Kliewer, et al., 2022 showed that adding a robustness objective to the VCSRP model of Mesquita et al., 2013 does not take much additional computational time in this application, which is an interesting result. Many standard combinatorial optimization models are used for these decision problems, including the minimum cost flow problem and related network models (e.g., the resource-constrained shortest path problem, the multicommodity flow problem, and the linear assignment problem). Set partitioning formulations are often used to solve the CSP. Some authors prefer the easier set covering formulation, where crew members become passengers in the case of overlapping assignments. The TSN formulation is a powerful tool to reduce network size and thus computing time compared to the Connection-Based Network (CBN), where all deadhead trips are explicitly modeled. In terms of solution techniques, column generation stands out as the most powerful operations research method to solve integrated decision problems. It is usually accompanied by relaxation techniques. Lagrangian relaxation seems to work best for quickly finding reasonable bounds for the integer solution. Branch-and-bound and branch-and-price techniques are popular for finding feasible integer solutions. Heuristics and meta-heuristics are used to speed up the solution process. They include tabu-search, simulated annealing, ant colony algorithms, genetic algorithms, and smaller-scale greedy heuristics. In conclusion, we can say that there exists no single best approach for solving the VSP, CSP, and CRP in an integrated manner. Which solution approach yields the best results is always subject to the specific problem settings. Among others, it is important to take into account the source and nature of the data that is used, the size of the instances, and the relevant constraints. Every new approach can be a game-changer for some situations, while in others, it might prove less useful. ## 6 Future Research For future research, we propose to focus on further integrating the public transport planning process.
Since there are only three approaches towards a threefold integration of all operational phases and the first results are promising, more effort is needed in this direction. In addition, strategic decision problems may also be included in integrated planning. In the literature, the first approaches towards integrating timetabling or vehicle routing with vehicle scheduling can be found. The long-term aim of an integrated public transport planning process, in which all sub-problems are solved simultaneously and all degrees of freedom can thus be used, is still a long way off. At the same time, the existing approaches should be enhanced in order to make them suitable for real-world use with larger instances and more complex data sets, such as those of public transport providers in larger cities. Furthermore, schedule robustness, regularity, crew or driver preferences, and fairness as optimization objectives should be examined more closely. First steps have been taken in individual publications on integrated scheduling. The crew scheduling literature offers many more starting points that could be incorporated into integrated planning as well. More publications that explicitly deal with these topics are desirable.
2302.04348
Addressing Systematics in the Traceback Age of the $β$ Pictoris Moving Group
We characterize the impact of several sources of systematic errors on the computation of the traceback age of the $\beta$ Pictoris Moving Group ($\beta$PMG). We find that uncorrected gravitational redshift and convective blueshift bias absolute radial velocity measurements by $\sim$ 0.6 km s${}^{-1}$, which leads to erroneously younger traceback ages by $\sim$ 2 Myr. Random errors on parallax, proper motion, and radial velocity measurements lead to an additional bias of $\sim$ 1.5 Myr on traceback ages. Contamination of astrometric and kinematic data by kinematic outliers and unresolved multiple systems in the full input sample of 76 members and candidates of $\beta$PMG also erroneously lowers traceback ages by ${\sim}$ 3 Myr. We apply our new numerical traceback analysis tool to a core sample of 25 carefully vetted members of $\beta$PMG using Gaia Data Release 3 (DR3) data products and other kinematic surveys. Our method yields a corrected age of 20.4 $\pm$ 2.5 Myr, bridging the gap between kinematic ages (11$-$19 Myr) and other age-dating methods, such as isochrones and lithium depletion boundary (20$-$26 Myr). We explore several association size metrics that can track the spatial extent of $\beta$PMG over time, and we determine that minimizing the variance along the heliocentric curvilinear coordinate $\xi^{\prime}$ (i.e., toward the Galactic Center) offers the least random and systematic errors, due to the wider UVW space velocity dispersion of members of $\beta$PMG along the U-axis, which tends to maximize the spatial growth of the association along the $\xi^{\prime}$-axis over time.
Dominic Couture, Jonathan Gagné, René Doyon
2023-02-08T21:35:12Z
http://arxiv.org/abs/2302.04348v1
# Addressing Systematics in the Traceback Age of the \(\beta\) Pictoris Moving Group

###### Abstract

We characterize the impact of several sources of systematic errors on the computation of the traceback age of the \(\beta\) Pictoris moving group (\(\beta\)PMG). We find that uncorrected gravitational redshift and convective blueshift bias absolute radial velocity measurements by \(\sim 0.6\,\mathrm{km\,s^{-1}}\), which leads to erroneously younger traceback ages by \(\sim 2\,\mathrm{Myr}\). Random errors on parallax, proper motion, and radial velocity measurements lead to an additional bias of \(\sim 0.6\,\mathrm{Myr}\) on traceback ages. Contamination of astrometric and kinematic data by kinematic outliers and unresolved multiple systems in the full input sample of 76 members and candidates of \(\beta\)PMG also erroneously lowers traceback ages by \(\sim 3\,\mathrm{Myr}\). We apply our new numerical traceback analysis tool to a core sample of 25 carefully vetted members of \(\beta\)PMG using _Gaia_ Data Release 3 (DR3) data products and other kinematic surveys. Our method yields a corrected age of \(20.4\pm 2.5\,\mathrm{Myr}\), bridging the gap between kinematic ages (\(11-19\,\mathrm{Myr}\)) and other age-dating methods, such as isochrones and lithium depletion boundary (\(20-26\,\mathrm{Myr}\)). We explore several association size metrics that can track the spatial extent of \(\beta\)PMG over time, and we determine that minimizing the variance along the heliocentric curvilinear coordinate \(\xi^{\prime}\) (i.e., toward the Galactic Center) offers the least random and systematic errors, due to the wider \(UVW\) space velocity dispersion of members of \(\beta\)PMG along the \(U\)-axis, which tends to maximize the spatial growth of the association along the \(\xi^{\prime}\)-axis over time.

methods: traceback -- stars: kinematics and dynamics -- \(\beta\) Pictoris moving group

## 1 Introduction

Nearby young associations (NYAs) are sparse, coeval, gravitationally unbound stellar populations located in the solar neighborhood. Formed within a few million years of each other from the collapse of a single molecular cloud or cloud complex, members of these kinematic associations have similar Galactic positions and space velocities, and they share the same age and chemical composition due to their common formation history. However, their members span relatively large projected angular areas on the sky, as a result of the proximity and low spatial density of NYAs, which has made identification of NYAs challenging, especially prior to the _Gaia_ mission (Gaia Collaboration et al., 2016). Such populations of nearby, age-calibrated stars are ideal laboratories to study the last stages of stellar and exoplanetary formation, and they provide strategic locations to search for isolated planetary-mass objects and for the direct imaging of giant exoplanets, thanks to the more favorable contrast between the host star and its companion, which remains relatively bright in the near-infrared (NIR) at such young ages. This contrast is even more advantageous for low-mass, late-type stars, which make up the majority of NYAs. For up to a few hundred million years after their formation, members of an NYA retain similar kinematics, until gravitational perturbations cause their Galactic orbits to become randomized enough that they become indistinguishable from unrelated field stars.
Using the full 6D kinematics of their members (i.e., their \(XYZ\) Galactic positions and \(UVW\) space velocities, where \(U=dX/dt\), \(V=dY/dt\), and \(W=dZ/dt\)), it is possible to compute backward Galactic orbits and trace members' trajectories back to the epoch when the NYA's spatial extent was minimal, which is assumed to coincide with the epoch of stellar formation. Thus, traceback analysis can provide a kinematic age estimate for members of an NYA, independent of stellar evolution models, unlike the more usual isochrone or lithium depletion boundary (LDB) methods. One of the greatest difficulties in tracing back stellar trajectories is the need for precise astrometric and kinematic measurements to compute the full 6D kinematics of members of an NYA. The _Gaia_ Data Release 3 (DR3) data products provide an unmatched sample of precise parallax and proper motion measurements (Gaia Collaboration et al., 2022). However, despite the increase in available measurements over the _Gaia_ Early Data Release 3 (Gaia Collaboration et al., 2021), absolute radial velocity measurements remain relatively inaccurate and represent, by far, the largest contribution to the total error in 6D kinematics during backward Galactic orbit integration, especially given how errors on Galactic position grow as stars are projected further back in time. Jointly discovered by Barrado y Navascues et al. (1999) and Zuckerman et al. (2001), using data from the _Hipparcos_ catalog, the \(\beta\) Pictoris moving group (\(\beta\)PMG) is one of the youngest and nearest known NYAs. Over 40 stars, located at an average distance of \(\sim 35\) pc and within a distance range of \(9-72\) pc, with computed full 6D kinematics and visible signs of youth consistent with membership in \(\beta\)PMG, are considered to be _bona fide_ members of the association, and more than a hundred additional candidate members are known, for which kinematic measurements are needed for final membership determination (Torres et al., 2006; Schlieder et al., 2010; Kiss et al., 2011; Schlieder et al., 2012; Malo et al., 2014; Riedel et al., 2014; Binks and Jeffries, 2016; Riedel et al., 2017; Gagne et al., 2018; Miret-Roig et al., 2020). The youth and proximity of \(\beta\)PMG make its members ideal targets for the search and characterization of exoplanets through direct imaging. For instance, its eponymous star, \(\beta\) Pictoris, has two known giant exoplanets, a debris disk, and exocomets (Lagrange et al., 2009; Chauvin et al., 2012; Kiefer et al., 2014; Lagrange et al., 2019). PSO J318.5338-22.8603, a \(6.5^{+1.3}_{-1.0}\,M_{\rm Jup}\) free-floating planetary-mass object of spectral type L7, was discovered using data from the Pan-STARRS survey (Liu et al., 2013). With an \(XYZ\) Galactic position and a \(UVW\) space velocity compatible with membership in \(\beta\)PMG, its age is the same as that of the rest of the association. However, traceback age estimates for \(\beta\)PMG and other NYAs are inconsistent with other age-dating methods.
Age estimates for \(\beta\)PMG using the isochrones and LDB methods (\(20-26\) Myr; Mentuch et al., 2008; Yee and Jensen, 2010; Malo et al., 2014; Mamajek and Bell, 2014; Binks and Jeffries, 2014; Bell et al., 2015; Galindo-Guil et al., 2022) are significantly older than most traceback age estimates (\(11-13\) Myr; Ortega et al., 2002; Song et al., 2003; Ortega et al., 2004; Miret-Roig et al., 2018), although more recent kinematic approaches (\(17-19\) Myr; Crundall et al., 2019; Miret-Roig et al., 2020) have addressed specific and distinct sources of bias and have narrowed the gap with isochrone and LDB ages. Crundall et al. (2019) used a forward modeling approach, which circumvents the bias on traceback ages due to measurement errors (see Section 3.5.1) that arises when computing backward Galactic orbits for individual members, whereas Miret-Roig et al. (2020) used a more traditional traceback approach but minimized sample contamination through a rigorous selection process and used robust association size metrics to track the spatial extent of \(\beta\)PMG over time. In this work, we aim to correct for various sources of systematic errors in the computation of the traceback age of \(\beta\)PMG, and to determine whether this can further reconcile the tension between kinematic ages and other age-dating methods. Our approach uses a clean sample of _bona fide_ members of \(\beta\)PMG, free of any kinematic outlier or unresolved multiple system (see Section 2), along with data from the _Gaia_ DR3 data products and other radial velocity surveys, as the basis to compute backward Galactic orbits using their full 6D kinematics. We account for the bias on traceback age estimates due to measurement errors in astrometric and kinematic data (see Section 3.5.1) and two separate biases on radial velocity measurements, gravitational redshift and convective blueshift (see sections 2.3 and 2.4), all of which tend to artificially push the epoch of minimal association size closer to the current-day epoch. We also test several association size metrics to evaluate the spatial extent of NYAs over time, in order to determine which provide the most accurate and reliable age estimates (see Section 3.3). This study is structured as follows. First, in Section 2, we describe how we selected a clean, vetted sample of _bona fide_ members of \(\beta\)PMG and corrected for biases on radial velocity measurements. In Section 3, we describe the numerical method used to derive a kinematic age estimate of NYAs, and we apply it to simulated samples of stars in order to assess the precision and bias of every association size metric. In Section 4, we apply our method to our sample of members of \(\beta\)PMG, and we compare our results to previous age estimates for \(\beta\)PMG. Finally, we conclude our analysis in Section 5.

## 2 Sample Selection

When assembling a sample with the aim to perform traceback analysis, candidates must be carefully vetted in order to minimize contamination by unrelated older stars that show kinematics compatible with the NYA. Indeed, our simulations show that traceback ages are not only less precise with the addition of kinematic outliers, they are also biased toward younger ages, because the positions of kinematic outliers do not converge at the epoch of stellar formation along with actual members of the NYA (see Section 3.5).
Therefore, in order to mitigate this issue, we first assembled a sample of candidate members of \(\beta\)PMG from the literature (Zuckerman et al., 2001; Malo et al., 2013; Riedel et al., 2014; Miret-Roig et al., 2020) with available full 6D kinematics. Candidates were also vetted by the Bayesian analysis tool BANYAN \(\Sigma\) (Gagne et al., 2018), which utilizes astrometric and kinematic measurements to establish the membership probability of a star in 30 known NYAs, represented by 3D ellipsoidal models in \(XYZ\) and \(UVW\) space (Lee and Song, 2018; Gagne et al., 2018; Lee and Song, 2019). In total, 76 stars, located at an average distance of \(28.22\pm 0.02\) pc and within a distance range of \(9.719-83.99\) pc, were identified as part of this sample, presented in Table 2 and hereafter referred to as the input sample. Over half of these stars are low-mass M dwarfs, and most show signs of youth such as high X-ray, UV, or H\(\alpha\) emission, fast rotation, or lithium absorption lines. Stars within the input sample were further vetted in order to flag and exclude possible unresolved multiple systems (see Section 2.2) and kinematic outliers (see Section 3.2).

### Stellar kinematics

We sourced most parallax, proper motion, and absolute radial velocity data from the _Gaia_ DR3 data products (Gaia Collaboration et al., 2022), while _Hipparcos_ data were used for very bright stars (Perryman et al., 1997), for which astrometry is sometimes more precise in _Hipparcos_ than in _Gaia_ DR3. We also used other kinematic measurements compiled from the literature, and we used an error-weighted average to combine all available measurements. This allows us to obtain more precise and reliable measurements in order to better confirm the membership of stars in \(\beta\)PMG with BANYAN \(\Sigma\) and compute more precise backward Galactic orbits for every member, which will in turn yield a more precise kinematic age estimate and help limit contamination by multiple systems and kinematic outliers. In addition, reliable and precise astrometric and kinematic measurements are also essential to minimize the bias on traceback ages due to measurement errors (see Section 3.5.1). The average \(XYZ\) Galactic positions and \(UVW\) space velocities of members of the input sample are presented in Table 1. The astrometric and kinematic measurements of all members of the input sample of \(\beta\)PMG are presented in Table 3.

\begin{table} \begin{tabular}{l r} \hline \hline \multicolumn{1}{c}{ Parameter} & \multicolumn{1}{c}{Value} \\ \hline **Input Sample** & \\ Number of stars & 76 \\ Average distance & \(28.22\pm 0.02\) pc \\ Distance range & \(9.719-83.99\) pc \\ \(XYZ\) & (\(22.14\), \(-5.20\), \(-16.71\)) pc \\ \(\sigma_{XYZ}\) & (\(32.97\), \(14.08\), \(8.61\)) pc \\ \(UVW\) & (\(-10\), \(-15\), \(-9\)) km s\({}^{-1}\) \\ \(\sigma_{UVW}\) & (\(8\), \(5\), \(5\)) km s\({}^{-1}\) \\ **Core Sample** & \\ Number of stars & 25 \\ Average distance & \(29.587\pm 0.006\) pc \\ Distance range & \(9.719-71.55\) pc \\ \(XYZ\) & (\(22.691\), \(-4.308\), \(-18.492\)) pc \\ \(\sigma_{XYZ}\) & (\(29.698\), \(13.940\), \(8.106\)) pc \\ \(UVW\) & (\(-10.2\), \(-15.7\), \(-8.64\)) km s\({}^{-1}\) \\ \(\sigma_{UVW}\) & (\(1.5\), \(0.6\), \(0.76\)) km s\({}^{-1}\) \\ \hline \end{tabular} Note. – The average \(XYZ\) Galactic position and dispersion (\(\sigma_{XYZ}\)) and the average \(UVW\) space velocity and dispersion (\(\sigma_{UVW}\)) are given for members of the input and core samples of \(\beta\)PMG. \end{table} Table 1: Parameters of the input and core samples of \(\beta\)PMG

### Unresolved multiple systems

A large fraction of stars in the Galaxy are in fact part of multiple systems, for which the added stellar motion around the barycenter of the system, along the line of sight, modulates radial velocity measurements. Unresolved multiple systems can therefore appear as false positives when one attempts to select NYA members based on kinematic data, if a star's \(UVW\) space velocity happens to match the kinematic model of an NYA. Fortunately, it is possible to flag these false positives by comparing radial velocity measurements at different epochs. Indeed, the radial velocity of stars in multiple systems changes periodically, and inconsistent measurements can thus be used as a way to flag unresolved multiple systems. Therefore, in order to exclude contamination of kinematic measurements by such systems, we selected members from the input sample of \(\beta\)PMG for which at least two radial velocity measurements are available in the literature and which did not display variations \(\geq 0.6\,\mathrm{km}\,\mathrm{s}^{-1}\), a value larger than the typical radial velocity scatter in our data set, which would indicate the possible presence of an unresolved companion. Also excluded were known spectroscopic binaries from the literature and the Washington Double Star (WDS) catalog, stars with excess brightness in the color-magnitude diagram (CMD), as well as stars with a _Gaia_ DR3 Renormalized Unit Weight Error (RUWE) \(\geq 2.0\).2 This statistical indicator is a way to assess the reliability of astrometric data. It is expected to be \(\sim 1.0\) for sources for which the single-star astrometric solution is a good fit (Lindegren, 2018), while a RUWE \(>1.4\) is a sign of an unresolved binary system (Stassun and Torres, 2021; El-Badry et al., 2021). We chose a slightly less conservative threshold because we also require stars to show relatively stable radial velocities over time. The various multiplicity indicators of members of the input sample of \(\beta\)PMG are presented in Table 4. Footnote 2: The RUWE is documented in the Gaia Data Release Documentation at [https://gea.esac.esa.int/archive/documentation/GDR2/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_ruwe.html](https://gea.esac.esa.int/archive/documentation/GDR2/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_ruwe.html). The result of this selection process is a clean, uncontaminated subset of the input sample of \(\beta\)PMG made up of 25 stars located at an average distance of \(29.587\pm 0.006\,\mathrm{pc}\) and within a distance range of \(9.719-71.55\,\mathrm{pc}\), hereafter referred to as the core sample of \(\beta\)PMG, which can be used to compute the traceback age of the association. The average \(XYZ\) Galactic positions and \(UVW\) space velocities of members of the core sample are presented in Table 1. We note that all parameters of the input and core samples are similar, apart from the number of stars included in the sample and the \(UVW\) space velocity dispersion (\(\sigma_{UVW}\)), which is significantly tighter for the core sample, as expected. The remaining 51 stars excluded from the core sample by this selection process, hereafter referred to as part of the extended sample, are not considered in the computation of the traceback age. The following stars deserve additional discussion:

**HD 165189:** This star is a slight kinematic outlier with only three available radial velocity measurements.
It is more than 1 magnitude brighter than other members of \(\beta\)PMG in the CMD, and the same is true of its companion located at a 1" separation. Its position in an absolute V vs. V-J CMD is comparable to that of Bell et al. (2015) and seems to best match the _Gaia_ DR3 parallax. Therefore, the _Gaia_ DR2 and EDR3 photometry might be unreliable. For now, it remains a \(\beta\)PMG candidate and was kept in the extended sample.

**HD 207043:** This star's position in a _Gaia_ DR3 CMD is inconsistent with other members of \(\beta\)PMG, which calls into question its membership in the association. For now, it remains a \(\beta\)PMG candidate and was kept in the extended sample.

**AF Psc:** With updated kinematics, this triple star system with separations of 19" and 1000" (Malo et al., 2014; Kraus et al., 2014; Miret-Roig et al., 2020) is a poor kinematic match to the BANYAN \(\Sigma\) model of \(\beta\)PMG. As a result, it was excluded from the extended sample.

**G 271-110:** With updated kinematics, this star is a poor kinematic match to the BANYAN \(\Sigma\) model of \(\beta\)PMG, and its position in a CMD is also fainter than other members of \(\beta\)PMG. Malo et al. (2014b) noted that it also does not show lithium absorption, whereas it normally should at its temperature if it were a member of \(\beta\)PMG. Also, its _Gaia_ DR3 RUWE of 1.71 is somewhat high, so it may be an unresolved binary system. As a result, it was excluded from the extended sample.

**HD 14082 A:** This star displays slight radial velocity variations. It was excluded from the extended sample, and its co-mover, HD 14082 B, located at a separation of 14", was used instead.

**HD 173167:** This star is a spectroscopic binary system with few radial velocity measurements that display slight variations (Elliott and Bayo, 2016). This star was excluded from the extended sample, and its co-mover, Smethells 20, was used instead.

### Gravitational redshift

As a result of general relativity, radial velocity measurements are biased by the gravitational redshift that arises as light emitted at the surface of a star leaves its gravitational potential. Light is also later blueshifted slightly by the Sun's and the Earth's gravitational potentials, although this effect is negligible in comparison and safely ignored. The gravitational redshift is given as \[1+z_{grav}=\frac{\lambda_{obs}}{\lambda}=\left(1-\frac{2GM}{c^{2}R}\right)^{-1/2}. \tag{1}\] A star's radial velocity (RV) is obtained indirectly by measuring its Doppler shift, as it moves away from or toward the observer. It is estimated by its non-relativistic expression: \[1+z_{Doppler}=\frac{\lambda_{obs}}{\lambda}=1+\frac{RV}{c}. \tag{2}\] By combining equations 1 and 2, the expected bias on the measured radial velocity of a star due to its gravitational redshift can be estimated as \[\Delta RV_{grav}=c\left(\left(1-\frac{2GM}{c^{2}R}\right)^{-1/2}-1\right). \tag{3}\] Therefore, a star will appear to have a greater radial velocity (i.e., it will appear to move away from the observer faster than it actually does). From equation 3, this bias depends on two fundamental parameters: the stellar mass (\(M\)) and radius (\(R\)). These values, however, cannot be measured directly for most stars, and an approximation based on spectral type is used instead (see Figure 1).
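To make the scale of this correction concrete, equation 3 can be evaluated directly from a mass and radius estimate. The short sketch below is a minimal illustration using astropy (it is not the implementation used in this work), with solar values serving only as a sanity check:

```python
# Minimal sketch of equation (3): the apparent radial velocity shift caused
# by gravitational redshift for a star of mass M and radius R.
import astropy.units as u
from astropy.constants import G, c

def delta_rv_grav(mass, radius):
    """Radial velocity bias of equation (3); positive values make the star
    appear to move away from the observer faster than it actually does."""
    z_grav = (1 - 2 * G * mass / (c**2 * radius))**(-0.5) - 1
    return (z_grav * c).to(u.km / u.s)

# Sanity check with solar values: ~0.64 km/s, the same scale as the
# ~0.6 km/s shift expected for most members of the association.
print(delta_rv_grav(1.0 * u.Msun, 1.0 * u.Rsun))
```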
The spectral type approximation is calibrated on interferometric radius measurements of main-sequence stars (Ligi et al., 2016), semi-empirical spectral energy distribution (SED)-based determinations of angular radii of members of the core sample of \(\beta\)PMG by Pecaut & Mamajek (2013), updated with _Gaia_ DR3 parallax measurements, as well as ensembles of dynamical masses of eclipsing binaries for young and pre-main-sequence stars compiled by Hillenbrand & White (2004) and other sources, updated with _Gaia_ DR3 parallax measurements when relevant. We avoid basing our approximation on absolute photometry or bolometric luminosity, due to the potentially large bias that we would obtain for yet unknown unresolved multiple systems in the core sample. While the impact is relatively small, \(\sim 0.6\,\mathrm{km}\,\mathrm{s}^{-1}\) for most stars, traceback analysis is highly sensitive to such biases: because members of the closest NYAs are spread across a large fraction of the sky, this bias will create the illusion that members are moving away from the observer in multiple directions, causing them to appear to reach a minimal spatial extent at a more recent epoch when performing a traceback analysis. More distant NYAs, which do not cover a large fraction of the sky, are proportionally less affected by this bias. This bias also affects the determination of membership probabilities for NYAs with BANYAN \(\Sigma\), because this process also utilizes radial velocities as one of its input parameters, although the impact is much less pronounced, given that the inherent velocity dispersion of young associations is larger, at \(\sim 1-3\,\mathrm{km}\,\mathrm{s}^{-1}\).

Figure 1: Gravitational redshift as a function of spectral type based on our determinations of stellar masses and radii from empirical mass–spectral type and radius–spectral type sequences for main-sequence (solid red line) and pre-main-sequence objects (dotted red line). The shaded areas show the uncertainties based on individual spectral type measurement errors and propagated standard deviations of empirical mass and radius sequences. For most late-type members of \(\beta\)PMG, the expected absolute radial velocity shifts due to the gravitational redshift alone are \(\sim 0.6\,\mathrm{km}\,\mathrm{s}^{-1}\). For comparison, the radial velocity shifts due to the gravitational redshift of four stars within \(\beta\)PMG that benefit from interferometric radius measurements are indicated by black circles and error bars (Kervella et al., 2004; Bruntt et al., 2008; Simon & Schaefer, 2011).

#### 2.3.1 Stellar masses

The mass of members of the core sample of \(\beta\)PMG was estimated using their spectral type in order to avoid biases in cases of potential unresolved multiple systems that may remain in this sample. We used the Pecaut & Mamajek (2013) spectral type-mass relation (P13 relation hereafter) and compared it to dynamical mass measurements of known A2-M5 young stars (\(2-150\) Myr) from the literature, improved with _Gaia_ DR3 parallax measurements when relevant. Figure 2 shows that the dynamical masses of young stars are slightly larger than those of old field stars at a fixed spectral type. This is expected, given that stars warm up slightly in their pre-main-sequence phase, which lasts for \(20-100\) Myr for \(\leq 1\,M_{\odot}\) stars (e.g., see Choi et al., 2016).
We built a young version of the P13 spectral type-mass relation by adjusting a two-segment linear fit to the difference in masses between the P13 relation and measured dynamical masses in log space, to shift the P13 relation upward slightly, as shown in Figure 2. We found a relative standard deviation of 11 % for the dynamical masses of young stars with respect to our modified young sequence, which we adopt as 1\(\sigma\) uncertainties on our spectral-type-based mass estimations.

Figure 2: Stellar masses (top panels) and radii (bottom panels) as a function of spectral type for field (blue circles) and young stars (\(2-150\) Myr, green triangles). The masses of both field and young stars were dynamically measured (Hillenbrand and White, 2004; Montet et al., 2015; Nielsen et al., 2016; Azulay et al., 2017; Rodet et al., 2018; Janson et al., 2018; Simon et al., 2019; Braun et al., 2021; Pegues et al., 2021). The radii of field stars were measured by interferometry (Ligi et al., 2016) while the radii of young stars were derived from their SED using the modified P13 spectral type–radius relation (Pecaut and Mamajek, 2013), which reproduces empirical measurements at young ages, representative of \(\beta\)PMG (magenta triangles). Blue (\(>200\) Myr) and green (\(\leq 200\) Myr) lines represent our empirical spectral type–mass and spectral type–radius sequences with model extrapolations for spectral types later than M5 (light gray shaded area), for ages ranging from 1 Myr to 10 Gyr, using 50 steps of 0.08 dex in log space. The discontinuity at 200 Myr is due to the adoption of distinct spectral type to effective temperature sequences as prescribed by Pecaut and Mamajek (2013). The error bars represent the typical uncertainty associated with our sequences for spectral types M5 and earlier (same colors), and the black error bar represents the uncertainty assigned to model extrapolations.

#### 2.3.2 Stellar radii

We similarly used the P13 spectral type-radius relations to determine the radii of the stars in the core sample based on their spectral types, to avoid potential biases due to unresolved binaries. We modified the main-sequence version of the P13 relations using the sample of known members of \(\beta\)PMG in P13 to obtain a spectral type-radius relation that is appropriate for the younger age of \(\beta\)PMG, at a phase when stars are known to have inflated radii due to their slow contraction during the pre-main-sequence phase (e.g., see Burrows et al., 1997), which is slowed down significantly in the case of low-mass stars by strong magnetic fields driven by their fast rotations (e.g., see Malo et al., 2014). P13 measured angular semi-diameters based on an SED fitting method (Masana et al., 2006), which we translated into stellar radii using the best available parallax measurements, mostly based on _Gaia_ DR3. We calculated a relative standard deviation of 9 % for the radius measurements with respect to our modified spectral type-radius sequence, which we adopt as our measurement error. We show the resulting sequence in Figure 2. For both the masses and radii, we used the Burrows et al. (1997) evolutionary models to extrapolate our empirical sequences beyond spectral type M5, and we assigned larger relative measurement errors of 20 % to account for possible model systematics, which may be especially important for young low-mass stars.
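The two-segment adjustment in log space described above can be illustrated schematically. In the sketch below, the numeric spectral type encoding, the break point, and the data values are all hypothetical placeholders rather than the actual inputs or fitting procedure of this work; the hinge basis simply produces two joined linear segments:

```python
# Schematic two-segment (hinge) linear fit in log space, of the kind used to
# shift a field-age spectral type-mass relation upward to match young
# dynamical masses. All inputs below are hypothetical placeholders.
import numpy as np

def fit_two_segment(spt, dlogm, spt_break=30.0):
    """Least-squares fit of log-mass offsets (young - field) with two
    linear segments joined at spt_break."""
    hinge = np.clip(spt - spt_break, 0.0, None)  # extra slope past the break
    basis = np.column_stack([np.ones_like(spt), spt, hinge])
    coeffs, *_ = np.linalg.lstsq(basis, dlogm, rcond=None)
    return coeffs

def eval_two_segment(spt, coeffs, spt_break=30.0):
    hinge = np.clip(spt - spt_break, 0.0, None)
    return coeffs[0] + coeffs[1] * spt + coeffs[2] * hinge

# Hypothetical offsets between young dynamical masses and a field relation,
# with spectral types encoded numerically (e.g., G0 = 20, M0 = 40).
spt = np.array([12.0, 18.0, 25.0, 32.0, 38.0, 44.0])
dlogm = np.array([0.02, 0.03, 0.05, 0.08, 0.10, 0.13])
coeffs = fit_two_segment(spt, dlogm)
# The shifted relation is then log M_young = log M_field + offset.
print(eval_two_segment(np.array([20.0, 40.0]), coeffs))
```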
By combining the spectral type-mass and spectral type-radius relations described above, we built a spectral type-gravitational redshift relation, shown in Figure 1. The application of this correction to radial velocity measurements of members of the core sample of \(\beta\)PMG is presented in Table 5.

### Convective blueshift

With the presence of convection cells at the stellar surface, light is emitted from areas moving toward or away from the observer, resulting in a Doppler broadening of spectral lines. Because rising gas is hotter and brighter, and it accounts for a larger fraction of the stellar surface, the broadening is uneven and causes a change in the shape of spectral lines and a net blueshift of the spectrum.

Figure 3: Convective blueshift (top panel) and total absolute radial velocity shift (bottom panel), including the effects of the gravitational redshift for main-sequence (solid purple line) and pre-main-sequence objects (dotted purple lines), as a function of spectral type. The shaded areas show the uncertainties based on individual spectral type measurement errors and propagated standard deviations of empirical mass and radius sequences. The convective blueshift three-segment polynomial sequence (solid blue line) was fitted with data from Meunier et al. (2017), and the uncertainty was chosen to be wide enough to encompass data from Leão et al. (2019), Liebing et al. (2021), Dai et al. (2019), Baroch et al. (2020), Loehner-Boettcher et al. (2019), Allende Prieto et al. (2013), and Gunn et al. (1988), including measurements for YZ CMi, a member of \(\beta\)PMG, and the Sun.

Convective blueshift has historically been difficult to measure. By comparing precise astrometric radial velocity measurements from the _Hipparcos_ and _Gaia_ missions with spectroscopic measurements, it is possible to isolate phenomena, other than the Doppler effect, that can shift spectral lines (Leao et al., 2019). Convective blueshift can also be measured by studying the shape of the differential of spectral lines (Meunier et al., 2017). Its measured impact depends on the instrumental resolution, the measurement method, and the spectral range. Lower stellar mass and higher stellar activity are linked to lower levels of convective blueshift, while lower metallicity and higher effective temperature are linked to higher levels. On the other hand, there is little effect from age or cyclic activity (Liebing et al., 2021). The net radial velocity shifts derived by cross-correlation also depend on the spectral resolution of the instrument (i.e., the lower the resolution, the smaller the sensitivity to changes in the shapes of spectral lines) and the wavelength range (Allende Prieto et al., 2013). The effect of convective blueshift is smaller than the aforementioned gravitational redshift, so the sum of both effects is expected to be a redshift (see Figure 3). For spectral types earlier than F2, we assume no convective blueshift, due to the absence of convection cells at the stellar surface. A three-segment polynomial sequence was fitted to blueshift measurements from the literature for F2-F7, F7-K4, and K4-M5 stars, with a conservative estimated error of \(0.2\,\mathrm{km}\,\mathrm{s}^{-1}\) to account for the dependence of the measured convective blueshift on the instrumental point spread function. For stars later than M5, although they are expected to be fully convective, no robust measurements of convective blueshift are available yet.
The case of YZ CMi (Baroch et al., 2020), a member of \(\beta\)PMG, is consistent with a null convective blueshift within the error bars. For this reason, we adopt a null correction with a \(0.2\,\mathrm{km}\,\mathrm{s}^{-1}\) measurement error until more data are available. The application of this correction to radial velocity measurements of members of the core sample of \(\beta\)PMG is presented in Table 5.

## 3 Traceback analysis

Traceback analysis is performed with the custom Python package kanya4, which is capable of computing the traceback age of a given sample of stars. The backward Galactic orbit of each star is computed, and the spatial extent of the association is determined using a range of association size metrics (see Section 3.3) at every temporal step, in order to identify the epoch of minimal association size. As will be demonstrated in this section, this method is sensitive to sample contamination, biases introduced by measurement errors, and systematic effects that impact absolute radial velocity measurements. It is therefore crucial to address each of these potential issues in order to determine reliable traceback ages. Footnote 4: kanya: Kinematic Age for Nearby Young Associations. The source code is available at [https://github.com/](https://github.com/)***/kanya.

### Backward orbital integration

A correction to radial velocity measurements was first applied before computing backward Galactic orbits, in order to compensate for the effects of gravitational redshift and convective blueshift (see sections 2.3 and 2.4). Then the full current-day 6D kinematics of members of the core sample of \(\beta\)PMG are computed (see Table 6). We use the galpy5 Python package (Bovy, 2015) with the Galactic potential model I from Irrgang et al. (2013) to compute independent backward Galactic orbits for every member of the NYA. This model, which is an updated version of the Galactic potential from Allen and Santillan (1991), assumes a \(9.5\times 10^{9}\,M_{\odot}\) spherical bulge, a \(6.6\times 10^{10}\,M_{\odot}\) Miyamoto and Nagai (1975) disk, and a \(1.8\times 10^{12}\,M_{\odot}\) spherical halo. This is the same Galactic potential used by Miret-Roig et al. (2020), who concluded that variations in the kinematic age caused by the choice of the Galactic potential are smaller than the main source of uncertainty, due to the short integration time for such young NYAs. Footnote 5: galpy is documented at [http://github.com/jobovy/galpy](http://github.com/jobovy/galpy). \(XYZ\) Galactic coordinates are transformed from a heliocentric, right-handed Cartesian system into a Galactocentric, left-handed cylindrical system (Quillen et al., 2020). The following values for the Sun's current-day Galactic position are adopted for this transformation: \[\begin{split} R_{\odot}&=8.12\pm 0.03\,\mathrm{kpc}\\ \phi_{\odot}&=0^{\circ}\\ z_{\odot}&=5.6\pm 5.8\,\mathrm{pc},\end{split} \tag{4}\] where \(R_{\odot}\) is the Sun's Galactocentric radius (Gravity Collaboration et al., 2018) and \(z_{\odot}\) is the Sun's height above or below the Galactic plane (Reid et al., 2019). The Sun's Galactocentric longitude \(\phi_{\odot}\) is null by definition. We adopt the following peculiar Solar motion to transform \(UVW\) space velocities (Schonrich et al., 2010): \[\begin{split} U_{\odot}&=11.1^{+0.7}_{-0.8}\,\mathrm{km}\,\mathrm{s}^{-1}\\ V_{\odot}&=12.2\pm 0.5\,\mathrm{km}\,\mathrm{s}^{-1}\\ W_{\odot}&=7.3\pm 0.4\,\mathrm{km}\,\mathrm{s}^{-1}.
\end{split} \tag{5}\] We also adopt the following local standard of rest (LSR) rotational velocity (Schonrich et al., 2010): \[V_{\rm LSR}=233\pm 1.4\,{\rm km\,s^{-1}}. \tag{6}\] Once Galactic orbits are computed for every member of the NYA, their individual Galactocentric positions and velocities are transformed back into \(XYZ\) Galactic coordinates and \(UVW\) space velocities. Though this approach is more physically accurate than simply ignoring the Galactic potential's impact on past trajectories, its inclusion may have little impact on both the bias and error of our method, for an association as young as \(\sim 24\,{\rm Myr}\) (see Figure 5). Members of NYAs have similar \(XYZ\) Galactic positions and \(UVW\) space velocities, and thus they follow similar Galactic orbits as well. Because only the members' relative positions to the average position of the NYA are considered, the Galactic potential may have little impact on traceback ages. ### Kinematic outliers Stars in the core sample (see Section 2.2) were further investigated in order to identify kinematic outliers. First, the \(XYZ\) Galactic position and \(UVW\) space velocity standard deviations are computed independently along each 6D component at every temporal step of the stars' trajectory. Stars that stray away from the core of the NYA, beyond a \(3\sigma\) threshold in either \(XYZ\) Galactic position or \(UVW\) space velocity at any point and in any component, are flagged as kinematic outliers and excluded from the computation of size metrics. This process is repeated recursively until no more stars are flagged as kinematic outliers. The robust covariance estimator developed by Pedregosa et al. (2011) as part of the scikit-learn6 Python package was then used to identify kinematic outliers independently at every temporal step of the traceback. Stars identified as outliers at least \(70\,\%\) of the time along their trajectory were flagged as kinematic outliers and ignored in the computation of all association size metrics. Scikit-learn's robust covariance estimator uses the empirical covariance matrix (\(\Sigma\)) of the stars' \(XYZ\) Galactic positions and \(UVW\) space velocities: Footnote 6: Scikit-learn’s robust covariance estimator is documented at [https://scikit-learn.org/stable/modules/covariance.html?](https://scikit-learn.org/stable/modules/covariance.html?) highlight=robust%20covariance&robust-covariance-estimation. \[\Sigma=E\left((\mathbf{x}-E(\mathbf{x}))(\mathbf{x}-E(\mathbf{x}))^{T}\right), \tag{7}\] where \(E(\mathbf{x})\) is the expected value of a random vector \(\mathbf{x}\). Since the empirical covariance matrix is sensitive to the presence of outliers in the data, it is necessary to perform outlier detection in order to compute a robust estimator of the covariance matrix. This is accomplished by computing the Mahalanobis distance (Mahalanobis, 1936) of each star. For an observation \(x_{i}\) of a distribution with an empirical covariance matrix \(\Sigma\) and a mean \(\mu\), the Mahalanobis distance (\(d_{M}\)) to the mode is given by \[d_{M}^{2}(x_{i})=(x_{i}-\mu)^{T}\Sigma^{-1}(x_{i}-\mu). \tag{8}\] This approach is an effective tool to perform outlier detection in noisy and irregular data sets because it takes into account covariances in the distribution. In summary, the distance is measured and scaled to the standard deviation along each principal component of the 6D distribution. The number of stars included in the sample is a limitation to the precision and accuracy of the traceback method. 
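Stepping back to the orbit computation itself, the backward integration at the heart of Section 3.1 can be sketched with galpy as follows. This is a minimal illustration rather than the kanya pipeline: the bundled MWPotential2014 stands in for model I of Irrgang et al. (2013), and the input observables are merely illustrative values for a nearby star:

```python
# Minimal sketch of a backward Galactic orbit integration with galpy.
# MWPotential2014 is a stand-in for Irrgang et al. (2013) model I, and the
# observables below are illustrative, not actual measurements.
import numpy as np
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [RA (deg), Dec (deg), distance (kpc), pm_RA*cos(Dec) (mas/yr),
#  pm_Dec (mas/yr), corrected radial velocity (km/s)]
star = Orbit(vxvv=[86.82, -51.07, 0.0197, 4.65, 83.1, 20.0], radec=True,
             ro=8.12 * u.kpc, vo=233.0 * u.km / u.s)
# galpy's default solar motion follows Schoenrich et al. (2010), as adopted here.

ts = np.linspace(0.0, -50.0, 501) * u.Myr  # integrate 50 Myr into the past
star.integrate(ts, MWPotential2014)

# Galactocentric rectangular coordinates along the backward orbit; the full
# analysis converts these back to heliocentric XYZ and curvilinear
# xi'eta'zeta' coordinates before computing size metrics.
print(star.x(ts)[-1], star.y(ts)[-1], star.z(ts)[-1])
```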
As one would expect from this limitation, traceback ages are more precise and accurate with a larger sample. However, as more stars are included in the sample, some will likely be kinematic outliers or unrecognized binary stars, and these will in turn both negatively affect the precision of traceback ages and create an additional bias toward younger ages. The trajectories of these stars tend to deviate significantly from the average trajectory of the association, and even just a few such outliers can dramatically lower the traceback age by artificially increasing the dispersion in \(XYZ\) Galactic positions and \(UVW\) space velocities. Therefore, it is advantageous to limit traceback samples to smaller ensembles of stars, free of such contamination. However, including more stars in the sample will decrease the minimal theoretical error of ages determined with the traceback method, because the epoch of minimal association size will be more clearly defined, provided that no kinematic outliers affect the results negatively.

### Association size metrics

The empirical covariance matrices of members of an NYA were calculated at each temporal step in both \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinate systems. The latter, identical to the system used by Miret-Roig et al. (2020), is a heliocentric curvilinear coordinate system that minimizes variations along every individual component during the backward orbital integration by moving the origin on a circular orbit around the Galactic Center at a frequency of \(\omega_{\odot}=V_{\rm LSR}/R_{\odot}=28.7\,{\rm km\,s^{-1}\,kpc^{-1}}\). In addition to the individual diagonal terms of the empirical covariance matrices, the determinant and the trace of the associated covariance matrices were investigated as plausible association size metrics, similarly to the approach taken by Miret-Roig et al. (2020). We also considered the spatial-kinematic cross-covariance terms for members at each temporal step as potential association size metrics, in both \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinate systems. Covariances between spatial and kinematic terms of a given direction are expected to be minimal at the epoch of stellar formation and naturally increase over time (e.g., see Crundall et al., 2019). Once again, the determinant, trace, and individual diagonal terms of the cross-covariance matrices were included in the size metrics under consideration. We implemented both "empirical" and "robust" versions of each association size metric based on the empirical \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrices (equation 7). While empirical metrics give each member an equal weight, robust metrics assign them a weight inversely proportional to their Mahalanobis distance (see equation 8) to the association's core. We also investigated the use of median absolute deviations (MADs) as an alternate set of association size metrics, given that the MAD depends more weakly on a small number of outliers. We computed the MAD along each spatial component, in both \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinate systems, as a way to represent the spatial extent of an NYA at each epoch. This metric is not only less sensitive to kinematic outliers than the standard deviation, it is also expected to be even less sensitive to outliers than the "robust" metrics described above, because the latter still assign a small but nonzero weight to significant outliers.
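As a concrete illustration before turning to the last family of metrics, the sketch below evaluates empirical, robust, and MAD-based metrics at a single epoch for an array of \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) positions. The inverse-Mahalanobis weighting and the minimum covariance determinant fit follow our reading of sections 3.2 and 3.3; the exact weighting scheme of kanya is assumed here rather than reproduced:

```python
# Sketch of empirical, robust, and MAD-based association size metrics at one
# traceback epoch, for an (n_stars, 3) array of xi', eta', zeta' positions.
import numpy as np
from sklearn.covariance import MinCovDet

def size_metrics(positions):
    # Empirical metrics: per-axis variance, plus determinant and trace of
    # the empirical covariance matrix (equation 7).
    cov = np.cov(positions, rowvar=False)
    empirical = {"var_xi": cov[0, 0], "det": np.linalg.det(cov),
                 "trace": np.trace(cov)}

    # Robust metrics: weights inversely proportional to the Mahalanobis
    # distance (equation 8) from a minimum covariance determinant fit.
    mcd = MinCovDet(random_state=0).fit(positions)
    d = np.sqrt(mcd.mahalanobis(positions))  # mahalanobis() returns d^2
    w = (1.0 / d) / np.sum(1.0 / d)
    mean_w = np.sum(w[:, None] * positions, axis=0)
    robust_var_xi = np.sum(w * (positions[:, 0] - mean_w[0])**2)

    # Median absolute deviation along xi', the least outlier-sensitive metric.
    mad_xi = np.median(np.abs(positions[:, 0] - np.median(positions[:, 0])))
    return empirical, robust_var_xi, mad_xi

# Repeating this at every epoch of the traceback and locating the minimum of,
# e.g., the xi' variance yields the traceback age.
rng = np.random.default_rng(0)
print(size_metrics(rng.normal(0.0, 3.0, size=(25, 3))))
```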
Finally, we computed the minimum spanning tree (MST) of the positions of members at each epoch in both \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinate systems using a Kruskal algorithm (Kruskal, 1956). MSTs are undirected graphs that link all vertices with the minimal possible total edge length, without allowing for loops. The mean edge length and the MAD of the edge lengths of the MST are used as association size metrics, which remain valid even in highly non-Gaussian geometries. This approach has the advantage of determining a characteristic size while being less sensitive to the shape of the NYA. A robust version of the mean edge length of the MST, using the same weights as other robust size metrics, was also explored.

### Error on the traceback age

The traceback age is highly dependent on which stars are included in the computation of association size metrics. To assess this error on the traceback age due to the sensitivity to sample definition, we employed a jackknife Monte Carlo approach. We used 1000 iterations, each made up of a 50 % fraction of randomly selected stars from the sample, to measure the standard deviation on the traceback age for every size metric. We also made use of a Monte Carlo approach to compute the error on the traceback age due to the measurement errors in the astrometric and kinematic data. Randomized Gaussian fluctuations with a standard deviation equal to the reported measurement errors were added to radial velocity, proper motion, and parallax measurements for 1000 iterations. The total error on the traceback age was computed with a Monte Carlo approach including both the jackknife and measurement errors.

### Simulated samples

We constructed simulated NYA samples, designed to best represent the core sample of current-day members of \(\beta\)PMG, in order to test and characterize our method's precision, accuracy, and sensitivity to several sources of bias. A model star was initialized with an \(XYZ\) Galactic position and a \(UVW\) space velocity equal to the current-day average values of members of the core sample of \(\beta\)PMG (see Table 1). This model star's trajectory was traced back 24 Myr to the birth epoch, a length of time similar to recent age estimates of \(\beta\)PMG using the isochrones and LDB methods (Malo et al., 2014). Then, a randomized Gaussian sample of 25 synthetic stars, matching the number of stars in the core sample of \(\beta\)PMG, was created, with average \(XYZ\) Galactic positions and \(UVW\) space velocities equal to those of the model star at the birth epoch, an initial \(XYZ\) Galactic position scatter of 3.0 pc, and an initial \(UVW\) space velocity scatter of 1.0 km s\({}^{-1}\), along all three axes. It is impossible to know for sure what the initial \(XYZ\) Galactic position scatter of \(\beta\)PMG was. However, the value used is similar to the current-day scatter along the \(\zeta^{\prime}\)-axis, an axis with very little growth over time. The initial \(UVW\) space velocity scatter was approximated by the average current-day \(UVW\) space velocity scatter of members of the core sample of \(\beta\)PMG, because we assume that \(UVW\) space velocities are relatively unchanged since the epoch of stellar formation for such a young NYA. We used the extreme deconvolution algorithm developed by Bovy et al. (2011) to mitigate the impact of elongation of \(UVW\) space velocities along the line of sight, due to larger errors on radial velocity measurements than on proper motion measurements.
This general algorithm can infer a \(d\)-dimensional distribution function from a set of heterogeneous and noisy samples, and it can treat uncertainties properly. When applied to the current-day \(UVW\) space velocities of members of the core sample of \(\beta\)PMG and their respective uncertainties (see Figure 4), the resulting ellipsoids offer a more accurate description of the distribution of \(UVW\) space velocities in the core sample of \(\beta\)PMG, which in turn is used to create more representative simulated samples. The simulated samples were then projected 24 Myr forward in time such that synthetic stars are located as near to the Sun as current-day members of \(\beta\)PMG. \(XYZ\) Galactic positions and \(UVW\) space velocities were transformed into observables, and Gaussian-like fluctuations were added to the true sky coordinates, parallaxes, radial velocities, and proper motions of each synthetic star in order to simulate measurement errors. A radial velocity shift was also added to simulate the bias on traceback ages due to the gravitational redshift and convective blueshift (see sections 2.3 and 2.4). The errors on astrometric and kinematic measurements and the radial velocity shifts were drawn from the real values of individual members of the core sample of \(\beta\)PMG. The 24 Myr old simulated, biased, and noisy sample is then traced back over 50 Myr, and its size is computed with the full array of association size metrics (see Section 3.3) in order to find the epoch of minimal spatial extent.

#### 3.5.1 Sensitivity to kinematic measurement errors

An important bias to account for when computing the age of a coeval group of stars with traceback methods is the impact of Gaussian-like measurement errors in astrometric and kinematic data. Such errors do not simply produce a Gaussian-like distribution of the estimated epoch of minimal association size, but rather a more complex probability distribution that is not only inflated by an additional error term but also systematically biased from the true epoch of minimal size toward the current-day epoch. The total observed Galactic position scatter (\(\sigma_{\rm observed}\)) measured by association size metrics (see Section 3.3) is made of two components: the inherent Galactic position scatter of members of an NYA (\(\sigma_{\rm inherent}\)) and an additional, artificial scatter due to the impact of measurement errors (\(\sigma_{\rm error}\)). Assuming Gaussian distributions, the total observed scatter is given by: \[\sigma_{\rm observed}^{2}(t)=\sigma_{\rm inherent}^{2}(t)+\sigma_{\rm error}^{2}(t). \tag{9}\] Hence, the total observed scatter (the scatter that is directly observed in the traceback analysis) is an approximation of the inherent scatter (the scatter we are trying to measure). Unlike the inherent scatter, which we assume is minimal at the epoch of stellar formation and grows over time, the scatter due to measurement errors is minimal at the current-day epoch and increases as stellar trajectories are traced back in time, due to the increasingly imprecise \(XYZ\) Galactic positions that result from errors on \(UVW\) space velocities, mostly on account of relatively imprecise radial velocity measurements. Adding Gaussian-like fluctuations to the current-day radial velocities of members of the core sample of \(\beta\)PMG across all directions on the sky affects the angle of convergence of the reconstructed \(UVW\) space velocities, and thus biases the epoch and \(XYZ\) Galactic position where members converge.
Therefore, traceback ages become not only less precise but also biased toward younger ages. The older the NYA, the longer the trajectories of its members must be traced back in time to find the epoch of minimal association size, and the greater the error and bias on traceback ages, limiting the traceback method to younger associations.

Figure 4: Application of the extreme deconvolution algorithm developed by Bovy et al. (2011) on the current-day \(UVW\) space velocities of members of the core sample of \(\beta\)PMG in order to mitigate the effect of the elongation of error bars along the line of sight due to relatively larger measurement errors on absolute radial velocity measurements. The dark and light blue shaded areas respectively represent the 67 % and 95 % probability ellipsoids defined by the diagonal terms of the covariance matrix. The lengths of the semi-major (a) and semi-minor (b) axes of the 95 % probability ellipsoids are indicated in the legends.

Figure 5: Bias (blue curves and circles) and error (green curves and squares) on the traceback age as a function of the initial \(XYZ\) Galactic position scatter, the initial \(UVW\) space velocity scatter, the radial velocity shift, and measurement errors on radial velocity measurements, for 24 Myr old simulated NYAs made up of 25 members each, using the \(\xi^{\prime}\) variance as the association size metric. The bias on age is defined as the average measured age offset of simulated NYAs with respect to their actual age (24 Myr), and the error on age is the standard deviation of the measured ages of simulated NYAs. Dotted and solid curves respectively represent the bias and error on the traceback age with and without the Galactic potential model I from Irrgang et al. (2013). The gray dashed line shows the actual simulated NYA's age. Simulation parameters are set to match members of the core sample of \(\beta\)PMG: measurement errors on radial velocity, proper motion, and parallax are equal to the average measurement errors of members of the core sample of \(\beta\)PMG, the initial \(XYZ\) Galactic position scatter is set to 3.0 pc, the initial \(UVW\) space velocity scatter is set to 1.0 km s\({}^{-1}\), and the bias on radial velocity is set to 0.0 km s\({}^{-1}\). The results of simulations without the Galactic potential are similar to the results with the Galactic potential taken into account, or only slightly offset, which seems to confirm our assumption that taking into account the effect of the Galactic potential has little impact on the final traceback age.

In an effort to characterize this bias on traceback ages, we constructed simulated NYA samples initialized with a range of initial \(XYZ\) Galactic position and \(UVW\) space velocity scatters. After initializing a set of synthetic samples following these distributions, we projected them forward in time and added a range of Gaussian-like fluctuations to the simulated radial velocity measurements and a range of radial velocity shifts (see Section 3.5.2). Then, the resulting biased, noisy, reconstructed \(XYZ\) Galactic positions and \(UVW\) space velocities were traced back in time to determine how the epoch of minimum association size was affected. The biases (i.e., the average offsets of traceback ages from the actual age) and errors (i.e., the widths of the distributions of traceback ages) on the traceback ages that result from this analysis are presented in Figure 5.
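The essence of this experiment can be reproduced with a deliberately simplified one-dimensional, straight-line traceback (ignoring the Galactic potential, a minor effect at these ages, as argued in Section 3.1). The sketch below mirrors the simulation parameters quoted above but is illustrative only, not the simulation code of this work; it recovers the expected shift of the epoch of minimal variance toward younger ages:

```python
# Simplified Monte Carlo illustration of the traceback bias: synthetic members
# born with a 3 pc position scatter and 1 km/s velocity scatter are evolved
# 24 Myr forward along straight lines, velocity noise (mimicking radial
# velocity errors) is added, and the epoch of minimal variance is located.
import numpy as np

rng = np.random.default_rng(1)
KM_S_TO_PC_MYR = 1.0227  # 1 km/s is about 1.0227 pc/Myr
true_age, n_stars, n_trials = 24.0, 25, 200

ages = []
for _ in range(n_trials):
    x0 = rng.normal(0.0, 3.0, n_stars)                  # birth positions (pc)
    v = rng.normal(0.0, 1.0, n_stars) * KM_S_TO_PC_MYR  # velocities (pc/Myr)
    x_now = x0 + v * true_age                           # current-day positions
    # Velocity errors (~0.3 km/s here) corrupt the reconstructed traceback.
    v_obs = v + rng.normal(0.0, 0.3, n_stars) * KM_S_TO_PC_MYR
    t = np.linspace(0.0, 50.0, 501)                     # Myr back in time
    variance = [np.var(x_now - v_obs * ti) for ti in t]
    ages.append(t[np.argmin(variance)])

ages = np.array(ages)
print(f"bias = {ages.mean() - true_age:+.1f} Myr, error = {ages.std():.1f} Myr")
```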
With _Gaia_ EDR3 data, one member included in the core sample, CD-31 16041, was automatically identified as a kinematic outlier (see Section 3.2), due to its Galactic position reaching \(4.3\sigma\) above the association's average Galactic position along the \(\xi^{\prime}\)-axis, beyond the \(3\sigma\) threshold. Its RUWE does not suggest it is an unresolved multiple system. However, with a total of 12 available radial velocity measurements from _Gaia_ DR3 and other kinematic surveys, this star is no longer considered an outlier, and it was used in the computation of association size metrics. As expected from their similar kinematics, members of the core sample of \(\beta\)PMG follow similar backward Galactic orbits. In Figure 7, the dispersion along the \(\xi^{\prime}\)-axis reaches a clearly defined minimum value, which corroborates our assumption that the wider \(UVW\) space velocity dispersion of members of \(\beta\)PMG along the \(U\)-axis would result in a greater change in association size over time. In contrast, no clear minimum is observed along the \(\eta^{\prime}\)- and \(\zeta^{\prime}\)-axes. As expected, members follow sinusoidal trajectories along the \(\zeta^{\prime}\)-axis (perpendicular to the Galactic plane) and do not reach a minimum at the epoch of stellar formation. Because members do not clearly converge along these two directions, there is little useful data along these two axes for the computation of the traceback age of \(\beta\)PMG.

### Traceback age of the \(\beta\) Pictoris Moving Group

Figure 8 shows several association size metrics back to 35 Myr in the past for members of the core sample of \(\beta\)PMG. The traceback ages of \(\beta\)PMG for all association size metrics tested in this work are presented in Table 7. These results confirm that, due to the wider \(UVW\) space velocity dispersion of members of \(\beta\)PMG along the \(U\)-axis (\(\sigma_{U}=1.66\,\mathrm{km\,s^{-1}}\)), size metrics along the \(X\)- and \(\xi^{\prime}\)-axes show greater contrast (i.e., the relative change from the current-day epoch to the epoch of minimal association size), and reach a minimum value at a more distant epoch when compared to size metrics along the \(Y\)- (\(\sigma_{V}=0.52\,\mathrm{km\,s^{-1}}\)), \(\eta^{\prime}\)-, \(Z\)- (\(\sigma_{W}=0.73\,\mathrm{km\,s^{-1}}\)), or \(\zeta^{\prime}\)-axes, consistent with the result from our simulated samples (see Section 3.5 and Figure 5). Size metrics along the \(Z\)- and \(\zeta^{\prime}\)-axes are safely ignored, due to the lack of convergence along these axes. Compound size metrics, which include data along all axes, such as the determinant and trace of the \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrix, or the total \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) MAD, reach a minimum value at a slightly older epoch but with an inferior contrast, as a result of the narrower \(UVW\) space velocity dispersion of members of the core sample of \(\beta\)PMG along the \(\eta^{\prime}\)- and \(\zeta^{\prime}\)-axes. We also note that size metrics along the \(\xi^{\prime}\)-axis have a greater contrast than those based on the \(X\)-axis. As a result, we focus the remainder of our analysis on size metrics along the \(\xi^{\prime}\)-axis. The contrast is slightly higher in the case of robust size metrics than equivalent empirical size metrics, but the epoch of minimal association size is similar and their jackknife error is significantly larger.
This is an expected result for robust size metrics, due to the smaller number of members used by these metrics. A similar observation can be made for size metrics based on the spatial-kinematic cross-covariance matrices such as the \(X-U\) and \(\xi^{\prime}-\dot{\xi}^{\prime}\) cross-variances: a minimal value is reached at the same epoch as for their counterparts based on the \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrices, and the contrast is the highest of any size metric tested in this study. However, the jackknife error of the \(\xi^{\prime}-\dot{\xi}^{\prime}\) cross-variance is much larger, suggesting that it is unsuitable for this traceback analysis. Size metrics based on the MAD offer contrasts similar to those of their counterparts based on the spatial-kinematic cross-covariance matrices. However, their total errors are larger than those of size metrics based on the empirical \(XYZ\) and \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrices. As for size metrics based on the use of an MST, we observe no difference with respect to those based on the \(XYZ\) or \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) coordinate systems. This result is not too surprising, given how MSTs are less sensitive to the shape of the NYA and only measure the change in distances between members. Nevertheless, size metrics based on the use of an MST are excluded from the current analysis, due to their low contrast.

Figure 6: Backward orbital integration of individual members of the core sample of \(\beta\)PMG in \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinates, back to 50 Myr into the past. Black circles indicate the members' positions at the current epoch, and blue circles indicate their positions at the epoch of minimal association size, as measured by the variance along the \(\xi^{\prime}\)-axis. This figure is adapted from Figure 3 of Miret-Roig et al. (2020).

We also tested the impact of taking into account the Galactic potential, correcting the radial velocity shifts, or minimizing the impact of the sample contamination by kinematic outliers and multiple systems on the computation of accurate traceback ages for a young NYA like \(\beta\)PMG. We find traceback ages \(\sim 1.5\) Myr younger without considering the impact of the Galactic potential. As expected based on our discussion in Section 3.1, the impact of the Galactic potential on traceback ages is small, but not insignificant. The impact of the choice of the Galactic potential is negligible: if we use the Galactic potential from Bovy (2015) instead of model I from Irrgang et al. (2013), the final age difference is \(<0.2\) Myr, confirming the result from Miret-Roig et al. (2020). If radial velocity measurements are not corrected, the unaccounted impacts of gravitational redshift and convective blueshift cause our traceback ages to be \(\sim 2\) Myr younger. The biggest change, however, comes from mitigating contamination by kinematic outliers and multiple systems: with the full input sample, traceback ages for \(\beta\)PMG are \(\sim 3\) Myr younger. We found a raw traceback age of \(19.8\pm 2.5\) Myr for \(\beta\)PMG using the \(\xi^{\prime}\) variance as the size metric. We computed a correction for the bias due to measurement errors for every size metric, with simulated samples representative of \(\beta\)PMG using the same parameters as in Figure 5, in order to account for the limited number of members and the impact of measurement errors in astrometric and kinematic data (see Section 3.5).
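For illustration, a minimal sketch of how an association size metric and its jackknife error could be evaluated on traced-back positions is given below (straight-line traceback, positions in pc and velocities in pc/Myr). The 500-iteration, 50 % jackknife settings follow the description in this section, but the function names and the code itself are our own simplification, not the kanya implementation.

```python
import numpy as np

def traced_back_positions(xyz_now, uvw_obs, ages):
    """Straight-line traceback: returns an array of shape (n_epochs, n_members, 3)."""
    return xyz_now[None] - ages[:, None, None] * uvw_obs[None]

def xi_variance(frame):
    return frame[:, 0].var()  # variance along the xi'-axis

def traceback_age(xyz_now, uvw_obs, metric=xi_variance,
                  ages=np.linspace(0.0, 35.0, 701)):
    """Epoch at which the chosen size metric reaches its minimum."""
    sizes = [metric(f) for f in traced_back_positions(xyz_now, uvw_obs, ages)]
    return ages[int(np.argmin(sizes))]

def jackknife_age(xyz_now, uvw_obs, n_iter=500, frac=0.5, rng=None):
    """Jackknife Monte Carlo: repeat the measurement on random 50 % subsamples."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(xyz_now)
    k = int(frac * n)
    ages = [traceback_age(xyz_now[idx], uvw_obs[idx])
            for idx in (rng.choice(n, k, replace=False) for _ in range(n_iter))]
    return float(np.mean(ages)), float(np.std(ages))
```

Other metrics discussed in the text (the determinant or trace of the covariance matrix, a per-axis MAD) can be dropped in as alternative `metric` callables without changing the rest of the pipeline.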
Figure 7: Backward orbital integration of individual members of the core sample of \(\beta\)PMG in \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) Galactic coordinates as a function of time, back to 50 Myr into the past. The average changes in position are the result of the average orbit in the Galaxy of members of the core sample of \(\beta\)PMG, and the residual changes shown in all three panels are due to individual variations in \(UVW\) space velocity among members. Black circles indicate the members' positions at the current epoch, and blue circles indicate their positions at the epoch of minimal association size, as measured by the variance along the \(\xi^{\prime}\)-, \(\eta^{\prime}\)- and \(\zeta^{\prime}\)-axes. The vertical dashed line represents the epoch of smallest spatial scatter using the same three association size metrics. The \(\xi^{\prime}\) variance provides the most signal due to the greater velocity dispersion along this axis, whereas the \(\zeta^{\prime}\) variance provides no useful data. This is an expected result of the Galactic orbits along this axis, which follow sinusoidal paths perpendicular to the Galactic plane.

The measurement errors that were applied to simulated stars were taken from one-to-one associations between simulated stars and real members of the core sample of \(\beta\)PMG. We find that the measurement errors specific to our sample alone cause a \(\sim 0.6\) Myr bias toward younger ages, using the \(\xi^{\prime}\) variance as the size metric. Therefore, we apply a 0.6 Myr correction to our raw traceback age and find a final, corrected traceback age of \(20.4\pm 2.5\) Myr for \(\beta\)PMG.

### Comparison with other age-dating methods

While our corrected traceback age estimate for \(\beta\)PMG is significantly older than previous traceback age estimates (\(11-13\) Myr; Ortega et al., 2002; Song et al., 2003; Ortega et al., 2004; Miret-Roig et al., 2018), it is compatible with several more recent studies. Figure 9 shows the age distribution of \(\beta\)PMG using the corrected \(\xi^{\prime}\) variance as the association size metric, compared to other recent kinematic age estimates for \(\beta\)PMG by Crundall et al. (2019) and Miret-Roig et al. (2020). All three results are compatible with each other despite the differences in samples and methods. The result from Crundall et al. (2019), who reported a kinematic age of \(17.8\pm 1.2\) Myr for \(\beta\)PMG, can be more directly compared to our results using the \(X-U\) spatial-kinematic cross-covariance matrix, because this is the same metric indirectly used by the Chronostar method (the age of their forward model is constrained by the assumption that the initial spatial-kinematic model of \(\beta\)PMG has no spatial-kinematic cross-covariances). Using this size metric, we report a slightly older traceback age of \(19.9\pm 3.1\) Myr, including a 1.0 Myr correction to account for the biases on traceback ages due to measurement errors. This difference is mainly due to the more reliable radial velocity measurements from _Gaia_ DR3 and other kinematic surveys used in our study. Miret-Roig et al. (2020) reported traceback ages of \(18.5^{+2.0}_{-2.4}\) Myr and \(17.5^{+3.5}_{-2.9}\) Myr for \(\beta\)PMG by minimizing the trace and the determinant of the \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrix, respectively, computed with scikit-learn's robust estimator (Pedregosa et al., 2011). This is compatible with our result using the same association size metrics.
We report corrected traceback ages of \(18.6\pm 4.0\) Myr and \(21.3\pm 6.5\) Myr using the same size metrics, respectively, although the errors on the traceback age are significantly larger. If instead we use the exact same sample of 26 _bona fide_ members as described in Miret-Roig et al. (2020), along with our robust version of the trace and determinant of the \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrix, we find corrected traceback ages of \(21.3\pm 8.4\) Myr and \(22.4\pm 8.1\) Myr, respectively. However, in our analysis, we excluded these specific size metrics in favor of size metrics with greater contrast and lower jackknife errors, such as the \(\xi^{\prime}\) variance. The difference between our result and that of Miret-Roig et al. (2020) is likely due not to differences in samples, but rather to the methods that were used to compute the spatial extent of the association and the Galactic orbits of its members.

Figure 8: \(\beta\)PMG size as a function of time, back to 35 Myr into the past, using only members of the core sample. Several association size metrics are used: the determinant and trace of the empirical \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) covariance matrix, as well as the variance along the \(\xi^{\prime}\)-, \(\eta^{\prime}\)- and \(\zeta^{\prime}\)-axes (top panel), and the total \(\xi^{\prime}\eta^{\prime}\zeta^{\prime}\) median absolute deviation (MAD) and the MAD along the same three axes (bottom panel). The epoch of minimal association size for each metric and its associated error are indicated in the legend.

Nevertheless, all of the ages measured by traceback analyses remain younger than the age estimates of \(\beta\)PMG determined using other approaches, such as the isochrones and LDB methods (\(20-26\) Myr; Mentuch et al., 2008; Yee and Jensen, 2010; Malo et al., 2014; Mamajek and Bell, 2014; Binks and Jeffries, 2014; Bell et al., 2015; Galindo-Guil et al., 2022). While precise astrometric and kinematic measurements, clean uncontaminated samples, and corrections applied to radial velocity measurements to account for the gravitational redshift and convective blueshift helped bridge the gap with other methods, kinematic ages still remain younger than those calculated using the isochrones and LDB methods by \(\sim 1-3\) Myr. This difference may be explained by several factors. First, there may still be contamination in the core sample of \(\beta\)PMG, or a more robust way of measuring the spatial extent of the association over time might be necessary. However, the issue could also be more fundamental. Based on ages measured for star-forming regions such as Tau-Aur (Krolikowski et al., 2021), members may remain gravitationally bound to interstellar dust and gas for less than \(\sim 2-3\) Myr after their formation. This would bias traceback ages, regardless of the association size metric used, because the initial assumption that members follow independent Galactic orbits would only be true once the initial cloud has been dissipated and members are free of the influence of the rest of the association. In other words, the traceback method would measure the time since members of the NYA became gravitationally unbound, not the time since stellar formation. Such a systematic difference could potentially be further tested if greater kinematic accuracy allows us to measure the traceback ages of older associations in the near future, as we may expect to find a similar offset between traceback ages and other methods.
## 5 Conclusions

In this study, we created a numerical tool capable of deriving the age of an NYA based solely on astrometric and kinematic measurements, by tracing back the Galactic orbits of a sample of stars and evaluating the size of the association with multiple metrics. The kanya Python package takes into account several observational biases, including the bias on traceback ages due to measurement errors in astrometric and kinematic data, as well as the biases on absolute radial velocity measurements due to the gravitational redshift and convective blueshift of the star. Our results confirm that it is crucial to compensate for these biases and limit sample contamination by unresolved multiple systems and other kinematic outliers, in order to compute reliable traceback age estimates. We applied our method to a core sample of 25 members of \(\beta\)PMG, which were assembled from the literature and include data from the _Gaia_ DR3 catalog, and we found that minimizing the variance along the \(\xi^{\prime}\)-axis yields the smallest random and systematic errors, due to the wider \(UVW\) space velocity dispersion of members of \(\beta\)PMG along the \(U\)-axis, which tends to maximize its spatial growth along the \(\xi^{\prime}\)-axis over time. We found a corrected traceback age of \(20.4\pm 2.5\) Myr, a result compatible with other recent kinematic ages found by other studies, but still slightly lower, by \(\sim 1-3\) Myr, than ages obtained using either the isochrones or LDB methods. In future works, we plan to apply our traceback method to other known NYAs. Results will also be improved by new data from dedicated radial velocity surveys and future releases of the _Gaia_ catalog, which has dramatically increased the number of stars with precise radial velocity measurements. More robust NYA modeling and association size metrics may also help further bridge the gap with ages derived using non-kinematic methods.

Figure 9: Traceback age distribution of \(\beta\)PMG with 500 jackknife Monte Carlo iterations using a 50 % fraction, computed by minimizing the variance along the \(\xi^{\prime}\)-axis with a correction to account for the bias on traceback ages due to measurement errors (green). Our age estimate, \(20.4\pm 2.5\) Myr, is compatible with other recent kinematic ages, i.e., \(18.5^{+2.0}_{-2.4}\) Myr (Miret-Roig et al., 2020, orange) and \(17.8\pm 1.2\) Myr (Crundall et al., 2019, blue). However, all three age estimates remain incompatible with the range of ages for \(\beta\)PMG computed with the isochrones or LDB methods (Mamajek and Bell, 2014, light gray shaded area).

## 6 Acknowledgments

We would like to thank Miret-Roig, N. and Mamajek, E. for their assistance, useful discussions, and helpful answers to our inquiries. J. G. and R. D. acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference numbers RGPIN-2021-03121 and RGPIN-2017-06777, respectively. This work was partially carried out under a Banting grant from NSERC.
This work has made use of: the SIMBAD database and VizieR catalog access tool, operated at the Centre de Données astronomiques de Strasbourg, France (Ochsenbein et al., 2000); data products from the Two Micron All Sky Survey (_2MASS_), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center (IPAC)/California Institute of Technology (Caltech), funded by the National Aeronautics and Space Administration (NASA), and the National Science Foundation (Skrutskie et al., 2006; DOI: 10.26131/IRSA2); data from the European Space Agency (ESA) mission _Gaia_ (Gaia Collaboration et al., 2016). _Gaia_ data are being processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC; [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement (MLA). The _Gaia_ mission website is [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia). The _Gaia_ archive website is [https://archives.esac.esa.int/gaia](https://archives.esac.esa.int/gaia). D.C. wrote the codes and the manuscript, and generated figures and tables; J.G. compiled the list of candidates and built figures 1-3; R.D. led the analysis and provided general comments.

_Software:_ BANYAN \(\Sigma\) (Gagné et al., 2018), galpy (Bovy, 2015), Scikit-learn (Pedregosa et al., 2011), Extreme deconvolution (Bovy et al., 2011), Astropy (Astropy Collaboration et al., 2013, 2018, 2022)
2303.14272
Learning to Operate in Open Worlds by Adapting Planning Models
Planning agents are ill-equipped to act in novel situations in which their domain model no longer accurately represents the world. We introduce an approach for such agents operating in open worlds that detects the presence of novelties and effectively adapts their domain models and consequent action selection. It uses observations of action execution and measures their divergence from what is expected, according to the environment model, to infer existence of a novelty. Then, it revises the model through a heuristics-guided search over model changes. We report empirical evaluations on the CartPole problem, a standard Reinforcement Learning (RL) benchmark. The results show that our approach can deal with a class of novelties very quickly and in an interpretable fashion.
Wiktor Piotrowski, Roni Stern, Yoni Sher, Jacob Le, Matthew Klenk, Johan deKleer, Shiwali Mohan
2023-03-24T21:04:16Z
http://arxiv.org/abs/2303.14272v1
# Learning to Operate in Open Worlds by Adapting Planning Models

###### Abstract.

Planning agents are ill-equipped to act in novel situations in which their domain model no longer accurately represents the world. We introduce an approach for such agents operating in open worlds that detects the presence of novelties and effectively adapts their domain models and consequent action selection. It uses observations of action execution and measures their divergence from what is expected, according to the environment model, to infer existence of a novelty. Then, it revises the model through a heuristics-guided search over model changes. We report empirical evaluations on the CartPole problem, a standard Reinforcement Learning (RL) benchmark. The results show that our approach can deal with a class of novelties very quickly and in an interpretable fashion.

## 1. Introduction

Artificial intelligence and machine learning research on sequential decision-making usually relies on the _closed world_ assumption. That is, all relevant characteristics of the environment are known ahead of deployment, during agent design time. For a decision-making agent that relies on automated planning techniques, knowledge about environmental characteristics is encoded explicitly as a domain model (description of actions, events, processes) that governs the agent's beliefs about the environment's dynamics. In an _open world_, however, the characteristics of the environment often change while the agent is operational (Brock et al., 2015). Such changes -- _novelties_ -- can cause a planning agent to fail catastrophically as its knowledge of the environment may become incomplete or incorrect. We explore how planning agents can robustly handle such novelties in an open world. Agents following our design use the planning domain model also to evaluate whether observed outcomes diverge from what is expected under the plans they generate. If the divergence is significant, a novelty is inferred and accommodated through heuristic search. This approach is applicable to planning agents implementing various levels of PDDL. Results in this paper are from a system implemented using PDDL+ (Brock et al., 2015) for CartPole (Brock et al., 2015), a classic control problem.

## 2. Approach

Figure 1 shows the proposed agent design and the novelty reasoning process.

Figure 1: Diagram of novelty reasoning. Solid lines denote the planning process and dotted lines denote domain model revision.

The agent interacts with its environment in a sequence of episodes, where each episode is a set of actions taken by the agent to reach a terminal state. At some episode, a novelty is introduced and the environment changes; the agent is oblivious to the existence, timing, and nature of the introduced novelty. At an episode's beginning, the agent accepts the current state \(s_{t}\) and creates a corresponding planning problem \((s_{t},G)\), which is then paired with the domain model \(D\). Then, it uses a planner to solve the problem to obtain plan \(\pi\) and attempts to execute it in the environment. During execution, it stores the observed trajectory \(\tau\) as a list of \(\langle s_{t},a,s_{t+1}\rangle\) tuples. At the episode's end, it computes an _inconsistency score_ for the current model \(D\) by comparing the expected state trajectory with the observed execution trace \(\tau\). Formally, let \(S_{\tau}\) be the sequence of states in the observations and \(S_{\pi,D}\) be the expected sequence of states obtained by simulating the generated plan \(\pi\) with the domain model \(D\).
Let \(S_{x}^{i}\) denote the \(i^{th}\) state in the state sequence \(S_{x}\). The inconsistency score is computed as \(C_{\pi,D,\tau}=\sum_{i}\gamma^{i}\cdot\|S_{\tau}^{i}-S_{\pi,D}^{i}\|\), where \(0<\gamma<1\) is a discount factor intended to limit the impact of sensing noise. If the inconsistency score exceeds a set threshold \(C_{th}\), the agent infers that its domain model \(D\) has become inconsistent with the novel environment characteristics. Then, it initiates the _search-based model repair_ process described in Algorithm 1 to adjust \(D\) accordingly. Algorithm 1 works by searching for a _domain repair_ \(\Phi\), which is a sequence of model modifications that, when applied to the agent's internal domain \(D\), returns a domain \(D^{\prime}\) that is consistent with observations. To find such a repair, the algorithm accepts as input a set of basic _Model Manipulation Operators_ (MMOs), denoted \(\{\varphi\}=\{\varphi_{0},\varphi_{1},...,\varphi_{n}\}\). Each MMO \(\varphi_{i}\in\{\varphi\}\) represents a possible change to the domain. A domain repair \(\Phi\) is a sequence of one or more basic MMOs \(\varphi_{i}\in\{\varphi\}\). An example MMO is to add an amount \(\Lambda\in\mathbb{R}\) to a numeric domain fluent. After this repair, the agent moves on to the next episode and uses the updated internal domain model \(D^{\prime}\) to solve the subsequent tasks. It may take a few repair steps to find a consistent domain model because a single trajectory may not provide enough information to find the correct repair.

## 3 Results

We evaluated our approach using a standard implementation of CartPole [1], where the task is to balance the pole in the upright position for \(n=200\) steps by pushing the cart either left or right. The environment provides information on the velocities and positions of the cart and the pole (4-tuple). The domain's system dynamics are defined by several parameters: mass of the cart, mass of the pole, length of the pole, gravity, angle limit, cart limit, push force.

```
Input : \(\{\varphi\}\): a set of basic MMOs; \(D\): the original PDDL+ domain; \(\pi\): plan generated using \(D\); \(\tau\): a trajectory; \(C_{th}\): consistency threshold
Output : \(\Phi_{best}\), a domain repair for \(D\)
OPEN = \(\{\emptyset\}\); \(C_{best}\leftarrow\infty\); \(\varphi_{best}\leftarrow\emptyset\)
while \(C_{best}\geq C_{th}\) do
    \(\ell\leftarrow\) pop from OPEN
    foreach \(\varphi_{i}\in\{\varphi\}\) do
        \(\Phi^{\prime}\leftarrow\ell\cup\varphi_{i}\)  /* Composes a domain repair */
[MISSING_PAGE_POST]
```

First, the impact of the introduced novelties on the planning agents was not as drastic as on the DQN agents. It is because planning agents use models that are modular, composable and are written in a general way. In the novelty setting, a subset of model elements are still relevant. Second, our approach, the planning-adaptive agent, learns _quickly_ and recovers optimal performance in around 20 episodes. This observation supports our central thesis: model-space search enables quick adaptation in dynamic environments because it can localize the learning to specific parts of the explicit model while other parts are _transferred_. In contrast, a DQN agent has to learn new network parameters afresh. Finally, the adaptations are _interpretable_; they are expressed in the same language as the original model, enabling a model designer to inspect what the system has learned. Our method found the following example repairs for CartPole. Each element in the repair is a numeric domain fluent and the reported value is a change from its nominal value.
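To make the episode-level consistency check concrete, here is a minimal Python sketch of the inconsistency score \(C_{\pi,D,\tau}\) and a plain breadth-first variant of the repair search. It is a simplified stand-in for Algorithm 1, not the authors' implementation: the paper uses a heuristics-guided search, and `domain`, `mmos` (callables that modify a domain), and `simulate` (which returns the expected state sequence for a plan) are placeholder interfaces of our own.

```python
import numpy as np

def inconsistency_score(observed, expected, gamma=0.99):
    """C_{pi,D,tau} = sum_i gamma^i * ||S_tau^i - S_{pi,D}^i||."""
    return sum(gamma ** i * np.linalg.norm(np.subtract(o, e))
               for i, (o, e) in enumerate(zip(observed, expected)))

def repair_domain(domain, mmos, plan, observed, simulate, c_th, max_depth=3):
    """Breadth-first search over MMO sequences (a simplified Algorithm 1)."""
    frontier = [()]                                  # start from the empty repair
    while frontier:
        prefix = frontier.pop(0)
        for mmo in mmos:
            candidate = prefix + (mmo,)
            repaired = domain
            for phi in candidate:                    # apply the MMOs in sequence
                repaired = phi(repaired)
            expected = simulate(repaired, plan)      # expected states under D'
            if inconsistency_score(observed, expected) < c_th:
                return candidate, repaired           # consistent domain found
            if len(candidate) < max_depth:
                frontier.append(candidate)
    return (), domain                                # no repair within the depth bound
```

For the CartPole domain above, an MMO could be as simple as `lambda d: {**d, "gravity": d["gravity"] + 0.5}`, i.e., a bounded additive change to one numeric fluent, matching the MMO example given in the text.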
## Acknowledgements The work presented in this paper was supported in part by the DARPA SAIL-ON program under award number HR001120C0040. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2301.02269
Taming the Rotating Wave Approximation
The interaction between light and matter is one of the oldest research areas of quantum mechanics, and a field that just keeps on delivering new insights and applications. With the arrival of cavity and circuit quantum electrodynamics we can now achieve strong light-matter couplings which form the basis of most implementations of quantum technology. But quantum information processing also has high demands requiring total error rates of fractions of percentage in order to be scalable (fault-tolerant) to useful applications. Since errors can also arise from modelling, this has brought into center stage one of the key approximations of quantum theory, the Rotating Wave Approximation (RWA) of the quantum Rabi model, leading to the Jaynes-Cummings Hamiltonian. While the RWA is often very good and incredibly useful to understand light-matter interactions, there is also growing experimental evidence of regimes where it is a bad approximation. Here, we ask and answer a harder question: for which experimental parameters is the RWA, although perhaps qualitatively adequate, already not good enough to match the demands of scalable quantum technology? For example, when is the error at least, and when at most, 1%? To answer this, we develop rigorous non-perturbative bounds taming the RWA. We find that these bounds not only depend, as expected, on the ratio of the coupling strength and the oscillator frequency, but also on the average number of photons in the initial state. This confirms recent experiments on photon-dressed Bloch-Siegert shifts. We argue that with experiments reporting controllable cavity states with hundreds of photons and with quantum error correcting codes exploring more and more of Fock space, this state-dependency of the RWA is increasingly relevant for the field of quantum computation, and our results pave the way towards a better understanding of those experiments.
Daniel Burgarth, Paolo Facchi, Robin Hillier, Marilena Ligabò
2023-01-05T19:02:24Z
http://arxiv.org/abs/2301.02269v2
# Taming the Rotating Wave Approximation ###### Abstract The interaction between light and matter is one of the oldest research areas of quantum mechanics, and a field that just keeps on delivering new insights and applications. With the arrival of cavity and circuit quantum electrodynamics we can now achieve strong light-matter couplings which form the basis of most implementations of quantum technology. But quantum information processing also has high demands requiring total error rates of fractions of percentage in order to be scalable (fault-tolerant) to useful applications. Since errors can also arise from modelling, this has brought into center stage one of the key approximations of quantum theory, the Rotating Wave Approximation (RWA) of the quantum Rabi model, leading to the Jaynes-Cummings Hamiltonian. While the RWA is often very good and incredibly useful to understand light-matter interactions, there is also growing experimental evidence of regimes where it is a bad approximation. Here, we ask and answer a harder question: for which experimental parameters is the RWA, although perhaps qualitatively adequate, already not good enough to match the demands of scalable quantum technology? For example, when is the error at least, and when at most, 1%? To answer this, we develop rigorous non-perturbative bounds taming the RWA. We find that these bounds not only depend, as expected, on the ratio of the coupling strength and the oscillator frequency, but also on the average number of photons in the initial state. This confirms recent experiments on photon-dressed Bloch-Siegert shifts. We argue that with experiments reporting controllable cavity states with hundreds of photons and with quantum error correcting codes exploring more and more of Fock space, this state-dependency of the RWA is increasingly relevant for the field of quantum computation, and our results pave the way towards a better understanding of those experiments. The Rotating Wave Approximation (RWA) is one of the oldest and most important approximations in Quantum Theory. The starting point is at the birthplace of Nuclear Magnetic Resonance (NMR) in 1938, when Rabi and co-authors realized that rather than using rotating fields, "it is more convenient experimentally to use an oscillating field, in which case the transition probability is approximately the same for weak oscillating fields near the resonance frequency" [1]. This was significant: Rabi had shown earlier that the Schrödinger equation for rotating fields is easily solved analytically [2]. This approximation was a crucial step in understanding driven quantum dynamics, as the time-dependent Schrödinger equation is notoriously hard to solve. Perhaps this is the key reason for the popularity [3] of the RWA: it provides understanding and intuition of resonant driving. In fact, the importance of these ideas and the resulting techniques of NMR led to Rabi being awarded the Nobel Prize in Physics in 1944. But what justified the approximation, and how did Rabi get to it? Primarily reporting an experimental finding, Rabi himself does not provide justification, but over the last 80 years many different theoretical methods were used to provide justification and deeper understanding of the RWA (the literature is extensive, but see for instance [4, 5, 6, 7, 8]). Rabi described the atom as a two-level system and the field classically. In the full quantum description of light-matter interaction the situation is much more complicated.
By the 1960s Quantum Electrodynamics was well established, and the electromagnetic field is now itself a quantum system described by unbounded operators. Jaynes and Cummings [9] developed the full quantum mechanical version of the Rabi model (now called the Quantum Rabi Model) \[H=\frac{\Omega}{2}\sigma_{z}+\omega a^{\dagger}a+\lambda\sigma_{x}(a+a^{\dagger}), \tag{1}\] and applied the RWA to obtain the Jaynes-Cummings model \[H_{\text{RWA}}=\frac{\Omega}{2}\sigma_{z}+\omega a^{\dagger}a+\lambda(\sigma_{+}a+\sigma_{-}a^{\dagger}). \tag{2}\] Here, \(\Omega\) is the energy difference between the two states of the atom, \(\omega\) the light frequency and \(\lambda\) the strength of the light-matter coupling; we always use \(\hbar=1\). Due to its simplicity and wide range of applicability, the Jaynes-Cummings model is the main workhorse of light-matter interactions and, by extension, quantum technology. For an excellent overview of its scope see [10]. While at the time of the original paper the RWA was rather natural, given that the bare coupling between matter and light tends to be extremely weak, in cavity and circuit QED nowadays it is well understood that the effective coupling can be enhanced to a level where the RWA breaks down. This is often referred to as the Ultrastrong Coupling regime. For examples of experiments, see [11] and [12]; for a recent review see [13]. While there is no rigorous derivation of the RWA for the Jaynes-Cummings model to date, the common lore is that the ratio \(g\equiv\lambda/\omega\) between the light-matter coupling and the light frequency is the key parameter [10]. This is motivated by perturbative arguments and of course backed up by extensive numerical studies and simulations. For a summary of the different regimes see Table 1.1 in [10], where it is argued that for \(g\approx 0.1\) the RWA breaks down. On the other hand, this picture changes for high photon numbers. Indeed, Walls showed [14] that the Bloch-Siegert shift (taken as a sign of the breakdown of the RWA) scales with the number of photons. This was also observed experimentally [15]. See also [16] for a perturbative argument that \(\lambda\sqrt{\langle a^{\dagger}a\rangle}\ll\omega\) is a more relevant condition in that regime. What this means is that the quality of the RWA does not only depend on the parameters of the model, but also on the initial state of the system. See Fig. 2 for a numerical example. Indeed, we prove that there are short times [17] \(t\leq\pi/\omega\) for which \[\|e^{-itH}-e^{-itH_{\rm RWA}}\|\geq\frac{1}{6} \tag{3}\] for _any_ parameter value. This should be considered a large error, because the largest possible difference between two unitaries is 2, and because modern quantum technology demands errors well below 1% (see below). Does this mean that the RWA is wrong? No, because we also show that for any state \(\varphi\) and any time \(t\), \[e^{-itH}\varphi-e^{-itH_{\rm RWA}}\varphi\to 0,\quad\mbox{as }g\to 0. \tag{4}\] This is our main result, providing a rigorous justification of the RWA. It does not contradict Eq. (3), but is a typical phenomenon of unbounded Hamiltonians such as \(H\) and \(H_{\rm RWA}\): there is no norm convergence, only state-dependent convergence. This is one of the key technicalities that make it hard to apply standard perturbative arguments for the RWA. Let us discuss the relevance of this photon-dependence in the context of quantum technology. For fault-tolerant quantum computation, very high fidelities with error rates \(<10^{-3}\) are required [18].
Moreover, modern qubit designs such as GKP [19] and CAT qubits use cavity states and explore high numbers of photons. In particular, CAT states have been created with about 100 photons [20]. It is therefore necessary to have a good handle on the error of the RWA. Since quantum algorithms also invoke dynamics, it is not sufficient to simply match spectral properties, as is usually done, but we need to bound the difference in evolution operators. The interesting evolution-time regime here is that of short times up to \(\pi/\omega\): already there, the RWA dynamics can deviate substantially. We show that the maximal error \(\epsilon_{n}\) that the RWA has for an \(n\)-photon Fock state in a short time interval up to \(\pi/\omega\) is bounded by \[5g\sqrt{n+3}\geq\epsilon_{n}\geq\frac{1}{6}-\frac{1}{216g^{2}n}-\frac{7}{12n}, \tag{5}\] proving that the RWA becomes good for small \(g\) but bad for large \(n\). Tighter and more general bounds and the full proofs of our results are provided in the Appendix. See also Fig. 2 for numerical examples of these refined bounds. These bounds prove that \(g\sqrt{n}\) is the right parameter (as anticipated by the perturbative argument [16]) for the validity of the RWA for Fock states. For more general states, see the Appendix. These bounds will be useful for experimentalists in quantum information to judge whether they should apply the RWA or not.

Figure 1: Light-matter interactions have been a major driver in quantum physics for half a century. Often, atoms are placed into cavities to amplify their effective coupling strength with photons. Here, we show that the rotating wave approximation is not only determined by such coupling strength and the frequency of the driving, but also by the number of photons (naively depicted as golden spheres) in the cavity.

Figure 2: Bounding the error of the RWA. We consider a Fock state evolving under the quantum Rabi and the Jaynes-Cummings model, respectively. We show our analytical upper and lower bounds and the exact numerical norm difference between the two models. We see that the error grows with the photon number, and that the bounds provide a good understanding of the scaling (other parameters here \(g=\frac{\lambda}{\omega}=\frac{1}{100}\), \(\Delta=0\), \(t\approx 0.04/\omega\)).

We would now like to explain the idea which allows us to tame the RWA. Although there are many different conceptual ideas trying to justify the RWA, almost all of them agree that 'highly oscillatory terms' in a Hamiltonian may sometimes be discarded to a good approximation. But why? Interestingly, some have argued that such terms are not observable, since measurements take finite time. This is plausible; however, it turns out that even if measurements are instantaneous, the RWA can be taken. Others argue on the basis of first-order perturbation theory, where the term involves an integral over the Hamiltonian. This gives a good qualitative picture but does not allow for a rigorous and precise error estimate. In a more recent work [8] a different route was taken: by an integration by parts, the difference between two evolutions can indeed be written in terms of an integral over the difference of their generating Hamiltonians, where fast oscillations average out. This allows one to prove and provide bounds for the RWA, but only in the finite-dimensional case. Here, we extend this integration by parts to unbounded operators.
In the general case, this is hard, so we exploit several structural properties of the specific problem of the quantum Rabi model to simplify the analysis. First, both \(H\) and \(H_{\text{RWA}}\) are time-independent, so we can use the rich theory of semigroups. Secondly, \(H_{\text{RWA}}\) has many conserved quantities and can only increase and decrease the photon number by one. Finally, all involved quantities are well-defined on the subspace of rapidly decreasing functions and leave it invariant, which allows us to work on that subspace. We refer to the Appendix for the mathematical details. To summarize, after decades of work and conjectures around the RWA for the highly relevant quantum Rabi model, we now have a rigorous proof and, in addition, a complete quantitative measure in terms of lower and upper bounds on the error of the approximation. In particular, this confirms the experimental and numerical findings that the error becomes large for large ratio \(g\) between light-matter coupling and light frequency or for large photon numbers, and hence the dependence on the state of the system. In practice, for given fixed photon number and given maximally permissible error this tells us how small \(g\) has to be in order for the RWA to work. Since experiments are working with ever growing systems, our results will be of immediate relevance to the understanding and setup of those experiments and further developments in quantum technology. We expect that the methods developed for our proof can be applied to tame the RWA for other interesting models, such as systems with multiple modes, nonlinearities and other descendants of the Jaynes-Cummings model [10]. ## Acknowledgements DB acknowledges funding by the Australian Research Council (project numbers FT190100106, DP210101367, CE170100009). PF and ML were partially supported by the Italian National Group of Mathematical Physics (GNFM-INdAM), by Istituto Nazionale di Fisica Nucleare (INFN) through the project "QUANTUM", and by Regione Puglia and QuantERA ERA-NET Cofund in Quantum Technologies (GA No. 731473), project PACE-IN. ## Appendix ### 1 Time evolution of the Rabi and the Jaynes-Cummings models We consider the infinite dimensional Hilbert space \(L^{2}(\mathbb{R})\), the creation operator \(a^{\dagger}=\frac{1}{\sqrt{2}}\left(x-\frac{d}{dx}\right)\) and the annihilation operator \(a=\frac{1}{\sqrt{2}}\left(x+\frac{d}{dx}\right)\) on Schwartz space \(\mathscr{S}(\mathbb{R})\). A fundamental feature of these two operators is that their commutator is the identity operator, i.e. \[[a,a^{\dagger}]=I. \tag{6}\] Now we consider the following two Hamiltonians \[H=\frac{\Omega}{2}\sigma_{z}\otimes I+I\otimes\omega a^{\dagger}a+\lambda\sigma_{x}\otimes(a+a^{\dagger}) \tag{7}\] and \[H_{\text{RWA}}=\frac{\Omega}{2}\sigma_{z}\otimes I+I\otimes\omega a^{\dagger}a+\lambda(\sigma_{+}\otimes a+\sigma_{-}\otimes a^{\dagger}) \tag{8}\] on \(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), and we also denote their closures by \(H\) and \(H_{\text{RWA}}\), with suitable dense domains. Here \(\lambda,\omega,\Omega\in\mathbb{R}\), \[\sigma_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\sigma_{y}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\quad\sigma_{z}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right), \tag{9}\] are the Pauli matrices and \[\sigma_{+}=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right),\quad\sigma_{-}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right).
\tag{10}\] In what follows, we usually work on \(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) without saying so explicitly every time, as this subspace of \(\mathbb{C}^{2}\otimes L^{2}(\mathbb{R})\) forms a common invariant core of the operators we are studying in this section; this can be shown following the standard methods in [22, Sec.X.6]. To simplify the notation, we usually use the same notation for an operator on this core and its closure on the whole domain. ### Interaction picture: time-dependent Hamiltonians \(H_{1}(t)\) and \(H_{2}(t)\) We consider the following Hamiltonian \[H_{0}=\frac{\omega}{2}\sigma_{z}\otimes I+I\otimes\omega a^{\dagger}a, \tag{11}\] and define for all \(t\in\mathbb{R}\) \[U_{1}(t)=e^{itH_{0}}e^{-itH},\quad U_{2}(t)=e^{itH_{0}}e^{-itH_{\text{RWA}}}. \tag{12}\] We have that for all \(j\in\{1,2\}\): \(U_{j}(0)=I\) and \[i\frac{dU_{j}(t)}{dt}=H_{j}(t)U_{j}(t), \tag{13}\] where for all \(t\in\mathbb{R}\): \[H_{1}(t)=e^{itH_{0}}(H-H_{0})e^{-itH_{0}},\quad H_{2}(t)=e^{itH_{0}}(H_{\rm RWA}-H_{0})e^{-itH_{0}}. \tag{14}\] We define the detuning between field and atom as \(\Delta=\Omega-\omega\) and compute \(H_{1}(t)\): \[H_{1}(t) = e^{itH_{0}}(H-H_{0})e^{-itH_{0}} \tag{15}\] \[= e^{itH_{0}}\left(\frac{\Delta}{2}\sigma_{z}\otimes I+\lambda\sigma_{x}\otimes(a+a^{\dagger})\right)e^{-itH_{0}}\] \[= \frac{\Delta}{2}\sigma_{z}\otimes I+\lambda(e^{it\omega\sigma_{z}/2}\sigma_{x}e^{-it\omega\sigma_{z}/2})\otimes(e^{it\omega a^{\dagger}a}(a+a^{\dagger})e^{-it\omega a^{\dagger}a})\] We get \[e^{it\omega\sigma_{z}/2}\sigma_{x}e^{-it\omega\sigma_{z}/2}=\cos\left(t\omega\right)\sigma_{x}-\sin\left(t\omega\right)\sigma_{y}, \tag{16}\] and \[e^{it\omega a^{\dagger}a}ae^{-it\omega a^{\dagger}a}=e^{-it\omega}a,\qquad e^{it\omega a^{\dagger}a}a^{\dagger}e^{-it\omega a^{\dagger}a}=e^{it\omega}a^{\dagger}. \tag{17}\] Therefore, \[H_{1}(t) = \frac{\Delta}{2}\sigma_{z}\otimes I+\lambda\left(\cos\left(t\omega\right)\sigma_{x}-\sin\left(t\omega\right)\sigma_{y}\right)\otimes\left(e^{-it\omega}a+e^{it\omega}a^{\dagger}\right)\] \[= \frac{\Delta}{2}\sigma_{z}\otimes I+\lambda\left(\sigma_{+}\otimes a+\sigma_{-}\otimes a^{\dagger}+e^{2it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-2it\omega}\sigma_{-}\otimes a\right).\] In a similar way we compute \(H_{2}(t)\): \[H_{2}(t) = e^{itH_{0}}(H_{\rm RWA}-H_{0})e^{-itH_{0}} \tag{19}\] \[= e^{itH_{0}}\left(\frac{\Delta}{2}\sigma_{z}\otimes I+\lambda(\sigma_{+}\otimes a+\sigma_{-}\otimes a^{\dagger})\right)e^{-itH_{0}}\] \[= \frac{\Delta}{2}\sigma_{z}\otimes I+\lambda(\sigma_{+}\otimes a+\sigma_{-}\otimes a^{\dagger}).\] We notice that \(H_{2}\) is time-independent, and again we use the same symbol for its closure. ### Computation of \(U_{2}(t)\) Notice that, since \(H_{2}\) is time-independent, \(\{U_{2}(t)\}_{t\in\mathbb{R}}\) is a unitary group: for all \(t\in\mathbb{R}\), \[U_{2}(t)=e^{itH_{0}}e^{-itH_{\rm RWA}}=e^{-it\left(\frac{\Delta}{2}\sigma_{z}\otimes I+\lambda(\sigma_{+}\otimes a+\sigma_{-}\otimes a^{\dagger})\right)}=e^{-itH_{2}}. \tag{20}\] We observe that \(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\subset D(H_{2})\) is a set of analytic vectors for \(H_{2}\). Let \[e_{1}=\left(\begin{array}{c}1\\ 0\end{array}\right),\qquad e_{2}=\left(\begin{array}{c}0\\ 1\end{array}\right) \tag{21}\] be the canonical orthonormal basis of \(\mathbb{C}^{2}\).
Using the following identities \[\sigma_{+}e_{1}=\left(\begin{array}{c}0\\ 0\end{array}\right),\quad\sigma_{+}e_{2}=e_{1},\quad\sigma_{-}e_{1}=e_{2}, \quad\sigma_{-}e_{2}=\left(\begin{array}{c}0\\ 0\end{array}\right), \tag{22}\] we compute the even and the odd powers of \(H_{2}\) on vectors \(e_{1}\otimes\psi\) and \(e_{2}\otimes\psi\), with \(\psi\in\mathscr{S}(\mathbb{R})\). For all \(j\in\mathbb{N}\), \[H_{2}^{2j}(e_{1}\otimes\psi)=\sum_{\ell=0}^{j}\left(\begin{array}{c}j\\ \ell\end{array}\right)\lambda^{2\ell}\left(\frac{\Delta}{2}\right)^{2(j-\ell) }e_{1}\otimes(aa^{\dagger})^{\ell}\psi, \tag{23}\] \[H_{2}^{2j}(e_{2}\otimes\psi)=\sum_{\ell=0}^{j}\left(\begin{array}{c}j\\ \ell\end{array}\right)\lambda^{2\ell}\left(\frac{\Delta}{2}\right)^{2(j-\ell) }e_{2}\otimes(a^{\dagger}a)^{\ell}\psi, \tag{24}\] \[H_{2}^{2j+1}(e_{1}\otimes\psi)=\sum_{\ell=0}^{j}\left(\begin{array}{c}j\\ \ell\end{array}\right)\left[\lambda^{2\ell}\left(\frac{\Delta}{2}\right)^{2(j- \ell)+1}e_{1}\otimes(aa^{\dagger})^{\ell}\psi+\lambda^{2\ell+1}\left(\frac{ \Delta}{2}\right)^{2(j-\ell)}e_{2}\otimes a^{\dagger}(aa^{\dagger})^{\ell} \psi\right], \tag{25}\] \[H_{2}^{2j+1}(e_{2}\otimes\psi)=\sum_{\ell=0}^{j}\left(\begin{array}{c}j\\ \ell\end{array}\right)\left[-\lambda^{2\ell}\left(\frac{\Delta}{2}\right)^{2(j -\ell)+1}e_{2}\otimes(a^{\dagger}a)^{\ell}\psi+\lambda^{2\ell+1}\left(\frac{ \Delta}{2}\right)^{2(j-\ell)}e_{1}\otimes a(a^{\dagger}a)^{\ell}\psi\right]. \tag{26}\] **Lemma 1.1**.: \(U_{2}(t)(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R}))\subset\mathbb{C}^{2} \otimes\mathscr{S}(\mathbb{R})\) _for all \(t\in\mathbb{R}\)._ Proof.: By the \(N\)-representation theorem for \(\mathscr{S}(\mathbb{R})\)[21, Thm.V.13], we have that \(\psi\in\mathscr{S}(\mathbb{R})\) if and only if for all \(m\in\mathbb{N}\): \[\sup_{n\in\mathbb{N}}|\langle\varphi_{n}|\psi\rangle|n^{m}<+\infty, \tag{27}\] where \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) is the orthonormal eigenbasis of the number operator \(a^{\dagger}a\), i.e. \(a^{\dagger}a\varphi_{n}=n\varphi_{n}\) for all \(n\in\mathbb{N}\). We have that for all \(n\in\mathbb{N}\) and \(t\in\mathbb{R}\): \[U_{2}(t)(e_{1}\otimes\varphi_{n})=e_{1}\otimes a_{n}(t)\varphi_{n}+e_{2} \otimes b_{n}(t)\varphi_{n+1} \tag{28}\] and \[U_{2}(t)(e_{2}\otimes\varphi_{n})=e_{1}\otimes c_{n}(t)\varphi_{n-1}+e_{2} \otimes d_{n}(t)\varphi_{n} \tag{29}\] where \[a_{n}(t)=\cos\left(t\sqrt{\lambda^{2}(n+1)+\left(\frac{\Delta}{2}\right)^{2}} \right)-\frac{i\Delta}{2}\frac{\sin\left(t\sqrt{\lambda^{2}(n+1)+\left(\frac{ \Delta}{2}\right)^{2}}\right)}{\sqrt{\lambda^{2}(n+1)+\left(\frac{\Delta}{2} \right)^{2}}}, \tag{30}\] \[b_{n}(t)=-i\lambda\sqrt{n+1}\frac{\sin\left(t\sqrt{\lambda^{2}(n+2)+\left(\frac{ \Delta}{2}\right)^{2}}\right)}{\sqrt{\lambda^{2}(n+2)+\left(\frac{\Delta}{2} \right)^{2}}}, \tag{31}\] \[c_{n}(t)=-i\lambda\sqrt{n}\frac{\sin\left(t\sqrt{\lambda^{2}n+\left(\frac{ \Delta}{2}\right)^{2}}\right)}{\sqrt{\lambda^{2}n+\left(\frac{\Delta}{2} \right)^{2}}} \tag{32}\] and \[d_{n}(t)=\cos\left(t\sqrt{\lambda^{2}n+\left(\frac{\Delta}{2}\right)^{2}} \right)+\frac{i\Delta}{2}\frac{\sin\left(t\sqrt{\lambda^{2}n+\left(\frac{ \Delta}{2}\right)^{2}}\right)}{\sqrt{\lambda^{2}n+\left(\frac{\Delta}{2} \right)^{2}}}. 
\tag{33}\] Let \(\psi\in\mathscr{S}(\mathbb{R})\) and \(t\in\mathbb{R}\), then \[U_{2}(t)(e_{1}\otimes\psi)=e_{1}\otimes\psi_{1}+e_{2}\otimes\psi_{2},\quad U_{2}(t)(e_{2}\otimes\psi)=e_{1}\otimes\psi_{3}+e_{2}\otimes\psi_{4}, \tag{34}\] where \[\psi_{1}=\sum_{n=0}^{+\infty}\langle\varphi_{n}|\psi\rangle a_{n}(t)\varphi_{n},\quad\psi_{2}=\sum_{n=0}^{+\infty}\langle\varphi_{n}|\psi\rangle b_{n}(t)\varphi_{n+1}, \tag{35}\] and \[\psi_{3}=\sum_{n=1}^{+\infty}\langle\varphi_{n}|\psi\rangle c_{n}(t)\varphi_{n-1},\quad\psi_{4}=\sum_{n=0}^{+\infty}\langle\varphi_{n}|\psi\rangle d_{n}(t)\varphi_{n}. \tag{36}\] Notice that for all \(m\in\mathbb{N}\) and for all \(j\in\{1,2,3,4\}\): \[\sup_{n\in\mathbb{N}}|\langle\varphi_{n}|\psi_{j}\rangle|n^{m}<+\infty, \tag{37}\] hence \(\psi_{j}\in\mathscr{S}(\mathbb{R})\) and therefore \(U_{2}(t)(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R}))\subset\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\). **Lemma 1.2**.: _For all \(t\in\mathbb{R}\) and \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), we have_ \[\int_{0}^{t}(H_{2}-H_{1}(s))\Psi\,ds=-\frac{\lambda\sin\left(t\omega\right)}{\omega}\left(e^{it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-it\omega}\sigma_{-}\otimes a\right)\Psi, \tag{38}\] _and we denote the closure of this operator by \(S_{21}(t)\). Moreover:_ * \(S_{21}(t)(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R}))\subset\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) _for all_ \(t\in\mathbb{R}\)_;_ * _for all_ \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) _and for all_ \(t\in\mathbb{R}\)_:_ \(\frac{d}{dt}S_{21}(t)\Psi=(H_{2}(t)-H_{1}(t))\Psi\) Proof.: On \(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), we have \[S_{21}(t) := \int_{0}^{t}(H_{2}-H_{1}(s))\,ds \tag{39}\] \[= -\lambda\int_{0}^{t}\left(e^{2is\omega}\sigma_{+}\otimes a^{\dagger}+e^{-2is\omega}\sigma_{-}\otimes a\right)\,ds\] \[= -\frac{\lambda}{2i\omega}\left[e^{2is\omega}\right]_{s=0}^{s=t}\sigma_{+}\otimes a^{\dagger}+\frac{\lambda}{2i\omega}\left[e^{-2is\omega}\right]_{s=0}^{s=t}\sigma_{-}\otimes a\] \[= -\frac{\lambda}{2i\omega}\left(e^{2it\omega}-1\right)\sigma_{+}\otimes a^{\dagger}+\frac{\lambda}{2i\omega}\left(e^{-2it\omega}-1\right)\sigma_{-}\otimes a\] \[= -\frac{\lambda\sin\left(t\omega\right)}{\omega}\left(e^{it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-it\omega}\sigma_{-}\otimes a\right).\] **Lemma 1.3**.: _For all \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\):_ \[i(U_{2}(t)-U_{1}(t))\Psi = S_{21}(t)U_{2}(t)\Psi+\] \[+i\int_{0}^{t}U_{1}(t)U_{1}(s)^{\dagger}(S_{21}(s)H_{2}-H_{1}(s)S_{21}(s))U_{2}(s)\Psi\,ds,\] Proof.: Let \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), by Lemmas 1.1 and 1.2 we have that for all \(s\in\mathbb{R}\): \[U_{2}(s)\Psi,S_{21}(s)H_{2}U_{2}(s)\Psi,H_{1}(s)S_{21}(s)U_{2}(s)\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R}).
\tag{41}\] By equation (13) we have that \[i(U_{2}(t)-U_{1}(t))\Psi = iU_{1}(t)(U_{1}(t)^{\dagger}U_{2}(t)-I)\Psi \tag{42}\] \[= iU_{1}(t)\left[U_{1}(s)^{\dagger}U_{2}(s)\right]_{0}^{t}\Psi\] \[= iU_{1}(t)\int_{0}^{t}\frac{d}{ds}U_{1}(s)^{\dagger}U_{2}(s)\Psi\,ds\] \[= U_{1}(t)\int_{0}^{t}U_{1}(s)^{\dagger}(H_{2}-H_{1}(s))U_{2}(s)\Psi\,ds.\] We observe that for all \(s\in\mathbb{R}\): \[\frac{d}{ds}\left(U_{1}(s)^{\dagger}S_{21}(s)U_{2}(s)\Psi\right) = iU_{1}(s)^{\dagger}(H_{1}(s)S_{21}(s)-S_{21}(s)H_{2})U_{2}(s)\Psi+ \tag{43}\] \[+U_{1}(s)^{\dagger}(H_{2}-H_{1}(s))U_{2}(s)\Psi,\] therefore \[i(U_{2}(t)-U_{1}(t))\Psi = U_{1}(t)\int_{0}^{t}U_{1}(s)^{\dagger}(H_{2}-H_{1}(s))U_{2}(s)\Psi\,ds \tag{44}\] \[= S_{21}(t)U_{2}(t)\Psi\] \[+i\int_{0}^{t}U_{1}(t)U_{1}(s)^{\dagger}(S_{21}(s)H_{2}-H_{1}(s)S_{21}(s))U_{2}(s)\Psi\,ds.\] Notice that a similar Lemma might hold with \(U_{1}(t)\) and \(U_{2}(t)\) interchanged. This would however be much harder to prove, as our current proof relies on the simple structure of the Jaynes-Cummings interaction through Lemma 1.1. ## 2 Computation of bounds and the rotating wave approximation Without loss of generality, we can assume \(\lambda,\Omega,\omega>0\). ### Upper bound for generic vectors **Theorem 2.1**.: _For all \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) and \(t\in\mathbb{R}\):_ \[\|(U_{2}(t)-U_{1}(t))\Psi\|\leq\frac{\lambda}{\omega}\bigg{[}\|(N+2)^{1/2}\Psi\|+|t|\Big{(}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|\big{(}(N+2)(N+3)\big{)}^{1/2}\Psi\|\Big{)}\bigg{]}, \tag{45}\] _where \(N=I\otimes a^{\dagger}a\). Moreover for all \(\Psi\in\mathbb{C}^{2}\otimes L^{2}(\mathbb{R})\):_ \[\lim_{\omega\to+\infty}\big{\|}(e^{-itH_{RWA}}-e^{-itH})\Psi\big{\|}=0, \tag{46}\] _uniformly for \(t\) in compact sets._ This theorem proves (4). In particular, (46) shows that mathematically the rotating wave approximation is correct in the limit \(\omega\to\infty\). How good the approximation is in practice, for finite parameters, can be quantified with (45), which provides a concrete upper bound on the norm difference between the time evolution of an initial state under the actual dynamics and under the rotating wave approximation. Proof.: First we prove (45). Let \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) and \(t\in\mathbb{R}\). First of all we observe that \[\|(I\otimes a)\Psi\|^{2}=\langle(I\otimes a)\Psi|(I\otimes a)\Psi\rangle=\langle\Psi|(I\otimes a^{\dagger}a)\Psi\rangle=\langle\Psi|N\Psi\rangle=\|N^{1/2}\Psi\|^{2}, \tag{47}\] \[\|(I\otimes a^{\dagger})\Psi\|^{2}=\langle\Psi|(N+1)\Psi\rangle=\|(N+1)^{1/2}\Psi\|^{2}. \tag{48}\] Moreover, the conservation law \[[H_{2},\mathcal{N}]=0,\qquad\mathcal{N}=P_{+}+N,\qquad P_{+}=\sigma_{+}\sigma_{-}\otimes I, \tag{49}\] implies that \[U_{2}(t)^{\dagger}NU_{2}(t) = U_{2}(t)^{\dagger}\mathcal{N}U_{2}(t)-U_{2}(t)^{\dagger}P_{+}U_{2}(t)=\mathcal{N}-U_{2}(t)^{\dagger}P_{+}U_{2}(t) \tag{50}\] \[= N+\big{(}P_{+}-U_{2}(t)^{\dagger}P_{+}U_{2}(t)\big{)}\] \[\leq N+1\] on \(\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\). Start from the equality \[\big{(}U_{2}(t)-U_{1}(t)\big{)}\Psi=-iS_{21}(t)U_{2}(t)\Psi+\int_{0}^{t}U_{1}(t)U_{1}(s)^{\dagger}\big{(}S_{21}(s)H_{2}-H_{1}(s)S_{21}(s)\big{)}U_{2}(s)\Psi\,ds\,. \tag{51}\] Let \(u(t)=U_{2}(t)\Psi\).
One gets \[\|S_{21}(t)U_{2}(t)\Psi\|^{2} = \langle S_{21}(t)u(t)|S_{21}(t)u(t)\rangle \tag{52}\] \[= \bigg{(}\frac{\lambda\sin{(t\omega)}}{\omega}\bigg{)}^{2}\,\langle u(t)|\big{(}\sigma_{+}\sigma_{-}\otimes a^{\dagger}a+\sigma_{-}\sigma_{+}\otimes aa^{\dagger}\big{)}u(t)\rangle\] \[= \bigg{(}\frac{\lambda\sin{(t\omega)}}{\omega}\bigg{)}^{2}\,\|\big{(}\sigma_{+}\sigma_{-}\otimes n^{1/2}+\sigma_{-}\sigma_{+}\otimes(n+1)^{1/2}\big{)}u(t)\|^{2}\] \[\leq \frac{\lambda^{2}}{\omega^{2}}\|(N+1)^{1/2}u(t)\|^{2}\] \[\leq \frac{\lambda^{2}}{\omega^{2}}\|(N+2)^{1/2}\Psi\|^{2}.\] Moreover, let \[V(t)=H_{2}-H_{1}(t)=-\lambda\left(e^{2it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-2it\omega}\sigma_{-}\otimes a\right). \tag{53}\] Then \[X(t) = S_{21}(t)H_{2}-H_{1}(t)S_{21}(t) \tag{54}\] \[= [S_{21}(t),H_{2}]+V(t)S_{21}(t)\] \[= -\frac{\lambda\sin{(t\omega)}}{\omega}\bigg{[}\Delta\left(-e^{it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-it\omega}\sigma_{-}\otimes a\right)\] \[\qquad\qquad+\lambda\sigma_{+}\sigma_{-}\otimes\left(e^{it\omega}a^{\dagger 2}-e^{-it\omega}a^{2}-e^{it\omega}a^{\dagger}a\right)\] \[\qquad\qquad\qquad+\lambda\sigma_{-}\sigma_{+}\otimes\left(-e^{it\omega}a^{\dagger 2}+e^{-it\omega}a^{2}-e^{-it\omega}aa^{\dagger}\right)\bigg{]}.\] We want to estimate \(\|X(t)u(t)\|\). First we observe that \[\|\left(-e^{it\omega}\sigma_{+}\otimes a^{\dagger}+e^{-it\omega}\sigma_{-}\otimes a\right)u(t)\|\leq\|(N+1)^{1/2}u(t)\|. \tag{55}\] Moreover for all \(\psi\in\mathscr{S}(\mathbb{R})\): \[\|\left(e^{it\omega}a^{\dagger 2}-e^{-it\omega}a^{2}-e^{it\omega}a^{\dagger}a\right)\psi\| \leq \|a^{\dagger 2}\psi\|+\|a^{2}\psi\|+\|a^{\dagger}a\psi\| \tag{56}\] \[\leq 3\|((a^{\dagger}a+1)(a^{\dagger}a+2))^{1/2}\psi\|\] and \[\|\left(-e^{it\omega}a^{\dagger 2}+e^{-it\omega}a^{2}-e^{-it\omega}aa^{\dagger}\right)\psi\| \leq \|a^{\dagger 2}\psi\|+\|a^{2}\psi\|+\|aa^{\dagger}\psi\| \tag{57}\] \[\leq 3\|((a^{\dagger}a+1)(a^{\dagger}a+2))^{1/2}\psi\|.\] Therefore, \[\|X(t)u(t)\| \leq \frac{\lambda}{\omega}\Big{[}|\Delta|\|(N+1)^{1/2}u(t)\|+3\lambda\|((N+1)(N+2))^{1/2}u(t)\|\Big{]} \tag{58}\] \[\leq \frac{\lambda}{\omega}\Big{[}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|((N+2)(N+3))^{1/2}\Psi\|\Big{]}.\] Taking things together we get \[\|(U_{2}(t)-U_{1}(t))\Psi\|\leq\frac{\lambda}{\omega}\bigg{[}\|(N+2)^{1/2}\Psi\|+|t|\Big{(}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|\big{(}(N+2)(N+3)\big{)}^{1/2}\Psi\|\Big{)}\bigg{]}. \tag{59}\] Therefore \[\lim_{\omega\rightarrow+\infty}\big{\|}(e^{-itH_{\rm RWA}}-e^{-itH})\Psi\big{\|}=\lim_{\omega\rightarrow+\infty}\|(U_{2}(t)-U_{1}(t))\Psi\|=0, \tag{60}\] for all \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), and since the latter is dense in \(\mathbb{C}^{2}\otimes L^{2}(\mathbb{R})\), (46) follows. ### Lower bound **Theorem 2.2**.: _For all \(\Psi\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\) and for all \(0\leq t\leq\pi/\omega\):_ \[\|(U_{2}(t)-U_{1}(t))\Psi\| \geq \frac{\lambda}{\omega}\sin(t\omega)\|(N-1)_{+}^{1/2}\Psi\|\] \[-\frac{\lambda}{\omega^{2}}(1-\cos(t\omega))\Big{(}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|\big{(}(N+2)(N+3)\big{)}^{1/2}\Psi\|\Big{)},\] _where \((N-1)_{+}\) denotes the positive part of the operator \(N-1\)._ This theorem reveals the limitation of the rotating wave approximation in practice: the error grows with the photon number, so for larger systems the rotating wave approximation may no longer be justified.
Proof.: Start from the equality \[\big{(}U_{2}(t)-U_{1}(t)\big{)}\Psi=-iS_{21}(t)U_{2}(t)\Psi+\int_{0}^{t}U_{1}(t)U_{1}(s)^{\dagger}\big{(}S_{21}(s)H_{2}-H_{1}(s)S_{21}(s)\big{)}U_{2}(s)\Psi\,ds\,. \tag{61}\] We have that \[\|(U_{2}(t)-U_{1}(t))\Psi\| \geq \|S_{21}(t)U_{2}(t)\Psi\|-\int_{0}^{t}\|(S_{21}(s)H_{2}-H_{1}(s)S_{21}(s))U_{2}(s)\Psi\|\,ds. \tag{62}\] We define \(u(t)=U_{2}(t)\Psi\) and we have \[\|S_{21}(t)U_{2}(t)\Psi\|^{2} = \bigg{(}\frac{\lambda\sin(t\omega)}{\omega}\bigg{)}^{2}\langle u(t)|\big{(}\sigma_{+}\sigma_{-}\otimes a^{\dagger}a+\sigma_{-}\sigma_{+}\otimes aa^{\dagger}\big{)}u(t)\rangle \tag{63}\] \[\geq \bigg{(}\frac{\lambda\sin(t\omega)}{\omega}\bigg{)}^{2}\|N^{1/2}u(t)\|^{2},\] moreover, by (50), we have \[U_{2}(t)^{\dagger}NU_{2}(t)\geq N-1. \tag{64}\] Hence, one can show that \[\|S_{21}(t)U_{2}(t)\Psi\|^{2}\geq\left(\frac{\lambda\sin\left(t\omega\right)}{\omega}\right)^{2}\|(N-1)_{+}^{1/2}\Psi\|^{2}. \tag{65}\] Moreover, for \(0\leq t\leq\pi/\omega\) we have \[\int_{0}^{t}\|(S_{21}(s)H_{2}-H_{1}(s)S_{21}(s))U_{2}(s)\Psi\|\,ds\] \[\leq\frac{\lambda}{\omega}\Big{[}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|((N+2)(N+3))^{1/2}\Psi\|\Big{]}\int_{0}^{t}\sin(\omega s)\,ds\] \[=\frac{\lambda(1-\cos(\omega t))}{\omega^{2}}\Big{[}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|((N+2)(N+3))^{1/2}\Psi\|\Big{]} \tag{66}\] and hence, for \(0\leq t\leq\pi/\omega\), we get \[\|(U_{2}(t)-U_{1}(t))\Psi\| \geq \|S_{21}(t)U_{2}(t)\Psi\|-\int_{0}^{t}\|(S_{21}(s)H_{2}-H_{1}(s)S_{21}(s))U_{2}(s)\Psi\|\,ds \tag{67}\] \[\geq \frac{\lambda}{\omega}\sin(t\omega)\|(N-1)_{+}^{1/2}\Psi\|\] \[-\frac{\lambda}{\omega^{2}}(1-\cos(t\omega))\Big{(}|\Delta|\|(N+2)^{1/2}\Psi\|+3\lambda\|\big{(}(N+2)(N+3)\big{)}^{1/2}\Psi\|\Big{)}.\] ### Applying the bounds to Fock states To understand the scaling better, let us apply the above bounds to (normalised) Fock states \(\Phi_{j,n}=e_{j}\otimes\varphi_{n}\in\mathbb{C}^{2}\otimes\mathscr{S}(\mathbb{R})\), with \(j\in\{1,2\}\) and \(n\in\mathbb{N}\), \(n>0\). For simplicity, we consider the case \(\Delta=0\), but the argument is easily generalised. From the upper bound (45) we obtain \[\|(U_{2}(t)-U_{1}(t))\Phi_{j,n}\|\leq\frac{\lambda(n+2)^{1/2}}{\omega}\bigg{[}1+3|t|\lambda(n+3)^{1/2}\bigg{]}. \tag{68}\] For the lower bound we have, for \(0\leq t\leq\pi/\omega\), \[\|(U_{2}(t)-U_{1}(t))\Phi_{j,n}\|\geq\frac{\lambda}{\omega}\sin(t\omega)(n-1)^{1/2}-\frac{3\lambda^{2}}{\omega^{2}}(1-\cos(t\omega))\big{(}(n+2)(n+3)\big{)}^{1/2}. \tag{69}\] As a function of \(t\), this has a maximum at \[t_{*}=\frac{\cos^{-1}\left(\frac{3\lambda}{\sqrt{9\lambda^{2}+\frac{(n-1)\omega^{2}}{(n+2)(n+3)}}}\right)}{\omega}. \tag{70}\] At this time, the right hand side of Eq. (69) evaluates as \[\frac{g\left(9g^{2}(n+2)(n+3)-3g\sqrt{(n+2)(n+3)\left(9g^{2}(n+2)(n+3)+n-1\right)}+n-1\right)}{\sqrt{9g^{2}(n+2)(n+3)+n-1}}. \tag{71}\] Here, we have set \(g\equiv\frac{\lambda}{\omega}\). We can expand this in orders of \(n^{-1}\) to obtain a slightly simpler exact lower bound (valid for \(n>0\)) \[\sup_{t\in[0,\frac{\pi}{\omega}]}\|(U_{2}(t)-U_{1}(t))\Phi_{j,n}\|\geq\frac{1}{6}-\frac{1}{216g^{2}n}-\frac{7}{12n}. \tag{72}\] Focussing on the same time interval also for the upper bound and linearising it, we can conclude that \[5g\sqrt{n+3}\geq\sup_{t\in[0,\frac{\pi}{\omega}]}\|(U_{2}(t)-U_{1}(t))\Phi_{j,n}\|\geq\frac{1}{6}-\frac{1}{216g^{2}n}-\frac{7}{12n}, \tag{73}\] proving (5).
This bound is not necessarily sharp, but it shows nicely that the short-time error becomes small for small \(g\) and large for high photon number \(n\); hence it provides a quantitative condition on \(g\) for reducing the error below a prescribed threshold at a given photon number. For high photon numbers \(n\to\infty\), there is a time such that the difference becomes at least \(\frac{1}{6}\), i.e., \[\|e^{-itH}-e^{-itH_{\rm RWA}}\|\geq\frac{1}{6} \tag{74}\] by taking the supremum over all states in (72), which means that the rotating wave approximation breaks down for arbitrarily high photon numbers; this proves (3).
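As a numerical sanity check of the Fock-state bounds (68)-(73), one can compare the two propagators on a truncated oscillator. The sketch below assumes the standard quantum Rabi form of \(H\), with the counter-rotating terms \(\lambda(\sigma_{+}\otimes a^{\dagger}+\sigma_{-}\otimes a)\) dropped in \(H_{\rm RWA}\); the truncation dimension, parameter values, and time grid are illustrative choices, not part of the proof.

```python
import numpy as np
from scipy.linalg import expm

M = 80                                  # oscillator truncation (assumption: M >> n)
lam, omega, Delta = 1.0, 10.0, 0.0      # coupling, frequency, detuning (illustrative)
n, j = 25, 0                            # Fock state Phi_{j,n}

a = np.diag(np.sqrt(np.arange(1, M)), 1)          # annihilation operator
ad = a.conj().T
sp = np.array([[0.0, 1.0], [0.0, 0.0]])           # sigma_+
sm = sp.T
sz = np.diag([1.0, -1.0])
I2, IM = np.eye(2), np.eye(M)

H0 = omega * np.kron(I2, ad @ a) + 0.5 * (omega + Delta) * np.kron(sz, IM)
H_rwa = H0 + lam * (np.kron(sp, a) + np.kron(sm, ad))       # rotating-wave part
H = H_rwa + lam * (np.kron(sp, ad) + np.kron(sm, a))        # counter-rotating terms restored

psi = np.zeros(2 * M, dtype=complex)
psi[j * M + n] = 1.0                               # e_j (x) phi_n

g = lam / omega
ts = np.linspace(0.0, np.pi / omega, 50)
errs = [np.linalg.norm((expm(-1j * t * H) - expm(-1j * t * H_rwa)) @ psi) for t in ts]

print(f"sup_t ||(U_2 - U_1) Phi|| ~= {max(errs):.4f}")
print(f"lower bound (72): {1/6 - 1/(216*g**2*n) - 7/(12*n):.4f}")
print(f"upper bound (73): {5*g*np.sqrt(n + 3):.4f}")
```

Since the interaction-picture propagators \(U_{i}(t)\) differ from \(e^{-itH_{i}}\) only by a common unitary rotation, the computed norm equals \(\|(U_{2}(t)-U_{1}(t))\Phi_{j,n}\|\) up to truncation error.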
2303.15178
Robust Path Following on Rivers Using Bootstrapped Reinforcement Learning
This paper develops a Deep Reinforcement Learning (DRL)-agent for navigation and control of autonomous surface vessels (ASV) on inland waterways. Spatial restrictions due to waterway geometry and the resulting challenges, such as high flow velocities or shallow banks, require controlled and precise movement of the ASV. A state-of-the-art bootstrapped Q-learning algorithm in combination with a versatile training environment generator leads to a robust and accurate rudder controller. To validate our results, we compare the path-following capabilities of the proposed approach to a vessel-specific PID controller on real-world river data from the lower- and middle Rhine, indicating that the DRL algorithm could effectively prove generalizability even in never-seen scenarios while simultaneously attaining high navigational accuracy.
Niklas Paulig, Ostap Okhrin
2023-03-24T07:21:27Z
http://arxiv.org/abs/2303.15178v1
# Robust Path Following on Rivers Using Bootstrapped Reinforcement Learning ###### Abstract This paper develops a Deep Reinforcement Learning (DRL)-agent for navigation and control of autonomous surface vessels (ASV) on inland waterways. Spatial restrictions due to waterway geometry and the resulting challenges, such as high flow velocities or shallow banks, require controlled and precise movement of the ASV. A state-of-the-art bootstrapped Q-learning algorithm in combination with a versatile training environment generator leads to a robust and accurate rudder controller. To validate our results, we compare the path-following capabilities of the proposed approach to a vessel-specific PID controller on real-world river data from the lower- and middle Rhine, indicating that the DRL algorithm could effectively prove generalizability even in never-seen scenarios while simultaneously attaining high navigational accuracy. _Keywords:_ path following, restricted waterways, autonomous surface vehicle ## 1 Introduction A recent directive (2023) of the European Commission emphasizes the importance of inland waterway traffic and its development due to decreased costs and increased safety in comparison to other modes of transport. To build on this directive, the present study is one of the first approaches to solving the path-following problem for underactuated vessels on restricted waterways using deep reinforcement learning (DRL) and under consideration of environmental influences. Breivik and Fossen (2004) stated that, compared to other automated systems, ships on inland waterways face additional challenges due to their environment (e.g., strong directional currents, shallow banks) and underlying physics (e.g., underactuation, highly non-linear maneuvering models), leading to a highly dynamic and stochastic operational environment. To overcome these hurdles, we incorporate water depth, current direction, and speed into the agent's perception, allowing it to navigate tight river turns safely. In this paper, an ensemble-based DRL algorithm is used to develop a high-precision and generalizable path-following controller for inland transportation vessels. The contribution to the field is as follows: * We develop a tunable segmental generator to create realistic and diverse training environments specifically for inland waterways. The source code is publicly available as a GitHub repository via github.com/nikpau/sr-gen. * We use a state-of-the-art bootstrapped DQN-based algorithm to generate robust and generalizable policies for rudder control under varying external environmental disturbances. * To demonstrate the generalizability and robustness of our approach, we validate the produced policies on real-world data from the middle and lower Rhine. The rest of the paper is organized as follows. Section 2 recapitulates current literature on the topic of path-following and formalizes the problem. The kinodynamic ship-maneuvering model is detailed in Section 3, while Section 4 introduces the methodologies and how they are incorporated into the path-following controller design. The fifth section sets up a benchmark controller for validation, and Section 6 applies the path-following results to various maritime scenarios and validates the controller on separate segments of the Rhine river. Section 7 performs a robustness analysis, and the last section concludes. ## 2 Path-following for ASV ### State of the art The objective of path-following for ships demands a controller to generate steering commands that enable an underactuated autonomous surface vessel (ASV) to follow a pre-defined path with minimal angular and spatial deviation. The problem formulation for this study will only include rudder angles as control outputs while keeping the engine revolutions constant. According to Fossen (2011), an onboard path-following system requires three sub-systems to be implemented: _guidance_, _navigation_, and _control_. To autonomously control a vessel, we require knowledge of its current position (navigation), its planned trajectory (guidance), and a set of control actions to move towards its current goal (control). 
Line-of-sight (LOS) guidance is one common approach to implementing directional awareness of the agent, achieving convergence to the desired path. It has been successfully applied in various problem settings, as in Fossen et al. (2003) and Fossen and Lekkas (2017) with traditional control approaches and Oh and Sun (2010) for a model-predictive-control application. Vector field guidance (VFG) Nelson et al. (2007) is a different approach that uses a global vector field encompassing the path to guide the vessel towards it, independent of the magnitude of deviation. Woo et al. (2019) integrated VFG into a combined path-following and collision avoidance method. After setting up a suitable guidance algorithm, the next step demands a control system. There are two main methodological approaches to solving the path-following problem: analytic control and reinforcement learning. As part of the analytic control family, proportional-integral-derivative (PID) controllers are well understood, require few computational resources, and have successfully been used to develop path-followers in calm and disturbed waters. While Moreira et al. (2007) achieved path-following using a LOS guidance system and PID controller for steering control, Perera et al. (2014) used fuzzy logic to derive, and a PID controller to execute, sequential actions for path-following as well as collision avoidance of a small model vessel. Paramesh and Rajendran (2021) used PID control to navigate a tanker along a given path under the influence of regular waves. PID performance, however, is vessel-dependent, requires expert tuning, and is often sensitive to external disturbances. More recent advances in control theory allow for different approaches such as non-linear model-predictive-control Xia et al. (2013); Sandeepkumar et al. (2022), backstepping control Zhang et al. (2017), or sliding mode control Liu et al. (2018). Reinforcement Learning (RL) is based on agent-environment interaction, aimed at learning a correct set of actions given some observed state. The actions taken by the agent are evaluated based on a hand-crafted reward function, whose goal is to reinforce actions that bring the agent closer to its defined goal, ultimately finding a policy that solves the problem optimally. Recently, academic interest in RL-based motion control has surged due to its ability to tackle problems with high uncertainty and non-linear system dynamics. Various researchers, such as Shen and Guo (2016) and Martinsen and Lekkas (2018), use a family of continuous-action algorithms in which the agent is free to choose any action in the admissible range at every time step. While this allows for a highly reactive policy, it is possible to choose actions leading to unrealistic behavior, such as maximally opposing rudder angles on two successive time steps. Discrete action solutions as discussed by Zhao et al. (2019); Amendola et al. (2019, 2020); Martinsen et al. (2020) often share the same drawback, leading some researchers to block the agent from choosing the next action until the last one has been fully executed subject to the physical constraints of the vessel. While this approach is viable in restricting physically impossible movements, it impairs the agent's reactivity during the time of blockage. To mitigate this problem in this study, we opted only to allow the agent to choose from actions within the vessel's physical possibilities. 
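As a minimal illustration of this restriction, consider the following sketch of a constrained action mapping (the function name is our own; the step size and rudder limit anticipate the action space defined in Section 4.3):

```python
def next_rudder_angle(delta_prev: float, action: int, step: float = 2.0, limit: float = 20.0) -> float:
    """Map a discrete action in {0, 1, 2} to a rudder increment of {-step, 0, +step}
    degrees and clip the result to the admissible range [-limit, limit]."""
    delta = delta_prev + (action - 1) * step
    return max(-limit, min(limit, delta))
```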
### Path following on rivers Contrary to the open sea, rivers pose additional navigational challenges mainly due to their limited spatial extent, strong directional currents, shallow banks, and small path-curve radii. Figure 1 depicts the heading-control setup used in this study. We assume a given path consisting of a discrete set of \(K\) waypoints \(P_{k}=(x_{k},y_{k})^{\top},k\in\{1,\dots,K\}\), where two consecutive waypoints \(P_{k}\) and \(P_{k+1}\) enclose the path heading \[\chi_{P_{k}}=\text{atan2}(y_{k+1}-y_{k},x_{k+1}-x_{k}). \tag{1}\] Using such discrete waypoints, however, will produce discontinuous jumps in the desired path heading once the vessel crosses the waypoint in front of it. To infer a continuous path heading during training, this study uses a distance-dependent weighted sum of the current and next path segment heading: \[\chi_{C_{k}}=\Big{\{}1-\frac{x_{e}}{d(P_{k},P_{k+1})}\Big{\}}\chi_{P_{k}}+\Big{\{}\frac{x_{e}}{d(P_{k},P_{k+1})}\Big{\}}\chi_{P_{k+1}}, \tag{2}\] with \(d(P_{k},P_{k+1})=\sqrt{(x_{k}-x_{k+1})^{2}+(y_{k}-y_{k+1})^{2}}\) being the Euclidean distance between two succeeding waypoints and the along-track distance \(x_{e}\) given by \[x_{e}=\big{(}x_{A}-x_{k}\big{)}\cos\Big{(}\chi_{P_{k}}\Big{)}+\big{(}y_{A}-y_{k }\big{)}\sin\Big{(}\chi_{P_{k}}\Big{)}\,, \tag{3}\] using the current vessel position \(A=(x_{A},y_{A})^{\top}\). Most vessel-related variables, such as the vessel position, the along-track error, the cross-track error etc., are time-dependent. For simplicity, and to avoid clutter, we will drop the time index \(t\) in this and the next section, i.e., we write \(A=(x_{A},y_{A})^{\top}\) instead of \(A_{t}=(x_{A,t},y_{A,t})^{\top}\). There are two fundamental metrics to control for in a path-following scenario: _Cross-track-error_ (\(y_{e}\)) and _heading-error_ (\(\chi_{e}\)). The cross-track-error normal to the path can then be found via \[y_{e}=(x_{A}-x_{k})\sin(\chi_{C_{k}})+(y_{A}-y_{k})\cos(\chi_{C_{k}}). \tag{4}\] From the cross-track-error, we can construct a vector field after Nelson et al. (2007) to determine the desired course as \[\chi_{d}=\tan^{-1}(cy_{e})+\chi_{P_{k}}, \tag{5}\] where \(c\) is a tunable hyperparameter controlling the speed of convergence of the vector field. Using the vessel's current heading \(\psi\) and drift angle \(\beta\) (see Figure 2), the course error calculates to \[\chi_{e}=\chi_{d}-\psi-\beta. \tag{6}\] Figure 1: Heading control setup ## 3 ASV kinodynamics ### Equations of motion The present study uses the 3-degree-of-freedom MMG model of ship maneuvering Yasukawa and Yoshimura (2015) to describe the autonomous vessel's movement in the horizontal plane. The ASV is modeled as a rigid body with a single propeller. This paper uses the coordinate system shown in Figure 2. The \(o_{0}-x_{0}y_{0}z_{0}\) coordinate system corresponds to the earth-fixed water surface while the \(o-xyz\) system is vessel-fixed with origin \(o\) at midship and \(x,y\) pointing towards the bow and starboard respectively; \(z\) is pointing downwards. The center of gravity is at \((x_{G},0,0)\) in the vessel-fixed coordinate system; total sway at the center of gravity then is \(v=v_{m}+x_{G}r\), with \(v_{m}\) being the sway velocity at midships and \(r\) the turning rate. Surge velocity is denoted by \(u\), thus total ship velocity is given by \(U=\sqrt{u^{2}+v_{m}^{2}}\), drift angle at midships by \(\beta=\tan^{-1}(v_{m}/u)\) and the heading \(\psi\) by the angle between \(x_{0}\) and \(x\). 
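With the heading \(\psi\) and drift angle \(\beta\) now defined, the guidance errors of Section 2.2 can be assembled in a few lines. The following sketch (our own naming and an illustrative gain \(c\); for brevity the segment heading \(\chi_{P_{k}}\) is used in place of the blended \(\chi_{C_{k}}\)) implements Eqs. (1) and (3)-(6):

```python
import numpy as np

def guidance_errors(pos, p_k, p_k1, psi, beta, c=0.005):
    """Along-track error (3), cross-track error (4), VFG desired course (5),
    and course error (6) for one path segment; angles in radians."""
    chi_p = np.arctan2(p_k1[1] - p_k[1], p_k1[0] - p_k[0])   # eq. (1)
    dx, dy = pos[0] - p_k[0], pos[1] - p_k[1]
    x_e = dx * np.cos(chi_p) + dy * np.sin(chi_p)            # eq. (3)
    y_e = dx * np.sin(chi_p) + dy * np.cos(chi_p)            # eq. (4), paper's sign convention
    chi_d = np.arctan(c * y_e) + chi_p                       # eq. (5)
    chi_e = chi_d - psi - beta                               # eq. (6)
    return x_e, y_e, chi_e
```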
The forces acting on the ship are decomposed as follows: \[\left.\begin{array}{ll}\left(m+m_{x}\right)\dot{u}-\left(m+m_{y}\right)v_{m} r-x_{G}mr^{2}&=X,\\ \left(m+m_{y}\right)\dot{v}_{m}+\left(m+m_{x}\right)ur+x_{G}m\dot{r}&=Y,\\ \left(I_{zG}+x_{G}^{2}m+J_{z}\right)\dot{r}+x_{G}m\left(\dot{v}_{m}+ur\right)& =N,\end{array}\right\} \tag{7}\] where \(m\) is the mass of the ASV, \(m_{x}\) and \(m_{y}\) are the added masses in \(x\)- and \(y\)-axis direction respectively, \(x_{G}\) is the longitudinal coordinate of center of gravity, \(I_{zG}\) is the moment of inertia, \(J_{z}\) is the added moment of inertia, and \(r\) is the yaw rate. Total forces of the left-hand-side of (7), \(X,Y,N\), are surge force, lateral force and yaw moment around midship and consist of the following parts: \[\left.\begin{array}{ll}X=&X_{\rm H}+X_{\rm R}+X_{\rm P},\\ Y=&Y_{\rm H}+Y_{\rm R},\\ N=&N_{\rm H}+N_{\rm R}.\end{array}\right\} \tag{8}\] The subscripts H, R, P describe forces acting on the hull, rudder and propeller respectively. Further implementation details are deferred to the original paper by Yasukawa and Yoshimura (2015). Figure 2: Global and local coordinate systems. ### Environmental forces According to Fossen (2011, p. 39), vessel speed under the influence of currents becomes a relative speed \(U=\sqrt{(u-u_{c})^{2}+(v_{m}-v_{c})^{2}}\), where \(u_{c}\) and \(v_{c}\) are the current component velocities in longitudinal and lateral direction. The effects of shallow water on wake fraction, thrust deduction and flow-straightening coefficients are calculated after Amin and Hasegawa (2010), while the effects on hydrodynamic derivatives are adapted using combined formulations from Kijima and Nakiri (1990) and Ankudinov et al. (1990). A summary can be found at Taimuri et al. (2020). The effects on the maneuverability of the vessel are demonstrated in a zigzag and turning maneuver test shown in Figure 3. For the zigzag test, the vessel starts with an initial velocity \(U_{0}=4.0m/s\), a rudder angle of \(0^{\circ}\) and an arbitrary course \(\bar{\psi}-\beta\) (this study uses \(\bar{\psi}-\beta=0\)). The rudder angle is increased at a rate of \(5.0^{\circ}s^{-1}\) until it reaches its maximum value (\(10^{\circ}\) or \(20^{\circ}\)), at which it is held until the vessel's course is changed by the same amount. The rudder direction is then reversed with the same principle. For this test, currents are turned off. The turning maneuver test starts with the same initial conditions as the zigzag test; however, the rudder angle is increased to \(35^{\circ}\) and held there for the rest of the experiment. For both tests, we see impaired maneuverability for the vessel under shallow water conditions (\(h/d=1.2\)), which is to be expected and emphasizes the need for a precise controller on inland waterways. Figure 3: Zigzag and turning maneuver tests for water depth \(h\) and ship draught \(d\). The open-source implementation of the MMG dynamics used for this study can be found at Paulig (2022). ### Vessel model The vessel type used for simulation is a 1:5 scale model (L-64) of the KVLCC2 Tanker, as its dynamics are among the best understood that are publicly available. The ship's principal particulars can be found in Table 1. We use a 1:5 scaling to mimic the dimensions and behavior of small- to medium-sized inland cargo vessels. \begin{table} \begin{tabular}{l l} \hline \hline Scale & 1/5 \\ Displacement & 2500.8 \(m^{3}\) \\ Length between perpendiculars & 64.0 \(m\) \\ Width & 11.6 \(m\) \\ Block coefficient & 0.81 \\ Draft & 4.16 \(m\) \\ Rudder area & 4.5 \(m^{2}\) \\ Propeller diameter & 1.76 \(m\) \\ \hline \hline \end{tabular} \end{table} Table 1: Principal particulars of a KVLCC2 L64-model tanker ## 4 Reinforcement Learning framework ### Fundamentals RL is a subfield of machine learning in which an agent is trained to act in an environment such that it maximizes a reward signal received from the environment Sutton and Barto (2018). 
Formally, the simulated environment is modeled as a Markov Decision Process (MDP) Puterman (1994) described by the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\). At every time step \(t\), given a current state of the environment \(s_{t}\in\mathcal{S}\), the agent executes an action \(a_{t}\in\mathcal{A}\) according to a parameterized policy \(\pi_{\theta}\,:\,\mathcal{S}\rightarrow\mathcal{A}\). After performing the action, the agent receives a reward \(r_{t}\in\mathcal{R}\) and transitions to the next state \(s_{t+1}\) according to the state transition probability distribution \(\mathcal{P}\,:\,\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\). The return is defined as the cumulative discounted reward \(R_{t}=\sum_{i=t}^{t+T}\gamma^{i-t}r_{i}\) from the current time step until the final time step of the episode \(t+T\), with \(\gamma\in[0,1]\) being the discount factor that trades off the importance of immediate and later rewards. All the components of the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\) for the path-following objective will be specified in Section 4.3. The goal of RL is to find a policy that maximizes reward in the long term starting from some initial state. Most current algorithms use a state-action value function \(Q\,:\,\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) to assign a value to each state-action pair such that higher values represent pairs leading to a higher long-term reward. \(Q^{\pi}(s,a)=\mathbb{E}_{s\sim\mathcal{P},a\sim\pi}(R_{t}\,|\,s_{t}=s,a_{t}=a)\) denotes the expected discounted sum of rewards starting from state \(s\), taking action \(a\) and following policy \(\pi\) afterwards. The algorithmic foundation for this work is the Q-Learning algorithm Watkins and Dayan (1992) that uses the Bellman optimality equations Bellman (1957) to solve for the optimal Q-values \(Q^{*}\) satisfying \[Q^{*}(s,a)=\mathbb{E}\Big{\{}r_{t}+\gamma\max_{a_{t+1}\in\mathcal{A}}Q^{*}\left(s_{t+1}, a_{t+1}\right)\mid s_{t}=s,a_{t}=a\Big{\}}, \tag{9}\] from which an optimal policy can be derived by \(\pi^{*}(s)=\operatorname*{argmax}_{a}Q^{*}(s,a)\). The Q-Learning update rule for a given Q-value estimate \(\hat{Q}(s,a)\) and learning rate \(\alpha\) is \[\hat{Q}\left(s_{t},a_{t}\right)\leftarrow\hat{Q}\left(s_{t},a_{t}\right)+ \alpha\{\bar{\tau}^{T}-\hat{Q}\left(s_{t},a_{t}\right)\},\text{ for }\bar{\tau}^{T}=r_{t+1}+\gamma\max_{a_{t+1}\in \mathcal{A}}\hat{Q}\left(s_{t+1},a_{t+1}\right). \tag{10}\] To keep track of the Q-values, their values must be stored for each state-action pair, which is infeasible even for moderately-sized environments. To overcome this limitation, Mnih et al. (2015) introduced the DQN algorithm that uses two neural networks as function approximators for Q-Value estimation. The second network is a frozen copy of the first one that gets periodically updated to match the first. This contributes significantly to training stability, as bootstrapping the next action's Q-Value from the same network can lead to unpredictable behavior, especially in the early stages of training. 
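For concreteness, one tabular step of the update rule (10) can be written as follows (a toy sketch with our own function name; the deep variant replaces the table with the network described below):

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update, eq. (10); Q maps a state to an
    array of per-action value estimates."""
    target = r + gamma * np.max(Q[s_next])     # bootstrapped target tau^T
    Q[s][a] += alpha * (target - Q[s][a])
```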
### Overestimation bias One of the most often criticized problems of the Q-learning update rule in (10) is an overestimation bias. It is induced by the fact that the estimation of the bootstrapped target \(\bar{\tau}^{T}\) uses the maximum over all possible actions. Because all Q-Values are approximations of their true expectation, some estimations are probably higher than the true expected value Thrun and Schwartz (1993). This can lead to misjudgment during exploration, as states with falsely attributed high Q-Values are taken into consideration more often than the ones with falsely attributed low Q-Values. Eventually, this inequality can lead to suboptimal policies overall. Several approaches set out to mitigate this overestimation. Van Hasselt (2010) introduced Double Q-Learning, replacing over- with underestimation by separating selection and evaluation of the maximum, thereby achieving significant performance improvements in the deep-learning setup Van Hasselt et al. (2016). The approach in this paper was proposed by Waltz and Okhrin (2022) and provides an extension of the bootstrapping framework from Osband et al. (2016). The general idea is to rely on the ability of bootstrapping to provide measures of accuracy for statistical estimates, which is usually achieved by resampling an original dataset with replacement and calculating the statistics of interest using these bootstrapped samples. In the DRL setting, this is translated into either maintaining several distinct Q-networks, each with its own target network, or using a single network with a shared core and several heads (see Figure 4). This work will use the latter approach. Figure 4: Bootstrapped DQN architecture as proposed by Osband et al. (2016). The bootstrapping nature is achieved by randomly initializing the network heads and using a binary map to determine which head is to be updated on the current iteration. Additionally, Waltz and Okhrin (2022) propose to replace the maximum over all possible actions in the target with a kernel-based testing procedure. Suppose a network with one common core and \(B\in\mathbb{N}\) heads. Furthermore, let \(\kappa\) be a kernel function (in our study we use the Gaussian cumulative distribution function \(\Phi(\cdot)\)). The target for the \(b^{th}\) head then becomes \[\bar{\tau}^{T,b}=r+\gamma\left[\sum_{a_{t+1}\in\mathcal{A}}\kappa\left\{T_{ \hat{Q}_{b}}\left(s_{t+1},a_{t+1}\right)\right\}\right]^{-1}\sum_{a_{t+1}\in \mathcal{A}}\kappa\left\{T_{\hat{Q}_{b}}\left(s_{t+1},a_{t+1}\right)\right\} \hat{Q}_{b}\left(s_{t+1},a_{t+1};\theta_{b}^{-}\right), \tag{11}\] where \[T_{\hat{Q}_{b}}\left(s_{t+1},a_{t+1}\right)=\frac{\hat{Q}_{b}\left(s_{t+1},a_{ t+1};\theta_{b}^{-}\right)-\max_{a_{t+1}\in\mathcal{A}}\hat{Q}_{b}\left(s_{t+1},a _{t+1};\theta_{b}^{-}\right)}{\sqrt{\hat{\mathrm{Var}}\left\{\hat{Q}_{b} \left(s_{t+1},a_{t+1};\theta_{b}^{-}\right)\right\}+\hat{\mathrm{Var}}\left\{ \hat{Q}_{b}\left(s_{t+1},a^{\ast};\theta_{b}^{-}\right)\right\}}}, \tag{12}\] is a statistic for testing whether the selected action from head \(b\) is not smaller than the maximum estimate for that head, and the currently maximizing action is \[a^{\ast}\in\left\{a\in\mathcal{A}\mid\hat{Q}_{b}\left(s_{t+1},a;\theta_{b}^{- }\right)=\max_{a_{t+1}\in\mathcal{A}}\hat{Q}_{b}\left(s_{t+1},a_{t+1};\theta_{ b}^{-}\right)\right\}. \tag{13}\] In the following, we will stick with the naming of Waltz and Okhrin (2022) and call this algorithm _KEBDQN_. 
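A sketch of the kernel-based target computation for a single head, under our own naming (the variance estimates are assumed to be taken across the bootstrap heads, and a small constant guards against division by zero):

```python
import numpy as np
from scipy.stats import norm

def keb_target(r, q_next, q_var, gamma=0.99):
    """Kernel-based target, eqs. (11)-(12), for one head (sketch).
    q_next: target-network Q-values for all actions in s_{t+1};
    q_var:  variance estimates of those Q-values across the bootstrap heads."""
    a_star = np.argmax(q_next)                                         # eq. (13)
    T = (q_next - q_next[a_star]) / np.sqrt(q_var + q_var[a_star] + 1e-8)  # eq. (12)
    w = norm.cdf(T)                                                    # Gaussian kernel weights
    return r + gamma * np.dot(w, q_next) / np.sum(w)                   # eq. (11)
```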
Further implementation details are deferred to the original paper. ### Controller design for inland ASVs In this section, we describe the state, action and reward structure of the MDP used to model inland waterways for this study. As described in Section 4.1, on every time step \(t\), the agent (our vessel) observes a state \(s_{t}\) from the environment and chooses to perform action \(a_{t}\) according to its policy \(\pi_{\theta}\). State space: The state space \(s_{t}=\left(s_{t}^{d\top},s_{t}^{e\top}\right)^{\top}\) is assumed to be fully observable and involves two parts. The first part, \[s^{d}=\left(u_{t},\quad v_{t},\quad r_{t},\quad\delta_{t},\quad u_{t-1},\quad v _{t-1},\quad r_{t-1},\quad\delta_{t-1}\right)^{\top}, \tag{14}\] contains information about surge, sway and yaw rates and the rudder angle, \(\delta\), at the current and previous time steps. This way the agent can perceive the changes in dynamics resulting from different environmental conditions, for example, increased sway rates due to cross-current fields, or due to the agent's actions. The second part encodes information about the surroundings of the agent: \[s^{e}=\left(\tilde{y}_{e,t},\quad\tilde{y}_{e,t-1},\quad\chi_{e,t},\quad\chi_{e,t-1},\quad\frac{h_{t}-d}{\max(h)},\quad\gamma_{rel}\right)^{\top}, \tag{15}\] where \(h_{t}\) is the water depth at the vessel's position and \(d\) is the ship draught; thus \(\frac{h_{t}-d}{\max(h)}\) is the remaining water under keel normalized by the maximum depth possible in the environment. The current attack angle relative to the bow is \(\gamma_{rel}\), and \(\tilde{y}_{e,t}=c_{1}\tanh(y_{e,t})\), with \(c_{1}\) being a tunable hyperparameter controlling the importance of the cross-track error. The above cross-track-error scaling is done to stabilize training in later stages, as its raw magnitude exceeds all other observed quantities. Action space: In this study we follow other researchers Moreira et al. (2007); Amendola et al. (2019, 2020); Zhao et al. (2019); Martinsen et al. (2020) and assume constant thrust by fixing the propeller rotation rate to \(4.0s^{-1}\); i.e., the agent does not control its velocity, only its rudder angle. There are three possible actions \(a_{t}\in\{\delta_{t-1}-2^{\circ},\delta_{t-1},\delta_{t-1}+2^{\circ}\}\), that either increase or decrease the rudder angle by two degrees or leave it as is. The admissible rudder range is \(\delta_{t}\in\{-20^{\circ},-18^{\circ},\ldots,18^{\circ},20^{\circ}\}\). The choice of a stepwise rudder change rather than choosing between fixed angles avoids generating successive rudder commands of unrealistic magnitude, for example, \(\{a_{t}=-20^{\circ},a_{t+1}=20^{\circ}\}\), which would lead to structural damage of the rudder. Reward structure: The reward the environment emits acts as an immediate measurement of the quality of the action taken by the agent. To fulfill the path-following objective, minimal spatial and angular deviation from the given path is intended. Therefore, the reward system includes three parts: \[R_{t}=\omega_{1}R_{y_{e,t}}+\omega_{2}R_{\chi_{e,t}}+R_{\mathrm{aground,}t}. \tag{16}\] The first part rewards closeness to the desired path, while the second guides the agent towards its desired course as dictated by the underlying vector field. 
The terms are defined as: \[R_{y_{e},t}=\exp(-c_{2}|y_{e,t}|), \tag{17}\] and \[R_{\chi_{e},t}=\exp(-c_{3}|\chi_{e,t}|). \tag{18}\] If the water depth below the agent is less than \(1.2d\), the agent will receive a negative reward defined by \[R_{\mathrm{aground},t}=\begin{cases}-20&\text{if}\quad h_{t}<1.2d\\ 0&\text{otherwise}.\end{cases} \tag{19}\] The factor of 1.2 is used as a lower bound as the shallow-water correction terms for the hydrodynamic derivatives Kijima and Nakiri (1990); Ankudinov et al. (1990) lead to unrealistic vessel behavior below this bound. If the vessel advances to areas where the water depth falls below this threshold, the current episode is terminated. Preliminary testing concluded that values of \(c_{2}=0.1\), \(c_{3}=10\), \(\omega_{1}=0.6\) and \(\omega_{2}=0.4\) yielded a reward structure sensitive to cross-track error deviations of more than one ship width. Figure 5 shows a contour plot of the reward structure described above. The weights were chosen such that cross-track deviations are penalized more quickly than course deviations. This was done to allow the vessel to advance through curves and current fields with a non-zero drift angle while still attaining high rewards. ### Training environment generation Since restricted waterways in general, and rivers in particular, feature a wide variety of widths, lengths, riverbed profiles, water depth distributions and current velocities, a robust agent needs to be trained in an equally diverse training environment. To generate arbitrary rivers we loosely follow the procedures in Fossen (2011, p. 255) by using an alternating sequence of straight and curved segments of equal width \(w_{S}\), as shown in Figure 6. A given straight segment is described by the tuple \(S^{S}:=(\xi,l)\), while each curved segment is defined by the triple \(S^{C}:=(\xi,r_{C},\phi)\), where \(\xi\) is the starting angle of the segment against the ordinate, \(l\) is the length of the straight segment, \(r_{C}\) is the radius of the circle inscribing the curved segment, and \(\phi\) is the angle by which we want the segment to curve (curvature). A training environment is now built by chaining \(n\) straight and curved segments together in an alternating fashion to form an \(n\)-random river \[\mathrm{Riv}(n)=(S_{1}^{S},S_{1}^{C},S_{2}^{S},S_{2}^{C},\ldots,S_{n}^{S},S_{n }^{C}), \tag{20}\] by the following rules: The first angle \(\xi_{1}\) is initialized arbitrarily; in our study we use \(\xi_{1}=0\). All successive angles are calculated by: \[\xi_{k}=\xi_{1}+\sum_{i=1}^{k-1}\phi_{i}, \tag{21}\] for \(k\in\{1,2,\ldots,n\}\). We additionally divide the entire \(n\)-random river into \(p\) cross-sections \(C_{j}=\{q_{1,j},\ldots,q_{m,j}\},j\in\{1,2\ldots,p\}\), each holding \(m\) supporting points \(q_{i,j}=(x_{q_{i,j}},y_{q_{i,j}})^{\top}\), \(i\in\{1,2\ldots,m\}\). The set of all supporting points forms a two-dimensional grid (see Figure 6, bottom right), which will be used for current field and water depth sampling. On straight segments, the grid is equidistant such that \(d(q_{i,j},q_{i,j+1})=d(q_{i+1,j},q_{i,j})\), while for curved segments, the distance between adjacent supporting points varies depending on the segment's curvature. 
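The segment-level construction can be summarized in a short sketch (function and variable names are our own; the attribute ranges follow the training sets given in Section 4.5):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_river(n=5, xi_1=0.0):
    """Sample the segment parameters of an n-random river, eqs. (20)-(21)."""
    phi = rng.choice([-1.0, 1.0], n) * np.deg2rad(rng.integers(60, 101, n))  # curvatures
    r_c = rng.integers(1000, 5001, n)          # curve radii [m]
    length = rng.integers(400, 2001, n)        # straight-segment lengths [m]
    xi = xi_1 + np.concatenate(([0.0], np.cumsum(phi)[:-1]))  # starting angles, eq. (21)
    # each entry pairs a straight segment (xi, l) with a curved segment (xi, r_C, phi)
    return [((xi[k], length[k]), (xi[k], r_c[k], phi[k])) for k in range(n)]
```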
For every cross-section \(C_{j}\), the water depth is sampled according to \[h_{q_{i,j}}=-h_{\max}\exp\Bigl{\{}-\epsilon\cdot d(q_{i,j},q_{j}^{M})^{4} \Bigr{\}}+\eta, \tag{22}\] with random noise \(\eta\sim\mathcal{N}(0,\sigma)\), maximum water depth \(h_{\max}\), and \(\epsilon\), a parameter controlling the river wall steepness and fairway width. \(q_{j}^{M}\) is the middle point of a given cross-section \(j\) such that \(d(q_{1},q_{j}^{M})=d(q_{m},q_{j}^{M})\). Figure 5: Reward contours for the path-following setup. To induce a current field, for a given maximum current speed \(\nu_{max}\) and a cross-section \(C_{j}\), we set the direction of current for all supporting points in that cross-section to be \(\gamma_{q_{i,j}}=\frac{2}{p}\pi j\) radians and the current speed to be \(\nu_{q_{i,j}}=\mathsf{f}(\frac{2}{p}\pi j)\nu_{max}\) for all supporting points of \(C_{j}\). The function \(\mathsf{f}\,:\,\mathbb{R}\to[-1,1]\) can be an arbitrary continuous periodic function; this study uses the cosine. By tying the current generation process to the number of cross-sections, two rivers constructed from identical segments also share an identical current field, which is helpful in terms of reproducibility. Yet, since the segments are rotated at random on each generation iteration, the likelihood of constructing alike rivers during training decreases exponentially with the number of segments. ### Training For training, we chose a discretization time-step of \(\Delta T=1s\) and an episode length of 2000 steps, equating to roughly 33 minutes in real-time. At the beginning of each episode, a random river is generated as described in Section 4.4. We construct \(n=5\) straight and curved segments with angles, radii, and lengths drawn uniformly from the following sets \[\phi \in\{\pm 60^{\circ},\pm 61^{\circ},\ldots,\pm 100^{\circ}\},\] \[r \in\{1000,\ldots,5000\},\] \[l \in\{400,\ldots,2000\}.\] The value ranges for \(r\) and \(l\) are chosen such that they resemble real-world river behavior and avoid the construction of unrealistically sharp turns or overly short straights. During training, we sample values from each set with equal probability. Figure 6: Example river generated from five segments as described in Section 4.4. The bottom right view details a curved river segment, showing the width of the segment \(w_{S}\), a cross-section, \(C_{i}\) (see Figure 7 for a side-view), as well as an example path comprised of four waypoints starting at \(P_{k}\) and ending at \(P_{k+n}\). Figure 7: Cross-section view through a river segment. The distorted grey line represents the depth-generation function with added noise. We set \(w_{S}=500m\), \(d(q_{k},q_{k+1})=20m\) and the maximum current velocity \(\nu=1.5ms^{-1}\); two example generated rivers can be found in Figure 8. At the beginning of each episode, the agent-vessel is placed at the outset of the first segment of the constructed river with a heading equal to the current path heading plus some noise \[\psi_{0}=\chi_{P_{0}}+R,\quad\text{with }R\sim\mathcal{U}(-5^{\circ},5^{\circ}), \tag{23}\] an initial speed \(U_{0}=4.0ms^{-1}\), and a fixed propeller rotation rate of \(4.0s^{-1}\). For this study, we use a network with one common core and 10 heads. The core network is a multilayer perceptron with a single hidden layer containing 128 neurons. The heads follow the same structure as the core, each with one hidden layer containing 128 neurons. 
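A PyTorch sketch of this shared-core, multi-head architecture (the class name is our own; the dimensions follow the state and action spaces of Section 4.3):

```python
import torch
import torch.nn as nn

class BootstrappedQNet(nn.Module):
    """Shared core with B bootstrap heads, cf. Figure 4 and Section 4.5."""
    def __init__(self, obs_dim=14, n_actions=3, hidden=128, n_heads=10):
        super().__init__()
        self.core = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_actions))
            for _ in range(n_heads)
        ])

    def forward(self, x):
        z = self.core(x)
        # one Q-value vector per head: (batch, heads, actions)
        return torch.stack([head(z) for head in self.heads], dim=1)
```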
During training, random batches of 128 transitions are sampled from a replay buffer of size \(10^{6}\), and gradient updates are performed by the Adam optimizer Kingma and Ba (2015) with a learning rate of \(\alpha=5\times 10^{-4}\) and a discount rate of \(\gamma=0.99\). Training has been conducted for \(3\times 10^{6}\) steps; the implementation framework for the _KEBDQN_ is the TUD_RL package Waltz and Paulig (2022), written in Python. For comparison, we also trained a vanilla _DQN_ alongside the _KEBDQN_. The DQN hyperparameter setup can be found in the appendix, while Figure 9 summarizes the training of 15 different seeds per algorithm. Figure 8: Example rivers used for training, generated as described in Section 4.4. ## 5 PID Benchmark In preparation for the validation of our approach, we chose a PID rudder controller design for the KVLCC2 tanker from Paramesh and Rajendran (2021) to serve as a performance benchmark. The original PID implementation is tuned to the full-size vessel; therefore, the provided gains cannot be used in this paper. To find the best possible PID configuration for comparison against the DRL controller, we use a Particle Swarm Optimization (PSO) procedure with random uniform inertia weights proposed by Eberhart and Shi (2000) to tune our PID controller. The rudder angle at every time step evaluates to: \[\delta_{t}=K_{p}\chi_{e,t}+K_{d}r_{t}+K_{i}\int_{0}^{t}\chi_{e,t}\mathrm{d}t. \tag{24}\] Initial gains, \(K_{p}=2.96\), \(K_{d}=19\), \(K_{i}=0.03\), have been found via a coarse grid search. The PSO algorithm used the objective function \[J(t)=\int_{0}^{t}\chi_{e,t}^{2}\mathrm{d}t \tag{25}\] to solve for \(\mathrm{argmin}_{K_{p},K_{i},K_{d}}J(t)\), which yielded \(K_{p}=2.81\), \(K_{d}=64\), \(K_{i}=0.0\) as gains, thereby reducing the system to a PD controller. We also used a different objective function with an additive term for minimum overshoot, yet the result could not beat the simple procedure from above. The controller response was tested in three different scenarios. In all three, the agent is set into a straight channel with a course error of \(14^{\circ}\) and a cross-track error of 50 meters. Responses for no current, current to the bow, and current to the stern can be found in Figure 10. Additionally, as with the RL agent, the maximum change in rudder angle is limited to \(2^{\circ}s^{-1}\) to respect the structural integrity of the ASV. Figure 10: PID-response for achieving zero course error. Figure 9: Training results running 15 independent seeds for \(3\times 10^{6}\) steps each. The shaded area represents the 95% point-wise confidence intervals. The theoretical reward limit is 2000. ## 6 Simulation and validation The policy found from training was simulated on several sections of the lower and middle Rhine as well as on artificial scenarios checking for reactivity under harsh environmental changes. All experiments are carried out for the _KEBDQN_, PID, and DQN approaches for comparison. ### Rhine river The first scenario validates the performance on a near-180° turn on the _lower Rhine_ close to Düsseldorf harbor (\(51.22^{\circ}N,6.73^{\circ}E\)), as it features one of the tightest turns in the lower Rhine. Figure 11 shows a map containing the path to be followed and the trajectories generated by each approach; the corresponding metrics are depicted in Figure 12. The second validation scenario was selected on the _middle Rhine_. 
We chose a segment close to the _Lorelei_ (\(50.12^{\circ}N,7.73^{\circ}E\)), which features one of the smallest widths on the river together with a fast succession of right and left turns. The results can be seen in Figures 13 and 14. In both scenarios, the path was generated by selecting the deepest point for every cross-section through the entire river and smoothing the result using two-dimensional exponential smoothing. Figure 12: Time series of relevant metrics for the Düsseldorf harbor scenario. The reward plot for the PID controller is the reward it would have received if it were judged by the same reward function as the RL-based controller. Figure 13: 2D map of the starboard turn near the Lorelei. Analyzing the rudder commands generated for both scenarios, we observe relative similarity in magnitude and direction, indicating that the DRL agents were able to learn a behavior similar to that exerted by the PID controller. Inspecting the cross-track error and course error for both scenarios, both controllers are again found to follow similar patterns; however, the DRL controller reacts quicker, thus being able to achieve a maximum cross-track deviation of \(4.36m\) compared to \(26.30m\) from the PID controller for the Düsseldorf harbor scenario, and \(14.67m\) and \(27.47m\) respectively for the Lorelei. One of the major drawbacks of discrete RL-based controllers is the jittering of the rudder angle, as seen in the rudder commands in Figures 12 and 14; however, since the rudder steps in the RL approach are chosen such that the structural limits of the ASV are respected, the jittering would not damage the rudder of the ASV if this algorithm were deployed in the real world. In earlier stages of research, we followed other authors Martinsen and Lekkas (2018) and added a penalty for changing rudder angles too quickly, concluding that less change in rudder angle came at the cost of losing cross-track-error accuracy. Since we valued accuracy higher than slow rudder change, the penalty term was removed. Figure 14: Time series of relevant metrics for the Lorelei scenario. The reward plot for the PID controller is the reward it would have received if it were judged by the same reward function as the RL-based controller. ### Straight paths For maneuvers like berthing, docking, and locking, or advancing through canals, it seems natural to ask the vessel to follow a straight line with very high accuracy. We will test this ability for the PID and DRL controller in a straight-path scenario. We would expect the PID controller to achieve near-perfect convergence to the path, as any other result would indicate a misconfigured set-point. As with the PID calibration, the vessel will be placed in a straight canal 50 and 20 meters starboard of the path with course errors of \(14^{\circ}\) and \(5.7^{\circ}\) respectively. Propeller revolutions are fixed to \(4.0s^{-1}\) and the initial velocity \(U_{0}=2.0ms^{-1}\). We assume no currents and a water depth to draught ratio of roughly \(h/d=2.40\). The results from Figure 15 confirm our initial assumption about the PID controller. In comparison to our DRL approach, the PID converges faster and more accurately, falling below one meter of cross-track error after \(600m\) of advance. The DRL rate of convergence seems to depend on the offset magnitude from the path. 
In the \(20m\) offset scenario, the _KEBDQN_ performs similarly to the PID, while for the \(50m\) offset the DRL agent has noticeable difficulties returning to the path. We assume that the agent rarely saw cross-track errors this large at the beginning of an episode during training, so no convergence strategy could be developed. Figure 15: Straight-path-following experiment. Consecutive markers are each 30 seconds apart. ## 7 Robustness analysis ### Varying revolutions To verify the robustness of our approach, both controllers were driven up and downstream through the entire lower- and middle Rhine, each in a single episode. We did this once for a propeller revolution rate of \(4.0s^{-1}\), which is also the frequency used for training, and another time using \(5.0s^{-1}\) to investigate the generalization capabilities of our approach. The cross-track-error distributions achieved are depicted in Figure 17. For the downstream scenario we find acceptable results for both controllers, whereby the DRL solution exhibits significantly smaller variance yet is biased towards starboard, for both \(4.0s^{-1}\) and \(5.0s^{-1}\). Interestingly, this bias does not appear in the upstream scenarios, ruling out doubt about starboard-biased training, as we would expect to observe a bias towards port when driving upstream. For the upstream scenario, the PID controller appears to be sensitive to changes in ship velocity, with a tendency of becoming more stable at higher velocities. The inability of the PID controller to stay on course at a slower speed and high current velocities towards the bow (the middle Rhine features current velocities of up to \(2.4ms^{-1}\), and the lower Rhine up to \(1.5ms^{-1}\)) is likely due to a misconfiguration of the PID controller for such environments. The fundamental problem with PIDs is that it may be impossible to find a set of gains that optimally controls the rudder in dynamic environments featuring a wide range of external disturbances. DRL approaches, in contrast, have the ability to adapt to more general cases, as they can rely on the experience acquired during training. Although the agents did not see current velocities above \(1.5ms^{-1}\), they were trained to react to currents from every direction. This may lead to an additional environmental awareness, capable of achieving small cross-track errors across varying external disturbances. ### Noisy observations To further explore the robustness of our approach against the PID controller, we decided to compare cross-track-error performance under noisy sensor inputs. Impaired sensor measurements appear regularly in real-world applications, thus providing a valuable platform for evaluating controller behavior. We again chose the unaltered Düsseldorf scenario on the lower Rhine as in Section 6.1, but with added Gaussian noise to the yaw-rate \(\bar{r}_{t}=r_{t}+\epsilon_{r},\epsilon_{r}\sim\mathcal{N}(0,\bar{\sigma}_{r})\) and course error \(\bar{\chi}_{e,t}=\chi_{e,t}+\epsilon_{\chi_{e}},\epsilon_{\chi_{e}}\sim \mathcal{N}(0,\bar{\sigma}_{\chi_{e}})\). The standard deviations are calculated from the empirical distributions of yaw rate and course error obtained from driving the _KEBDQN_ controller through the entire river in a single episode and had been measured to be \(\bar{\sigma}_{r}=0.004\,\mathrm{rad}\cdot s^{-1}\) and \(\bar{\sigma}_{\chi_{e}}=0.052^{\circ}\). All other sensor inputs for the DRL approach are left unchanged. The results in Figure 16 paint an ambiguous picture. 
On the one hand, we still observe greater cross-track-error accuracy for the DRL controller; on the other hand, the deviation from the noise-free run is smaller for the PID controller. We also did this for several other scenarios (available upon request), all with similar outcomes. Although the variation in cross-track error for the PID controller under noisy inputs is technically smaller, the accuracy achieved by the DRL controller remains higher. Therefore, we can conclude that, in terms of sensor noise, our PID controller attains lower deviations from its anticipated position than the DRL system. However, the broader picture of robustness can be seen in favor of the DRL controller, since it not only produces smaller absolute cross-track errors, even under noisy inputs, but also performed well under unseen propeller revolutions and very slow vessel advance rates. ## 8 Conclusion ASV path-following on inland waterways poses several additional challenges compared to the open sea. The present study addresses these challenges by using a state-of-the-art bootstrapped DQN algorithm to develop a robust and versatile rudder controller for path-following on inland waterways. Optimal control approaches such as PID, or traditional DRL algorithms such as DQN, showed inferior adaptability to the highly dynamic river environment, especially for upstream scenarios with strong flow velocities towards the vessel's bow. We acknowledge that those approaches can also generate rudder commands leading to accurate control of the ASV. Yet, they would require re-training or reconfiguration to adapt to the versatile dynamics of restricted waterways. Furthermore, our paper considers neither traffic nor dynamic changes in propeller revolutions, which may be oversimplifications and should be addressed in future research. ## 9 Acknowledgements The authors would like to thank the German Federal Waterways Engineering and Research Institute (BAW, Bundesanstalt für Wasserbau) for providing real-world depth and current data for the lower- and middle Rhine as well as the Center for Information Services and High-Performance Computing at TU Dresden for providing its facilities for high-throughput calculations. The authors would also like to extend their gratitude to Martin Waltz and Fabian Hart for their valuable discussions and support throughout this project.
2309.00248
DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models
Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce "DiffuGen," a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities.
Michael Shenoda, Edward Kim
2023-09-01T04:42:03Z
http://arxiv.org/abs/2309.00248v1
# DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using Stable Diffusion Models ###### Abstract Generating high-quality labeled image datasets is crucial for training accurate and robust machine learning models in the field of computer vision. However, the process of manually labeling real images is often time-consuming and costly. To address these challenges associated with dataset generation, we introduce "DiffuGen," a simple and adaptable approach that harnesses the power of stable diffusion models to create labeled image datasets efficiently. By leveraging stable diffusion models, our approach not only ensures the quality of generated datasets but also provides a versatile solution for label generation. In this paper, we present the methodology behind DiffuGen, which combines the capabilities of diffusion models with two distinct labeling techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt templating for adaptable image generation and textual inversion to enhance diffusion model capabilities. Figure 1: Generated diverse images with visualization of the labels and cross attention heatmap. ## 1 Introduction Generating labeled image datasets for machine learning and computer vision applications is pivotal for model training and evaluation. The quality of these datasets significantly impacts model performance and generalization. In this context, stable diffusion models emerge as a promising avenue for dataset generation due to their ability to generate high-resolution and realistic images. The key objective of this work is to address the challenge of generating diverse and accurately labeled datasets, enabling the development of more robust machine learning models. Through simple techniques such as prompt templating and textual inversion, DiffuGen enhances dataset diversity and generation capabilities. The dual labeling techniques are introduced to complement each other. The unsupervised method is useful when lacking a supervised model for labeling; it utilizes the cross attention attribution heatmaps, extracted through the diffusion pipeline, to produce coarse labels. The supervised method is effective when an image segmentation model already exists and is ready to be used for labeling and further fine-tuned with generated image datasets. The approach discussed in this paper doesn't require any training, except for expanding a diffusion model with textual inversion. Our experiments showcase the effectiveness of our approach in producing diverse and accurately labeled datasets, offering a promising solution for advancing research in machine learning applications. In our demonstrations we focused on generating car datasets with the ability to generate difficult visual scenarios, such as car accidents that involve severe collisions. ## 2 Related Work The field of dataset generation has witnessed the advent of various approaches, including GAN-based approaches such as DatasetGAN [3] and BigDatasetGAN [4]. These techniques offer solutions for image synthesis but lack the quality and flexibility of image generation offered by stable diffusion models. Additionally, the emergence of DiffuMask [9] highlights the potential of stable diffusion models for semantic segmentation but does not cover a broader generation and labeling scope. Our approach distinguishes itself by providing a comprehensive and flexible solution to the labeling challenge, offering semantic segmentation, bounding polygons for instance segmentation, and bounding boxes for object detection. ## 3 Methodology DiffuGen provides a robust framework that integrates pre-trained stable diffusion models, the versatility of prompt templating, and a range of diffusion tasks. By using an input configuration JSON, users can specify parameters to generate image datasets using three primary stable diffusion tasks. Each of these tasks not only benefits from the prompt templating mechanism, ensuring adaptability and richness, but also comes with its dedicated integral labeling pipeline. This design allows DiffuGen to provide both supervised and unsupervised labeling methods tailored to the specific needs of each task, ensuring a well-aligned and efficient labeling process for diverse application needs. Figure 2: DiffuGen Framework Overview. 
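The configuration format itself is not reproduced in the paper; a hypothetical input, written here as a Python dict mirroring the JSON, might look as follows (all keys and values are our own illustration, not DiffuGen's actual schema):

```python
# Hypothetical DiffuGen-style configuration (illustrative only; the real JSON schema may differ).
config = {
    "model": "Realistic_Vision_V4.0",
    "task": "text2img",                      # or "img2img", "inpainting"
    "prompt_template": "a photo of a {color} {object} on a road, {weather}",
    "attributes": {
        "color": ["red", "white", "black"],
        "object": ["car", "truck"],
        "weather": ["sunny", "rainy", "foggy"],
    },
    "images_per_prompt": 4,
    "labeling": "unsupervised",              # cross-attention based; "supervised" would use a segmentation model
}
```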
## 3 Methodology DiffuGen provides a robust framework that integrates pre-trained stable diffusion models, the versatility of prompt templating, and a range of diffusion tasks. By using an input configuration JSON, users can specify parameters to generate image datasets using three primary stable diffusion tasks. Each of these tasks not only benefits from the prompt templating mechanism, ensuring adaptability and richness, but also comes with its dedicated integral labeling pipeline. This design allows DiffuGen to provide both supervised and unsupervised labeling methods tailored to the specific needs of each task, ensuring a well-aligned and efficient labeling process for diverse application needs. Figure 2: DiffuGen Framework Overview ### Utilizing Pre-Trained Stable Diffusion Models Pre-trained stable diffusion models serve as the cornerstone of DiffuGen, offering consistency, quality, and adaptability in image generation. During our selection process for the optimal model, we began with the "stable-diffusion-v1-5" [5]. Yet, our evaluations indicated that this model fell short in delivering the desired realism for our dataset generation. The realism is essential to maximize the datasets' relevance in training machine learning models for high-fidelity vision applications. Recognizing these constraints, our exploration led us to the "Realistic_Vision_V4.0" [6] model, which distinctly excelled in producing photo-realistic images. Crucially, it retained an accurate object representation with minimal deformation. ### Prompt Templating Prompt templating is the cornerstone of DiffuGen's adaptability and flexibility. Users craft prompt templates populated with replaceable attributes, such as object names, viewpoints, weather conditions, and more. This mechanism allows for the reuse of the same prompt to create a multitude of variations, vastly enhancing the diversity of samples generated.While it forms the foundation for the text-to-image task, generating the initial dataset, its utility is not restricted to this phase alone. The same templating mechanism is adeply reused in the subsequent tasks, image-to-image and inpainting. By allowing the replacement or introduction of attributes, which might not have been specified during the initial text-to-image phase, prompt templating ensures continuous adaptability throughout the dataset generation process. ### Extending Image Diversity with Different Diffusion Tasks Text-to-Image: Serving as the initial phase in dataset creation, this task utilizes prompt templates and their replaceable attributes to generate a diverse set of images. This image dataset set the baseline for the subsequent tasks, providing a rich and diverse foundation dataset. Image-to-Image: Building upon the foundational dataset from the text-to-image phase, this task introduces variations such as changes in lighting and environment. It benefits from the same prompt templating mechanism, enabling users to modify or introduce new attributes seamlessly, thereby enhancing the diversity and richness of the dataset. In-painting: Going beyond mere object replacement, this task dive deep into texture variations and color alterations of the object. It is not just about introducing new changes but also for refining them. The prompt templating once again plays a crucial role here, providing users with the ability to specify and guide the changes by introducing newer objects to replace or altering them, resulting in a dataset that's both expansive and detailed. 
### Expanding Stable Diffusion Capability with Textual Inversion Textual inversion serves as a powerful technique, enabling the capture of novel concepts from a limited set of example images. This technique holds the potential to significantly enhance the precision and control over image generation in the text-to-image pipeline. The essence of textual inversion lies in its ability to introduce new "words" into the text encoder's embedding space. These words, learned from a handful of example images, extend the vocabulary of the model, directing more attention to the new concept. This empowers users to drive personalized image generation, steering it towards specific and nuanced visual concepts. An additional advantage of the textual inversion technique is its lightweight nature: the learned embeddings are only a few kilobytes in file size. The learned textual inversion embeddings can be transferred to other models derived from the same base, preserving the capability to control and enrich image generation. As a demonstration of the textual inversion technique, we focused on training a rare object that is typically unseen on the road: "grand-piano", as shown in Figure 3. In instances where the original model struggled to generate a piano on the road, the textual inversion successfully refined the generation process. This exemplifies how textual inversion can fill the gaps in object visual representations within a diffusion model. In addition, we trained a car-accident textual inversion concept to fine-tune examples of car collisions on the road; a few samples are shown in Figure 1. ### Unsupervised Labeling with Cross Attention Attributions The approach uses cross attention attribution heatmaps introduced in "What the DAAM: Interpreting Stable Diffusion Using Cross Attention" [1]. These heatmaps offer a visual representation of the relationship between textual prompts and pixel-level influences in the generated images. It works by upscaling and aggregating cross-attention word-pixel scores within the denoising subnetwork of stable diffusion models, resulting in heatmaps that highlight areas influenced by specific words. We build upon this technique to provide a foundation for automatically generating labels, such as semantic masks, bounding polygons, and bounding boxes, without the need for manual annotations. #### 3.5.1 Semantic Mask Extraction After a cross attention heatmap is obtained, we apply Otsu [2] adaptive thresholding to the heatmap image to extract a coarse semantic mask that outlines the shapes of objects indicated by the textual prompts. Further refinement is done by performing erosion and dilation operations on the binary mask to separate nearby binary blobs and fill small holes. Figure 3: Example of textual inversion for <grand-piano> as a rare object concept on the road. On the left are some of the real samples used for training; on the right are the generated images after training. #### 3.5.2 Bounding Polygons and Bounding Box Localization We identify contours within the semantic binary mask by using a simple chain approximation. By iterating through the detected contours, we precisely fit bounding boxes around the object-centric regions. This process results in bounding polygon and bounding box coordinates encapsulating the generated objects. In addition, we introduce an object label scoring mechanism for the generated labels. This scoring method takes into account both the size of the object represented by the bounding box and its intensity level within the cross attention heatmap. This approach provides a unique way to prioritize and evaluate the significance of the generated labels. 
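A compact OpenCV sketch of the mask extraction and contour-based localization described in Sections 3.5.1 and 3.5.2 (the function name, kernel size, and normalization are our own illustrative choices):

```python
import cv2
import numpy as np

def heatmap_to_labels(heatmap: np.ndarray):
    """Coarse mask via Otsu thresholding plus morphology (Sec. 3.5.1),
    then contour-based polygons and boxes (Sec. 3.5.2)."""
    h8 = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(h8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)   # separate blobs, fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = [c.reshape(-1, 2) for c in contours]      # bounding polygons
    boxes = [cv2.boundingRect(c) for c in contours]      # (x, y, w, h) per object
    return mask, polygons, boxes
```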
This scoring approach provides a unique way to prioritize and evaluate the significance of the generated labels. ### Supervised Labeling with Existing Segmentation Models For scenarios demanding higher precision, DiffuGen proposes the use of supervised segmentation models such as YOLOv8-seg [7] and Mask2Former [8]. In cases where existing pre-trained models fail to predict the generated images, DiffuGen's unsupervised cross-attention technique can serve as a foundation, facilitating the training of new models using labels produced by the unsupervised approach. Currently, DiffuGen integrates with YOLOv8-seg as a proof of concept to demonstrate the approach. ## 4 Experiments and Results To validate the effectiveness of DiffuGen, we undertook a series of experiments. The main objectives were to assess the quality and diversity of the generated images and the accuracy of the labels provided by the unsupervised and supervised methods. ### Dataset Generation and Diversity Using the text-to-image, image-to-image, and in-painting diffusion tasks, we generated images of various car scenarios; refer to Figure 1. A majority were normal scenarios, while a fraction, specifically controlled by textual inversion, included pianos on the road and car accidents. Visual assessment showed a high degree of realism, and the diversity in scenarios, colors, lighting conditions, and object placements was commendable. Figure 4: Visualizing semantic mask generation by adaptive thresholding of the cross-attention heatmap during the diffusion process, demonstrating the potential for labeling rare objects or anomalies that have yet to be trained into the supervised labeling model. Figure 5: Visualizing bounding polygon and box localization by finding contours of the semantic binary mask. ### Labeling Accuracy Through experimentation, we found the accuracy of a supervised approach to be unmatched, as long as annotated samples exist to begin with. That is where the unsupervised approach shines: it bridges the gap and kick-starts the supervised approach. #### 4.2.1 Unsupervised Labeling We visually inspected the labels produced by the unsupervised approach. We found that the majority of the images were labeled accurately, particularly in scenes with fewer objects. In crowded or complex scenes, however, inconsistencies in object detection were occasionally noted. This is due to an inherent limitation of the cross-attention heatmap approach. #### 4.2.2 Supervised Labeling Utilizing YOLOv8-seg, we found the supervised method to have higher labeling accuracy and finer segmentation boundaries, with the exception of the car accidents with severe collisions generated via textual inversion. ## 5 Limitations and Future Enhancements DiffuGen inherits biases from the underlying diffusion model, impacting the generated data, and the image quality depends on the underlying model's quality. Relying solely on visual inspection introduces subjectivity, so quantitative assessment would be beneficial for future iterations. Looking ahead, enhancements for DiffuGen include implementing objective metrics to gauge image quality and label accuracy, expanding evaluations across domains, refining unsupervised labeling techniques, and addressing model biases through diverse training data. ## 6 Conclusion DiffuGen offers a new approach to creating high-quality labeled image datasets. The challenges traditionally associated with manual labeling are greatly reduced, and our visual inspections underline its efficacy.
While there is room for improvement, DiffuGen marks a significant stride in the realm of dataset generation, offering considerable advantages to the computer vision and machine learning domains. Figure 6: Generated images of a classic piano on the road, with textual inversion (left) and without (right), using the same prompt and random seed.
2310.15040
SLOG: A Structural Generalization Benchmark for Semantic Parsing
The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions. Existing benchmarks often focus on lexical generalization, the interpretation of novel lexical items in syntactic structures familiar from training; structural generalization tasks, where a model needs to interpret syntactic structures that are themselves unfamiliar from training, are often underrepresented, resulting in overly optimistic perceptions of how well models can generalize. We introduce SLOG, a semantic parsing dataset that extends COGS (Kim and Linzen, 2020) with 17 structural generalization cases. In our experiments, the generalization accuracy of Transformer models, including pretrained ones, only reaches 40.6%, while a structure-aware parser only achieves 70.8%. These results are far from the near-perfect accuracy existing models achieve on COGS, demonstrating the role of SLOG in foregrounding the large discrepancy between models' lexical and structural generalization capacities.
Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim
2023-10-23T15:39:09Z
http://arxiv.org/abs/2310.15040v1
# SLOG: A Structural Generalization Benchmark for Semantic Parsing ###### Abstract The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions. Existing benchmarks often focus on _lexical generalization_, the interpretation of novel lexical items in syntactic structures familiar from training. _Structural generalization_ tasks, where a model needs to interpret syntactic structures that are themselves unfamiliar from training, are often underrepresented, resulting in overly optimistic perceptions of how well models can generalize. We introduce SLOG, a semantic parsing dataset that extends COGS Kim and Linzen (2020) with 17 structural generalization cases. In our experiments, the generalization accuracy of Transformer models, including pretrained ones, only reaches 40.6%, while a structure-aware parser only achieves 70.8%. These results are far from the near-perfect accuracy existing models achieve on COGS, demonstrating the role of SLOG in foregrounding the large discrepancy between models' lexical and structural generalization capacities. ## 1 Introduction Compositional generalization benchmarks that test the ability to understand novel utterances based on composition of known parts Montague (1974); Partee (1984); Fodor and Pylyshyn (1988) have emerged as a useful tool for model evaluation in semantic parsing. COGS Kim and Linzen (2020) in particular has become a widely-used benchmark, as it is designed to expose a generalization gap between training and testing data that many recent semantic parsers still struggle with. COGS distinguishes two distinct types of generalization challenges: _lexical generalization_ tests a model's ability to interpret novel combinations of known lexical items and known linguistic structures (Figure 1a), whereas _structural generalization_ tests the ability to combine known structures into a novel structure (Figure 1b). Importantly, most of the generalization types in COGS are lexical generalization (18 out of 21 generalization types, 86% of the dataset). As lexical generalization is arguably easier than structural generalization (e.g., solvable by simple slot-filling), this imbalance may lead to overall performance numbers that are overly optimistic with regard to a model's capacity to generalize compositionally Yao and Koller (2022). To facilitate a more comprehensive evaluation of structural generalization, we introduce SLOG, a **S**tructural **LO**ng-distance dependencies **G**eneralization benchmark. SLOG extends COGS to include 17 cases of structural generalization in total (14 new cases and 3 existing cases from COGS) (§2). The novel generalizations we introduce target two key structural features of human language (§3): recursion and filler-gap dependencies. We use SLOG to evaluate a sequence-to-sequence (seq2seq) Transformer model trained from scratch (Vaswani et al., 2017), two pretrained Transformers (T5-base; Raffel et al. 2020 and LLaMA; Touvron et al. 2023), and a structure-aware\({}^{1}\) model (AM-Parser; Weissenhorn et al. 2022). In comparison to their overall performance on COGS, all models exhibit considerably lower performance on SLOG (§5). Figure 1: Examples of lexical generalization in COGS (a), and structural generalization in SLOG (b). The SLOG task requires mapping the generalization examples to their logical forms; the corresponding logical forms are shown in Table 1.
An error analysis reveals that the structure-aware AM-Parser generalizes well on the existing structural generalization cases in COGS but struggles with the gap constructions introduced in SLOG due to inherent structural limitations, which we discuss in §5.3. Transformers tend to erroneously repeat frequent meaning representation subsequences observed during training. Even with pretraining, they struggle with unseen long-distance dependencies, which we attribute to their bias towards shorter predicate-argument dependencies. Overall, the discrepancy in performance between SLOG and COGS demonstrates the utility of SLOG in exposing the overall limitations of current semantic parsing models shown to achieve high performance on existing generalization benchmarks, as well as highlighting the different weaknesses of these models. Footnote 1: In this paper, ‘structure-aware’ refers specifically to models that incorporate explicit representations of linguistic structure. ## 2 The SLOG Benchmark SLOG follows the semantic parsing format used in COGS, where the task is to translate English expressions into logic-based meaning representations. As in COGS, there is a systematic gap between the training set and the generalization set: there are constructions in the generalization set that are not included in the training set, but pieces of constructions included in the training set can be recombined to arrive at their correct meanings. For example, as illustrated in Table 1, noun phrases that appear only in object position during training must be reinterpreted in subject position during generalization. SLOG\({}^{2}\) is generated using manually specified rules (§3), adopting the same meaning representation as COGS. The COGS logical form (LF), derived from Reddy et al. (2017), uses indexed constants to represent entities or events. For example, in the first example of Table 1, \(x_{3}\) denotes an entity that is both a dog and the theme of a seeing event, while \(x_{1}\) denotes the seeing event. The constant names are determined by the linear position of the phrasal head in the input sentence. Footnote 2: The generation code and SLOG dataset are available at [https://github.com/bingzhilee/SLOG](https://github.com/bingzhilee/SLOG). SLOG contains 17 structural generalization cases grouped into four categories. These generalization cases are primarily motivated by frequency asymmetries in natural language, where simpler structures are more common than complex ones; in other words, SLOG assesses whether NLP models can extrapolate from frequent patterns to rare ones. We describe the four categories below; see Table 2 for the full list of generalization cases. ### Novel Recursion Depth Recursion allows smaller phrases to be combined to create larger phrases. This combination process can be repeated an unbounded number of times. COGS tests a model's ability to apply recursion in two cases: sentential complements (tail CP recursion)\({}^{3}\) and nominal prepositional phrase modifiers (tail PP recursion). For both cases, the training set contains recursive depths of 0-2, where 0 indicates the absence of any PP or CP, and the generalization set contains the strictly greater depths 3-12. Footnote 3: Nested clauses with right-branch embeddings like _[Max knows that [Marx knows [that Emma cooks]\({}_{CP}\)]\({}_{CP}\)]\({}_{CP}\)_. By contrast, the SLOG training set includes recursion of depths 0-2 and 4, and the generalization set contains both the intermediate depth 3 and the greater depths 5-12. \begin{table} \begin{tabular}{l p{0.4\linewidth} p{0.4\linewidth}} \hline \hline & **Training** & **Generalization** \\ \hline COGS & Emma saw **the dog**. \(\leadsto\) \(*\)dog(\(x_{3}\)); see.agent(\(x_{1}\),Emma) \(\wedge\) see.theme(\(x_{1}\),\(x_{3}\)); The cat ran. \(\leadsto\) \(*\)cat(\(x_{1}\)); run.agent(\(x_{2}\),\(x_{1}\)) & **The dog** ran. \(\leadsto\) \(*\)dog(\(x_{1}\)); run.agent(\(x_{2}\),\(x_{1}\)) \\ \hline SLOG & Emma saw **the dog that Max held**. \(\leadsto\) \(*\)dog(\(x_{3}\)); see.agent(\(x_{1}\),Emma) \(\wedge\) see.theme(\(x_{1}\),\(x_{3}\)) \(\wedge\) dog.nmod(\(x_{3}\),\(x_{6}\)) \(\wedge\) hold.agent(\(x_{6}\),Max) \(\wedge\) hold.theme(\(x_{6}\),\(x_{3}\)); The cat ran. \(\leadsto\) \(*\)cat(\(x_{1}\)); run.agent(\(x_{2}\),\(x_{1}\)) & **The dog that Max saw** ran. \(\leadsto\) \(*\)dog(\(x_{1}\)); dog.nmod(\(x_{1}\),\(x_{4}\)) \(\wedge\) see.agent(\(x_{4}\),Max) \(\wedge\) see.theme(\(x_{4}\),\(x_{1}\)) \(\wedge\) run.agent(\(x_{5}\),\(x_{1}\)) \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of lexical generalization in COGS and structural generalization in SLOG with their corresponding COGS logical form (LF) representation. The task requires mapping (\(\leadsto\)) the English sentences to their LFs. Including both shallower and deeper embeddings in the generalization set allows us to determine if any difficulty in generalizing to an unseen embedding depth is a consequence of the model's more general difficulty in processing longer sequences than observed in training Lake and Baroni (2018); Herzig et al. (2021); Anil et al. (2022), rather than a more specific issue with applying recursion to generate novel constructions. In addition to this new depth split, SLOG introduces a new recursion construction. COGS involves only tail recursion, which features recursive PPs and CPs with right-branch embeddings. SLOG extends this with center embedding, where a phrase is embedded in the middle of another of the same type, leaving elements on both sides of the embedded component and producing well-parenthesized long-distance dependencies, as denoted by the subscript numbers: 1. Eva saw the mouse [that the cat\({}_{1}\) [that the dog\({}_{2}\) chased\({}_{2}\)] held\({}_{1}\)]. At the same recursion depths, the average LF length increases from PP recursion to tail CP recursion to center embedding. In natural language, recursion depth is rarely greater than five, and center embedding is generally limited to two levels Karlsson (2007, 2010). By contrast, SLOG includes recursion up to depth 12. While this may surpass human processing abilities for reasons presumed to be linked to memory constraints Gibson and Thomas (1999); Karlsson (2007), deeper embeddings remain grammatical, echoing Chomsky's competence versus performance distinction. Importantly, we also note that our goal with SLOG is to evaluate the linguistic competence of NLP models, whose goal is not to simulate human performance limitations. ### Novel Combination of Modified Phrases and Grammatical Roles SLOG also tests the capacity to interpret complex noun phrases (NPs) in new positions.
In addition to the PP modifiers included in COGS, we introduce relative clause modifiers. #### 2.2.1 Prepositional Phrase Modifiers In COGS, NPs modified by PPs are seen only as direct objects (2), and need to be interpreted as subjects during generalization (3). SLOG adds generalization cases targeting indirect object modification (4). 2. Noah saw [a cat on the table]\({}_{dobj}\). 3. [The cat on the mat]\({}_{subj}\) ran. 4. Emma gave [a cat on the mat]\({}_{iobj}\) a fish. We expect sub-cases of indirect object modification to pose challenges of varying difficulty, depending on the length of the predicate-argument dependency. In particular, generalization to indirect object modification in active oblique datives (4) introduces a dependency between the verb _gave_ and the direct object _a fish_ across the non-argument NP _the mat_.\({}^{4}\) In contrast, sub-cases like (5a) and (5b), where the non-argument NP occurs at the end of the sentence, do not include a dependency across an intervening NP; we therefore expect them to be relatively easier. Footnote 4: This observation also holds true for the generalization to subject modification shown in (3). 5. (a) Emma gave a fish to [a cat on the mat]\({}_{iobj}\). (b) A fish was given to [a cat on the mat]\({}_{iobj}\). SLOG's training set additionally includes standalone PP-modified NPs (e.g., the NP _the cat on the table_ on its own\({}^{5}\)) to prevent modifiers from being associated with only a particular range of token indices, as pointed out by Wu et al. (2023).\({}^{6}\) Such standalone NPs, which are common in child-directed speech Wells and Bridges (1981); Cameron-Faulkner et al. (2003) but were not a part of COGS, serve as a signal that the distribution of PP-modified NPs is not restricted to the object position. Footnote 5: the cat on the table \(\leadsto\) \(*\)cat(\(x_{1}\)); \(*\)table(\(x_{4}\)); cat.nmod.on(\(x_{1}\),\(x_{4}\)) Footnote 6: PPs in COGS were restricted to the object position, so models never observed the association of modifiers with linearly-earlier indices, which makes it difficult to isolate this effect from structural generalization. #### 2.2.2 Relative Clause Modifiers Similar to PP modifiers, NPs with relative clause (RC) modifiers, as in (6), can occupy any position that an unmodified NP can fill. We expect RC modifiers to pose a greater challenge compared to PP modifiers, as they involve _gap constructions_, in which a phrase needs to be interpreted in a position other than its canonical position in a declarative clause. We refer to this as _extraction_ Sag (2010), and we mark gap positions with an underscore. In (6), _the dog_ should be interpreted as if it occupies the gap position as the direct object of _held_; in the logical form, this is represented by the fact that \(x_{3}\) fills both see.theme and hold.theme. 6. Emma saw the dog that Max held __. \(\leadsto\) \(*\)dog(\(x_{3}\)); see.agent(\(x_{1}\), Emma) \(\wedge\) see.theme(\(x_{1}\), \(x_{3}\)) \(\wedge\) dog.nmod(\(x_{3}\),\(x_{6}\)) \(\wedge\) hold.agent(\(x_{6}\), Max) \(\wedge\) hold.theme(\(x_{6}\), \(x_{3}\)) Similar to the case of the PP modifiers (§2.2.1), the training set contains direct object NPs modified by RCs as well as standalone RC-modified NPs, as in (7). The generalization set contains RC modifiers for subject NPs, as in (8a), and indirect object NPs, as in (8b): 7. (a) Liam saw [the cat that Emma held __]\({}_{dobj}\). (b) the cat that Liam fed __ 8. (a) [The cat that Emma found __]\({}_{subj}\) smiled.
(b) Liam gave [a cat that Emma held __]\({}_{iobj}\) a fish. ### Novel Gap Positions The SLOG training set contains both subject and direct object extraction in RCs (9); these are the most frequent extraction positions in both written and spoken English corpora [11]. The generalization set includes extraction of indirect objects (10), a less frequent construction. 9. (a) Liam saw the boy that ate a cake. (b) Liam saw the boy that Emma loved __. 10. Liam saw the boy that Emma gave a cake to __. SLOG also tests for the interpretation of novel gap positions in _wh_-questions. As with RCs, the training set includes questions with either subject or direct object extraction (11), and the generalization set contains questions with indirect object extraction (12). 11. (a) Who did Emma love __? (b) Who ate a cake? 12. Who did Emma give a cake to __? In a _wh_-question (11a), a _wh_-filler (_who_) in the initial position of the clause is interpreted as if it occupied the gap (again indicated with an underscore) in the direct object position of _love_. ### Novel _Wh_-questions Next, we evaluate generalization to extraction cases that involve familiar gap positions--subject and direct object--paired with verb types that have never been observed in _wh_-questions during training. For this case, the training set contains _wh_-questions with simple transitive verbs (13) and declarative sentences with various verb types: transitive, intransitive and ditransitive. The generalization set includes five novel types of _wh_-questions that have not been observed during training, though their declarative counterparts have. The novel _wh_-questions have varying distances between the _wh_-filler and the gap. Subject _wh_-questions, which maintain the same word order as their declarative counterparts, exhibit no gap (14a, 14b). Questions about direct objects of ditransitive verbs (14c), as well as questions with NPs modified by either a PP or an RC (14d),\({}^{7}\) have moderately long filler-gap distances. The filler-gap distance is longest for object extraction out of embedded clauses (14e). Footnote 7: _Wh_-questions with PP- or RC-modified NPs include various constructions where modifiers appear in subjects, direct objects, or indirect objects, exhibiting an average filler-gap distance similar to ditransitive verb _wh_-questions. (The training set also includes the declarative counterparts of (14).) 13. (a) Who saw a cat? (b) What did Emma see __? 14. (a) Who froze? (b) What was frozen? (c) What did the boy give __ to Liam? (d) What did Max give a cat that slept __? (e) What did a boy say that Max believed that the cat saw __? ## 3 Dataset Generation Grammar. SLOG is generated from a probabilistic Synchronous Context-Free Grammar (SCFG) implemented in Alto [11]. This grammar simultaneously generates the English expressions and their corresponding meaning representations (see Appendix B for more details). Training and generalization sets. We follow a similar sampling procedure to COGS. A total of 10,607 sentences are sampled from the probabilistic SCFG and then split into training, in-domain validation and in-domain test sets with an 8:1:1 ratio. The splits are then merged with the corresponding COGS splits. We then add 100 standalone PP-modified NPs and 100 standalone RC-modified NPs to the training set, as discussed in Section 2.2. We also include what we refer to as primitive exposure examples for each ditransitive verb and each verb accepting CP arguments,\({}^{8}\) totaling 40 primitives.
These are standalone verb lexical meanings, such as _hope_ \(\leadsto\) \(\lambda\)a.\(\lambda\)b.\(\lambda\)e.hope.agent(e,b) \(\wedge\) hope.comp(e,a). This results in a final training set of 32,755 examples and 4,046 examples in both validation and in-distribution test sets. Footnote 8: Primitive examples of these two verb types let us incorporate their infinitive forms, used in _wh_-questions, into SLOG’s vocabulary. For the generalization set, we use separate grammars for each generalization case. We sample 1000 examples from each of the 17 cases, yielding a total of 17,000 examples. For the training set and the generalization set, the maximum lengths of the input English sentences are 28 and 61 tokens, respectively. The maximum lengths of the corresponding output logical forms are 229 and 599 tokens. See Appendix B for more details. ## 4 Experimental Setup Models. We evaluate the performance of seq2seq, autoregressive, and structure-aware models on SLOG. The seq2seq models we evaluate are a Transformer we train on SLOG from scratch (_vanilla Transformer_ henceforth; Vaswani et al., 2017), and a finetuned pretrained Transformer (T5; Raffel et al., 2020) that has demonstrated strong performance on multiple compositional generalization tasks (Herzig et al., 2021). The autoregressive Transformer model we evaluate is LLaMa (Touvron et al., 2023). Finally, the structure-aware model we evaluate is the AM-Parser (Groschwitz et al., 2018), which achieves near-perfect accuracy on COGS (Weissenhorn et al., 2022). Previous work has shown that structure-aware models perform well on compositional generalization tasks, specifically those involving structural generalization (Yao and Koller, 2022). Following Weissenhorn et al. (2022), we first have the AM-Parser predict an intermediate dependency tree, and then convert it to a graph-based representation of the SLOG logical form. We use the A* AM-parser from Lindemann et al. (2020) for our experiments, as it yields the best overall results compared to alternative versions of the AM-parser, such as the one in Groschwitz et al. (2018).\({}^{9}\) We run each experiment with five different random seeds. See Appendix A for more details. Footnote 9: For a detailed discussion, please refer to Appendix D. Evaluation metric. Most studies report exact match accuracy on COGS. This metric has two limitations that may lead to an underestimation of a model's generalization capacity. First, because the COGS LF is conjunctive, reorderings of the conjuncts are semantically equivalent; yet, under exact match accuracy, only a single order is considered correct. Second, the COGS LF uses Skolem constants with a naming scheme tied to the linear indices of phrasal heads in the input. While a commitment to a systematic naming scheme is necessary for consistent evaluation, different naming schemes up to the renaming of the constants in the gold LF yield equivalent LFs (e.g., (15a) vs. (15b)). Such LFs would be considered incorrect under exact match. To incorporate semantic equivalence up to conjunct reordering and constant renaming, at evaluation time, we alphabetically sort the conjuncts of the gold LFs, and subsequently index variables based on their appearance order in the sorted LFs. The same modifications are applied to the model outputs. This process results in the reformatted output shown in (16); applying these modifications to (15a) and (15b) yields the same outcome. Then, computing exact match on these postprocessed LFs captures the targeted semantic equivalence, as illustrated in (15)-(16) below.
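One plausible reading of this normalization can be sketched as a short post-processing routine in Python (with \(\wedge\) written as "AND"); this is illustrative only, and the authors' released evaluation script may differ in detail:

```python
import re

def normalize_lf(lf, conj=" AND "):
    """Rename constants by first appearance, then sort the conjuncts alphabetically."""
    mapping = {}
    def rename(match):
        var = match.group(0)
        if var not in mapping:
            mapping[var] = "y" + str(len(mapping) + 1)
        return mapping[var]
    renamed = [re.sub(r"x\d+", rename, c.strip()) for c in lf.split(conj)]
    return conj.join(sorted(renamed))

gold = "eat.theme(x4,?) AND eat.agent(x4, x3) AND baby(x3)"
out = "eat.agent(x3, x6) AND eat.theme(x3,?) AND baby(x6)"
# Both normalize to: baby(y2) AND eat.agent(y1, y2) AND eat.theme(y1,?)
assert normalize_lf(gold) == normalize_lf(out)
```

For the example pair below, this sketch reproduces the reformatted output shown in (16) for both LFs in (15), so exact match on the normalized forms treats them as equivalent.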
15. Gold LF and model-predicted LF for _What did the baby eat?_ (a) Gold: eat.theme(x4,?) \(\wedge\) eat.agent(x4, x3) \(\wedge\) baby(x3) (b) Out: eat.agent(x3, x6) \(\wedge\) eat.theme(x3,?) \(\wedge\) baby(x6) 16. Reordered and reindexed version: baby(y2) \(\wedge\) eat.agent(y1, y2) \(\wedge\) eat.theme(y1,?) This reformatted exact-match metric is used for all results reported in the main text; see Appendix C.1 and Table 5 for more details. ## 5 Results Overall, seq2seq Transformers, both trained from scratch and pretrained, display low accuracy on SLOG (Figure 2), in line with earlier studies on structural generalization in seq2seq models (Yao and Koller, 2022). This is also the case for the more recent autoregressive Transformer LLaMa, whose performance is similar to that of T5. As Figure 2 shows, high accuracy on the full COGS dataset, where 86% of the generalization cases are lexical, can obscure low performance on structural generalization, highlighting the need for the expanded structural generalization tests included in SLOG. SLOG additionally reveals weaknesses in the AM-Parser that COGS did not. While the AM-Parser achieves 90% accuracy on the structural generalization subset of COGS (Figure 2), it faces systematic difficulties with several generalization types introduced in SLOG (Figure 3). Performance varied substantially across generalization categories (Figure 3); in particular, all models achieved near-perfect accuracy on _Active subject wh-questions_ and _Shallower PP recursion_. These cases were the least structurally complex in their respective categories (§2.3 and §2.1). We highlight specific error types in the rest of this section; see Appendix C for full results and additional error analysis. ### Unobserved Depth and Length Both Affect Depth Generalization The maximum depth observed in training was four levels of embedding for all three recursive structures tested. All models achieve greater than 90% accuracy on unseen shallower PP recursion (three levels of embedding). Considerably lower performance is observed for seq2seq models with shallower tail CP recursion (<61%); in particular, the Transformer trained from scratch consistently fails to generalize to shallower center embedding, with zero accuracy overall. Transformer models show systematically lower performance on deeper recursion (5-12 levels of embedding), whereas the structure-aware model is robust to depth variation. We investigate the relation between length and depth generalization further by dividing the deeper depth generalization cases into examples that are shorter vs. longer than the maximum output length observed in training (229 output tokens). Results are shown in Table 3. All tested Transformer models are unable to generalize to examples longer than the maximum output length observed in training; Figure 3: Aggregate accuracy on SLOG by generalization category, with error bars denoting the standard deviation across generalization cases within each category over five model runs.
this result is consistent with the difficulty of length extrapolation observed in the literature (Hupkes et al., 2020; Anil et al., 2022). Length extrapolation does not capture the full story, however: the models' performance is limited even when the length of the generalization examples falls within the range of observed output lengths. This indicates that unobserved depth indeed plays a role in these models' poor generalization to deeper structures, in addition to known difficulties in length generalization. \begin{table} \begin{tabular}{l c c c c} \hline \hline & Vanilla & T5 & LLaMa & AM \\ & Transformer & & & parser \\ \hline _At or below max training output length_ & & & & \\ PP recursion & 29.3 & 37.0 & 46.0 & 100.0 \\ Tail CP recursion & 3.0 & 17.7 & 40.2 & 100.0 \\ Center embedding & 0.0 & 0.0 & 0.0 & 100.0 \\ \hline _Beyond max training output length_ & & & & \\ PP recursion & 0.0 & 0.0 & 0.0 & 100.0 \\ Tail CP recursion & 0.0 & 0.0 & 0.0 & 100.0 \\ Center embedding & 0.0 & 0.0 & 0.0 & 100.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Mean accuracy (%) on unseen deeper recursion cases, broken down by whether the expected output falls within or exceeds the range of training output lengths (maximum training output = 229 tokens). Figure 2: Accuracy on SLOG, with error bars indicating variations across five runs. We also show the best published results on COGS (indicated with \({}^{\dagger}\)), as reported in Yao and Koller (2022). ### Unobserved Long-distance Dependencies Make Generalization Difficult Generalizing to subject modification (both PP and RC) is one of the most challenging cases. Seq2seq models achieve near-zero accuracy, even with the additional cue from the standalone modified NPs that modification can appear outside of object positions. This challenge echoes previous findings on COGS (Akyurek and Andreas, 2021; Zheng and Lapata, 2022; Yao and Koller, 2022). The remainder of this section focuses on the analysis of PP modification cases, but similar patterns are observed for RC modifiers, which we discuss in Appendix C.3. Common error patterns across Transformer models reveal a bias towards shorter predicate-argument dependencies. For instance, in sentences like _A cat on the mat froze_, models often misinterpret the closer NP _the mat_ as the subject. A further breakdown of the modifier generalization performance by construction shows that examples involving a longer predicate-argument dependency (i.e., there is an intervening non-argument NP between the predicate and the argument) tend to be more difficult for all models (Table 4). However, the Transformer-based models show a stronger bias towards linearly adjacent predicate-argument structures. Further analysis (Appendix C.2) shows that seq2seq models additionally fall prey to inference patterns akin to a modification rule "attach PPs to NPs in immediate post-verb position", which is compatible with the training data but leads to incorrect generalization. ### Gap Generalizations Are Challenging for All Tested Models For gap generalization cases, all models display low accuracy and considerable variability across different runs, as shown in Figure 3. While Transformer models are biased towards more frequent subsequences of output LFs observed during training (see Appendix C.4), the structure-aware AM-Parser demonstrates different generalization difficulties. The AM-Parser systematically fails on every instance of _wh_-questions involving long movement (e.g. _What did Ava say that the cat saw __?_).
This issue arises from its internal prediction of dependency trees, which represent how meaning representations are compositionally constructed. For these _wh_-questions, the required dependency trees are nonprojective, since the edge from the embedded verb to the _wh_-pronoun crosses the matrix verb. However, the AM-Parser used in our study only supports projective dependency trees, leading to incorrect prediction of sentence structure.\({}^{10}\) This issue with projectivity can serve as a diagnostic for structural limitations of similar structure-aware parsers (Liu et al., 2022; Qiu et al., 2022). Footnote 10: Alternative versions of the AM-Parser that can handle non-projective trees exist and are discussed in Appendix D. Furthermore, on the indirect and direct object _wh_-questions, the AM-Parser performs very unpredictably, with accuracies ranging from 0 to 80 depending on the random seed. This is because, at the bottom of its compositional process, the AM-Parser predicts the lexical meaning for each token in the sentence (_supertag_). In these generalization types, the gold meaning representations in the test set require supertags that are infrequent in training. Thus, while the AM-Parser can compensate for the distribution shift of the meaning representations as a whole, SLOG exposes its weakness to distribution shifts in the lexical supertags. A more detailed discussion is provided in Appendix D. ## 6 Related Work Previous research has shown that recurrent neural network (RNN) architectures often struggle with learning complex long-range relations from simpler formal languages (Avcu et al., 2017; Mahalunkar and Kelleher, 2019). Our results on SLOG reveal that unseen long-distance predicate-argument dependencies pose considerable difficulty for Transformer-based models as well (§5.2). For filler-gap dependencies, prior work has centered on syntactic tasks involving _wh_-questions or relative clauses (Wilcox et al., 2018; Marvin and Linzen, 2018; Li et al., 2023; i.a.). These studies primarily use language modeling as the task and do not require mapping to semantic representations. SLOG incorporates both long-distance predicate-argument and filler-gap dependencies within a semantic parsing setup. Generalizing recursive patterns to deeper structures has been investigated in both artificial neural networks and humans using artificial languages Christiansen and Chater (1999); Lakretz et al. (2021); McCoy et al. (2021). Our findings underscore Transformer-based models' limitations with deeper recursive structures, corroborating the observations of Hupkes et al. (2020); Lakretz et al. (2021). In contrast, human studies have shown that humans can learn and extrapolate center-embedding patterns to greater depth in artificial languages Fitch and Hauser (2004); McCoy et al. (2021). Generalization cases in SLOG draw inspiration from the frequency gaps in natural language, where common patterns serve as a foundation for generalizing to rarer structures. This has connections to language acquisition in children, who have limited exposure to complex, less frequent structures, yet need to generalize to novel complex utterances by extrapolating from familiar linguistic elements Perfors et al. (2011); Tomasello and Olguin (1993); Atkinson et al. (2018). Human proficiency in such generalizations is attributed to inductive biases rooted in systematic compositional rules.
However, the Transformer-based models we tested, despite excelling in lexical generalization scenarios, face challenges when presented with unfamiliar linguistic structures requiring such rule induction, hinting at potentially different or inadequate underlying mechanisms. More broadly, how the compositional generalization cases proposed in this work can be connected to human language acquisition is an interesting area of future study. ## 7 Conclusions We introduce SLOG, a semantic parsing dataset that extends the COGS benchmark with a focus on structural generalization, which is often underrepresented in current benchmarks for compositional generalization. Using SLOG, we assess the structural generalization capacity of Transformer models (both pretrained and trained from scratch), as well as the AM-Parser, a structure-aware model. While all models achieve good overall accuracy on COGS (\(\geq\) 78%), their performance on SLOG is substantially lower, especially for Transformer models (\(\leq\) 41%). Furthermore, even the structure-aware AM-Parser, which achieved strong performance on all structural generalization cases of COGS, performs poorly on several of the newly introduced generalization types in SLOG. Our error analysis shows that all Transformer models tested struggle with interpreting unseen long-distance dependencies and deeper recursive constructions than observed in training. On the other hand, the AM-Parser, despite its stronger overall performance (71%), displays categorical failures on gap generalization due to its inherent parser design limitations. Overall, SLOG exposes the limitations of a range of models that have previously been claimed to achieve good compositional generalization, and can serve as a useful analytic tool for guiding future improvements. ### Limitations SLOG is a synthetic corpus and covers only a fraction of the diverse structures in English. Furthermore, previous research has demonstrated that the design of meaning representations (MR) can have a nontrivial effect on model performance in semantic parsing tasks Guo et al. (2019); Herzig et al. (2021); Qiu et al. (2022). For example, as noted by Wu et al. (2023), the variable indexing scheme may introduce additional semantically irrelevant challenges when assessing structural generalization. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Long pred-arg & Vanilla Transformer & T5 & LLaMa & AM parser \\ \hline Sub-case: Passive indirect objects & & & & & \\ **A fish was given** to [a cat on the mat]\({}_{iobj}\). & ✗ & 95.5 & 97.5 & 98.2 & 93.6 \\ Sub-case: Indirect object in PP datives & & & & & \\ Emma **gave a fish** to [a cat on the mat]\({}_{iobj}\). & ✗ & 22.9 & 50.5 & 75.5 & 100.0 \\ Sub-case: Indirect object in double object datives & & & & & \\ Emma gave [a cat on the mat]\({}_{iobj}\) a fish. & ✓ & 4.5 & 9.7 & 36.3 & 77.9 \\ Subject & & & & & \\ [A cat on a mat]\({}_{subj}\) ate a fish. & ✓ & 0.0 & 0.8 & 28.9 & 57.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of PP modification generalization broken down by construction. Bold orange words denote long predicate-argument dependencies, while bold black words indicate short ones.
SLOG's reformatted exact-match evaluation metric partially addresses this concern by taking into consideration several variations of MRs that are semantically equivalent, including MRs that are equivalent up to constant renaming. However, a more comprehensive study of the effect of artifacts from the formalism is left to future work. There also exist challenges specific to the evaluation of pretrained models. That is, distributional shift between training and generalization sets intended by SLOG, such as withholding the constructions _PPs modifying subject NPs_ from training, is difficult to strictly enforce when pretraining is involved (Kim et al., 2022). This potential violation of distributional control makes the interpretation of the obtained results difficult; we cannot disentangle whether generalization success in pretrained models derives from genuine compositional capabilities or simply exposure during pretraining to the target constructions meant to be withheld from the evaluated models. Still, corpus analyses such as Karlsson (2007) suggest that deep center embedding beyond three levels is very rare in naturally occurring data, so it is possible that very deep embedded structures are withheld as intended even from models exposed to large amounts of pretraining data. We hope the additional structural generalization cases that SLOG offers can also help with future work investigating the interaction between structures available in pretraining data and structural generalization. ## Acknowledgments We thank Zhengxuan Wu, Christopher Manning, Christopher Potts and all members of the NYU Computation and Psycholinguistics Lab for helpful discussion. This work was supported in part through the NYU IT HPC resources, services, and staff expertise, and was funded by Labex EFL ANR-10-LABX-0083, the laboratory LLF of Université Paris Cité, the DFG through project KO 2916/2-2, and the National Science Foundation (NSF) under Grants No. BCS-204122, BCS-2114505 and IIS-2239862.
2301.03163
VIPER: A Plasma Wave Detection Instrument onboard Indian Venus Orbiter Spacecraft
Plasma waves are observed in almost all the solar system objects. The planetary ionospheres are capable of sustaining plasma waves which are observed there and play an important role in the ionospheric dynamics. Venus does not possess a global magnetic field unlike Earth. The solar EUV radiation ionizes the neutrals and generates a plasma environment around Venus which can sustain plasma waves. Very few attempts are made to observe all plasma waves that can exist around Venus and that too with instruments having a limited dynamic range such as with Pioneer Venus Orbiter and Venus Express. However, there are some other plasma waves which can exist around Venus but are yet to be observed.
Vipin K Yadav
2023-01-09T04:21:09Z
http://arxiv.org/abs/2301.03163v1
# VIPER: A Plasma Wave Detection Instrument onboard Indian Venus Orbiter Spacecraft ###### Abstract Plasma waves are observed in almost all solar system objects - planets, their satellites, comets, the Sun, the interplanetary medium, etc. Planetary ionospheres are capable of sustaining plasma waves, which are observed there and play an important role in ionospheric dynamics by propagating energy across different space regions and by accelerating particles to the high energies needed for transport. The study of planetary plasma waves also provides information on the solar wind - planet interaction, the energy distribution in the ionospheric plasma of that planet, etc. Venus, unlike Earth, does not possess a global magnetic field. The solar EUV radiation ionizes the neutrals and generates a plasma environment around Venus which can sustain plasma waves. Very few attempts have been made to observe all the plasma waves that can exist around Venus, and those with instruments having a limited dynamic range, such as onboard PVO (Pioneer Venus Orbiter) and Venus Express. There are also other plasma waves which can exist around Venus but are yet to be observed. ISRO is planning to send an orbiter mission to Venus in the near future, carrying a suite of instruments named VIPER (Venus Ionospheric Plasma wavE detectoR) to observe Venusian plasma waves. The plasma wave observations around Venus and VIPER onboard ISRO's Venus Orbiter Spacecraft are discussed in this paper. ## 1 Introduction Venus is a terrestrial planet which does not possess a global magnetic field. The solar radiation (dominantly EUV) interacts with the dense Venus atmosphere and ionizes a large number of neutral atoms and molecules, generating the ionosphere. The magnetic field observed around Venus results from the solar wind interaction with the upper atmosphere of Venus. The induced magnetic field of Venus varies from \(\sim\) 50 to 150 nT. The first plasma wave detection around Venus was carried out by PVO in 1978 with a suite of instruments comprising a magnetometer [1, 2], an electric dipole [3] and an electron temperature probe [4]. The electric dipole had an effective length of 0.75 m, formed by a pair of electric field sensors having wire circles of diameter 10.5 cm. The electric field sensor had four frequency channels: 100 Hz, 730 Hz, 5.4 kHz and 30 kHz. The automatic gain control amplifiers had a rise time of 50 ms and a decay time of about 500 ms. The Orbiter Electron Temperature Probe (OETP) was designed to measure the plasma parameters in the Venus ionosphere - electron (n\({}_{e}\)) and ion (n\({}_{i}\)) plasma density, electron plasma temperature (T\({}_{e}\)) and the spacecraft potential (V\({}_{sc}\)). It consisted of two cylindrical sensors - one placed radially at the end of a 1 m long boom and the other placed axially at a distance of 0.4 m away from the spacecraft, with a common electronics unit [4]. Venus Express, launched by ESA in 2005, also carried two triaxial magnetometer sensors to measure the magnetic field in the Venus ionosphere, with one sensor mounted directly on the spacecraft and the other at the end of a 1 m boom. The magnetometer sensors have a dynamic range from \(\pm\) 32.8 to \(\pm\) 8388.6 nT with 128 Hz cadence [5]. Table 1 lists the different plasma wave detection instruments which have flown onboard various missions to Venus.
PVO measurements show that the electron plasma density varies from a maximum of \(5\times 10^{5}\) cm\({}^{-3}\) at 150 km to about \(10^{4}\) cm\({}^{-3}\) at 900 km, whereas the electron temperature varies between 1100 K at 150 km and 8000 K at about 900 km. The ion temperature is lower than the electron temperature at the same altitude, ranging from 500 K at 150 km to 1300 K at 900 km. The typical plasma and magnetic field parameters around Venus [6, 7] are summarized in Table 2. ## 2 Venus Plasma Wave Observations PVO was the first well-equipped spacecraft to observe plasma waves near Venus. Its electric field sensors observed a strong and highly variable solar wind interaction with the Venusian ionosphere. PVO observed electromagnetic whistler waves in the 100 Hz channel, lower hybrid waves in the 30 kHz range [8] and electron plasma oscillations in the 20-54 kHz frequency range [9]. Galileo observed electrostatic Langmuir waves during its flyby of Venus [10]. Venus Express detected proton cyclotron waves in the Venus ionosphere with a set of fluxgate magnetometers onboard [11, 12]. Mirror mode waves were also observed in the induced magnetosphere of Venus by Venus Express [13, 14]. Apart from the above mentioned waves, ion acoustic waves (IAW) with frequency f\({}_{\rm IAW}\approx 100\) Hz are proposed to be present in the Venus ionosphere as a consequence of the ion acoustic beam instability due to O\({}^{+}\) ions [15]. Magnetosonic waves having frequencies between 40-50 mHz (in the spacecraft frame) are observed by Venus Express [16]. A review of plasma waves around Venus is given elsewhere [17]. A summary of the various plasma waves observed in the ionosphere of Venus is given in Table 3. Since Langmuir, ion acoustic, whistler and proton cyclotron waves were detected by earlier Venus missions, it is prudent to attempt to observe these plasma waves again, provided the dynamic range of the measuring instrument makes it possible, so as to verify the earlier measurements and obtain additional scientific information where possible. The electron plasma frequency (f\({}_{\rm pe}\)), and hence the electron plasma (Langmuir) wave frequency (f\({}_{\rm EPW}\)), can be computed for the observed density range: for n\({}_{\rm e}\approx 10^{3}\) - \(5\times 10^{5}\) cm\({}^{-3}\), f\({}_{\rm pe}\) (min.) = 0.284 MHz and f\({}_{\rm pe}\) (max.) = 6.35 MHz. ## 3 VIPER: Science Objectives The proposed scientific instruments, in the form of a suite (VIPER), are: a customized Langmuir probe (LP) to measure the electron & ion number density and the electron plasma temperature, a triaxial electric field sensor (EFS) to measure the oscillating plasma wave electric field, a triaxial search-coil magnetometer (SCM) to measure the oscillating plasma wave magnetic field around Venus, and a triaxial fluxgate magnetometer (FGM) to measure the background magnetic field around Venus. All these sensors shall be mounted on two separate booms, keeping them away from the spacecraft to avoid sheath and shielding effects. The scientific objectives of VIPER are: 1. To study the plasma phenomena around Venus by exploring it in the prevailing localized regions. For this purpose, the electron and ion plasma parameters are going to be measured _in-situ_ with LP onboard the orbiter spacecraft around Venus. 2. To study the interplanetary and induced magnetic field structure around Venus.
For this purpose, the background magnetic field is going to be continuously measured _in-situ_ with FGM onboard the orbiter spacecraft around Venus. 3. To detect and observe the plasma waves around Venus and to explore their role in modulating the plasma dynamics around Venus. For this purpose, the plasma wave frequency is going to be measured with EFS and SCM. Table 6 lists the scientific objectives pertaining to the measurement parameters and sampling requirements. Here, \(\omega\) is the angular frequency (= 2\(\pi f\)) of the wave having frequency \(f\), and E\({}_{1}\) and B\({}_{1}\) are the time-varying wave electric and magnetic fields respectively. These plasma and field parameters are to be used to estimate the plasma characteristic frequencies, such as the electron/ion plasma frequency and the electron/ion cyclotron frequency, and the secondary plasma parameters given in Table 8. \begin{table} \begin{tabular}{|p{0.45\linewidth}|p{0.45\linewidth}|} \hline **Science Objectives** & **Measurement parameters** \\ \hline To explore the plasma environment around Venus and to study the plasma phenomena prevailing in localized regions. & Continuous measurement of plasma parameters such as electron and ion number density and electron and ion plasma temperature around Venus. \\ \hline Study of the interplanetary and induced magnetic field structure prevailing around Venus. & Continuous measurement of the background magnetic field around Venus. \\ \hline Characterization of Venusian plasma waves and their role in modulating the plasma dynamics. & Continuous detection and measurement of the plasma wave electric and magnetic fields. \\ \hline \end{tabular} \end{table} Table 6: Science objectives of VIPER and the corresponding measurement parameters. \begin{table} \begin{tabular}{|l|l|l|} \hline **Instrument / Scientific Aim** & **Plasma/Field Parameter \& Range** & **Measurements Required** \\ \hline Langmuir Probe (LP): plasma parameter measurements & n\({}_{\rm e}\) [10\({}^{2}\) - 10\({}^{6}\) cm\({}^{-3}\)]; T\({}_{\rm e}\) [0.1 - 1 eV]; n\({}_{\rm i}\) (bulk) [10\({}^{2}\) - 10\({}^{5}\) cm\({}^{-3}\)]; T\({}_{\rm i}\) (bulk) [0.06 - 0.4 eV] & Electron \& ion saturation currents, complete I-V characteristics \\ \hline Electric Field Sensor (EFS): oscillating plasma wave electric field & \(\omega\) (ES \& EM) [1 Hz - 55 kHz]; E\({}_{1}\) [mV/m] & Wave electric field at different frequencies \\ \hline Fluxgate Magnetometer (FGM): background magnetic field measurements & B\({}_{0}\) & B\({}_{x1}\), B\({}_{y1}\), B\({}_{z1}\) and B\({}_{x2}\), B\({}_{y2}\), B\({}_{z2}\) \\ \hline Search-coil Magnetometer (SCM): oscillating plasma wave magnetic field & \(\omega\) (EM only) [1 Hz - 55 kHz]; B\({}_{1}\) [1 - 300 nT] & Wave magnetic field at different frequencies \\ \hline \end{tabular} \end{table} Table 7: VIPER instruments, their target plasma/field parameters and the required measurements. \begin{table} \begin{tabular}{|l|l|l|} \hline **Plasma Parameter** & **Measured quantities / constants** & **Value** \\ \hline Electron cyclotron frequency, \(f_{ce}=eB_{0}/2\pi m_{e}\) & B\({}_{0}\), e, m\({}_{e}\) & \(2.8\times 10^{6}\) B\({}_{0}\) Hz \\ \hline Ion cyclotron frequency, \(f_{ci}=ZeB_{0}/2\pi m_{i}\) & B\({}_{0}\), m\({}_{i}\), m\({}_{p}\), Z; \(\mu=\) m\({}_{i}\)/m\({}_{p}\) & \(1.52\times 10^{3}\) Z \(\mu^{-1}\) B\({}_{0}\) Hz \\ \hline Electron plasma frequency, \(f_{pe}=(1/2\pi)\,(e^{2}n_{e}/m_{e}\varepsilon_{0})^{1/2}\) & n\({}_{e}\), e, m\({}_{e}\), \(\varepsilon_{0}\) & \(8.98\times 10^{3}\) n\({}_{e}^{1/2}\) Hz \\ \hline Ion plasma frequency, \(f_{pi}=(1/2\pi)\,(Z^{2}e^{2}n_{i}/m_{i}\varepsilon_{0})^{1/2}\) & Z, e, n\({}_{i}\), m\({}_{i}\), \(\varepsilon_{0}\) & \(2.1\times 10^{2}\) Z \(\mu^{-1/2}\) n\({}_{i}^{1/2}\) Hz \\ \hline Electron thermal velocity, \(v_{the}=(K_{B}T_{e}/m_{e})^{1/2}\) & K\({}_{B}\), T\({}_{e}\), m\({}_{e}\) & \(4.19\times 10^{5}\) T\({}_{e}^{1/2}\) m/sec \\ \hline Ion sound velocity, \(v_{s}=(\gamma ZK_{B}T_{e}/m_{i})^{1/2}\) & K\({}_{B}\), T\({}_{e}\), m\({}_{i}\), Z, \(\gamma=c_{p}/c_{v}\), \(\mu\) & \(9.79\times 10^{3}\) (\(\gamma\) Z T\({}_{e}\)/\(\mu\))\({}^{1/2}\) m/sec \\ \hline Ion thermal velocity, \(v_{thi}=(K_{B}T_{i}/m_{i})^{1/2}\) & K\({}_{B}\), T\({}_{i}\), m\({}_{i}\), \(\mu\) & \(9.79\times 10^{3}\) \(\mu^{-1/2}\) T\({}_{i}^{1/2}\) m/sec \\ \hline Debye length, \(\lambda_{D}=(\varepsilon_{0}K_{B}T_{e}/e^{2}n_{e})^{1/2}\) & K\({}_{B}\), T\({}_{e}\), n\({}_{e}\), e, \(\varepsilon_{0}\) & 7.43 (T\({}_{e}\)/n\({}_{e}\))\({}^{1/2}\) m \\ \hline Alfvén velocity, \(v_{A}=B_{0}/(\mu_{0}\rho)^{1/2}\), \(\rho=\) n\({}_{i}\)m\({}_{i}\) & B\({}_{0}\), \(\mu_{0}\), \(\rho\), \(\mu\) & \(1.28\times 10^{9}\) \(\mu^{-1/2}\) n\({}_{i}^{-1/2}\) B\({}_{0}\) m/sec \\ \hline \end{tabular} \end{table} Table 8: Plasma characteristic parameters to be estimated from the measured quantities. Here, B\({}_{0}\) is the background magnetic field, e is the electronic charge, m\({}_{e}\) is the electron mass, Z is the atomic number, m\({}_{i}\) is the ion mass, m\({}_{p}\) is the proton mass, c is the velocity of light in free space, \(\varepsilon_{0}\) is the permittivity of free space, K\({}_{B}\) is the Boltzmann constant, T\({}_{e}\) is the electron temperature, \(\gamma\) is the ratio of specific heats, \(\mu_{0}\) is the permeability of free space, \(\rho\) is the ion mass density, and n\({}_{i}\) is the ion number density. The estimated plasma parameters and the measured magnetic field, along with the frequency of the oscillating electric and magnetic fields, shall be fitted to the dispersion relations of the various plasma waves that can exist around Venus, and the outcome shall be analysed for proper identification.
For example, the dispersion relation of an electron plasma (or Langmuir) wave is \(\omega^{2}=\omega_{pe}^{2}+3k^{2}v_{the}^{2}\), with \(v_{the}\) as defined in Table 8. Here, the variables are \(\omega\) (measured from EFS), \(\omega_{pe}\) and \(v_{the}\) (estimated from n\({}_{e}\) and T\({}_{e}\) measured by LP), and the wavenumber \(k\). For a given small range of \(k\), the dispersion relation shall be satisfied and the existence of the wave shall be established.
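As a quick numerical check of the f\({}_{\rm pe}\) range quoted in Section 2, the expression \(f_{pe}\approx 8.98\times 10^{3}\sqrt{n_{e}}\) Hz from Table 8 (n\({}_{e}\) in cm\({}^{-3}\)) reproduces the 0.284-6.35 MHz bounds; a short sketch:

```python
import math

def f_pe_hz(n_e_cm3):
    """Electron plasma frequency in Hz for n_e in cm^-3: f_pe = 8.98e3 * sqrt(n_e)."""
    return 8.98e3 * math.sqrt(n_e_cm3)

for n_e in (1e3, 5e5):
    print(f"n_e = {n_e:.0e} cm^-3 -> f_pe = {f_pe_hz(n_e) / 1e6:.3f} MHz")
# n_e = 1e+03 cm^-3 -> f_pe = 0.284 MHz
# n_e = 5e+05 cm^-3 -> f_pe = 6.350 MHz
```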
2310.05253
Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models
Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. Our code and data are available.
Haoran Wang, Kai Shu
2023-10-08T18:04:05Z
http://arxiv.org/abs/2310.05253v2
# Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models ###### Abstract Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents _First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning_ that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a subclaim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. Our code and data are available. 1 Footnote 1: [https://github.com/wang2226/FOLK](https://github.com/wang2226/FOLK) ## 1 Introduction Claim verification Guo et al. (2022) has become increasingly important due to widespread online misinformation Tian et al. (2023); Jin et al. (2023). Most of the existing claim verification models Zhou et al. (2019); Jin et al. (2022); Yang et al. (2022); Wadden et al. (2022); Liu et al. (2020); Zhong et al. (2020) use an automated pipeline that consists of claim detection, evidence retrieval, verdict prediction, and justification production. Despite some early promising results, they rely on the availability of large-scale human-annotated datasets, which pose challenges due to labor-intensive annotation efforts and the need for annotators with specialized domain knowledge. To address the issue of creating large-scale datasets, recent works Pan et al. (2021); Wright et al. (2022); Lee et al. (2021) focus on claim verification in zero-shot and few-shot scenarios. However, these methods follow the traditional claim verification pipeline, requiring both claims and annotated evidence for veracity prediction.

\begin{table} \begin{tabular}{|p{227.6pt}|} \hline **Claim:** Labublo Konflo won a silver medal in the 2012 SportAccord World Mind Games inaugurated in July 2011 in Beijing. \\ \hline **Label:** \(\langle NOT\_SUPPORTED\rangle\) \\ \hline **Predicates:** \\ Won(Labublo Konflo, a silver medal) :: Verify Labublo Konflo won a silver medal. \\ Inaugurated(the 2012 SportAccord World Mind Games, July 2011 in Beijing) :: Verify the 2012 SportAccord World Mind Games was inaugurated in July 2011 in Beijing. \\ \hline **Follow-up Question:** What did Labublo Konflo win in the 2012 SportAccord World Mind Games? \\ **Follow-up Question:** When and where was the 2012 SportAccord World Mind Games inaugurated? \\ **Grounded Answer:** The International Mind Sports Association (IMSA) inaugurated the SportAccord World Mind Games in December 2011 in Beijing. \\ \hline **Prediction:** \\ Won(Labublo Konflo, a silver medal) is True because in 2012 he won the silver medal at the SportAccord World Mind Games in Beijing, China. \\ Inaugurated(the 2012 SportAccord World Mind Games, July 2011 in Beijing) is False because the International Mind Sports Association (IMSA) inaugurated the SportAccord World Mind Games in December 2011 in Beijing. \\ Won(Labublo Konflo, a silver medal) \(\wedge\) Inaugurated(the 2012 SportAccord World Mind Games, July 2011 in Beijing) is False. \\ The claim is \(\langle NOT\_SUPPORTED\rangle\). \\ \hline **Explanation:** Labublo Konflo won a silver medal in the 2012 SportAccord World Mind Games. However, the event was inaugurated in December 2011, not July 2011, in Beijing. \\ \hline \end{tabular} \end{table} Table 1: An example from FOLK with GPT-3.5 on HoVER, a multi-hop claim verification dataset. We first use an LLM to translate the claim into a First-Order-Logic clause (highlighted in orange), consisting of two predicates (highlighted in blue and purple). The LLM then performs knowledge-grounded reasoning to predict the label and generate an explanation.

Additionally, these models often lack proper justifications for their predictions, which are important for human fact-checkers to make the final verdicts. Therefore, we ask the following question: _Can we develop a model capable of performing claim verification without relying on annotated evidence, while generating natural language justifications for its decision-making process?_ To this end, we propose a novel framework First-Order-Logic-Guided Knowledge-Grounded (FOLK) to perform explainable claim verification by leveraging the reasoning capabilities of Large Language Models (LLMs) Brown et al. (2020); Touvron et al. (2023); Zhang et al. (2022); Chowdhery et al. (2022). Table 1 illustrates a real-world example from FOLK where it can provide a veracity prediction based on logical reasoning and generate an explanation of its decision-making process in a short paragraph. To ensure accurate prediction and provide high-quality explanations, FOLK first translates the input claim into a First-Order-Logic (FOL) Enderton (2001) clause consisting of a set of conjunctive predicates. Each predicate represents a part of the claim that needs to be verified. Next, the generated FOL predicates guide the LLMs to generate a set of questions and corresponding answers. Although the generated answers may appear coherent and plausible, they often lack factual accuracy due to the LLM's hallucination problem Ji et al. (2023); Ouyang et al. (2022). To address this problem, FOLK controls the knowledge source of the LLMs by grounding the generated answers in real-world truth via retrieving accurate information from trustworthy external knowledge sources (e.g. Google or Wikipedia). Finally, FOLK leverages the reasoning ability of LLMs to evaluate the boolean value of the FOL clause and make a veracity prediction. Given the high stakes involved in claim verification, FOLK prompts the LLMs to generate justifications for their decision-making process in natural language. These justifications are intended to aid human fact-checkers in making the final verdict, enhancing the transparency and interpretability of the model's predictions.
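To make the verdict logic of this example concrete, the following minimal sketch (ours, for illustration only; the predicate strings and truth values mirror Table 1, with the grounded verdicts hard-coded in place of the LLM's knowledge-grounded judgments) evaluates the claim as a conjunction of predicate verdicts:

```python
# Illustrative sketch of FOLK's verdict logic on the Table 1 example.
# The boolean verdicts below stand in for the LLM's judgments against
# knowledge-grounded answers; they are not produced by this snippet.

predicate_verdicts = {
    "Won(Labublo Konflo, a silver medal)": True,   # grounded: won silver in 2012
    "Inaugurated(2012 SportAccord World Mind Games, July 2011 in Beijing)": False,
    # grounded: the event was inaugurated in December 2011
}

# A claim C = p1 AND p2 AND ... AND pn is SUPPORTED only if every predicate holds.
label = "SUPPORTED" if all(predicate_verdicts.values()) else "NOT_SUPPORTED"
print(label)  # -> NOT_SUPPORTED
```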
We evaluate our proposed methods on three fact-checking datasets Jiang et al. (2020); Aly et al. (2021); Wadden et al. (2022) with the following distinct challenges: multi-hop reasoning, numerical reasoning, combining text and table for reasoning, and open-domain scientific claim verification. Our experiment results demonstrate that FOLK can verify complex claims while generating explanations to justify its decision-making process. Additionally, we show the effectiveness of FOL-guided claim decomposition and knowledge-grounded reasoning for claim verification. In summary, our contributions are: * We introduce a new method to verify claims without the need for annotated evidence. * We demonstrate the importance of using symbolic language to help claim decomposition and provide knowledge-grounding for LLMs to perform reasoning. * We show that FOLK can generate high-quality explanations to assist human fact-checkers. ## 2 Background **Claim Verification.** The task of claim verification aims to predict the veracity of a claim by retrieving related evidence documents, selecting the most salient evidence sentences, and classifying the claim as _SUPPORTS_ or _REFUTES_. Claim verification falls into the broader task of _Fact-checking_, which includes all the steps described in claim verification with the addition of claim detection, a step to determine the check-worthiness of a claim. While steady progress has been made in this field, recent research focus has shifted to 1) dealing with insufficient evidence Atanasova et al. (2022) and 2) using explainable fact-checking models to support decision-making Kotonya and Toni (2020). In the line of explainable fact-checking, Popat et al. (2018) and Shu et al. (2019) use visualization of neural attention weights as explanations. Although attention-based explanation can provide insights into the deep learning model's decision process, it does not generate human-readable explanations and cannot be interpreted without prior machine learning knowledge. Atanasova et al. (2020); Kotonya and Toni (2020) formulate the task of generating explanations as extractive summarization of the ruling comments provided by professional fact-checkers. While their work can generate high-quality explanations based on training data from professional fact-checkers, annotating such datasets is expensive and not feasible at a large scale. Our work explores using reasoning steps as explanations of the model's decision-making process while generating explanations in natural language. **Large Language Models for Reasoning.** Large language models have demonstrated strong reasoning abilities through chain-of-thought (CoT) prompting, wherein the LLM is prompted to generate its answer following a step-by-step explanation by using just a few examples as prompts. Recent works have shown that CoT prompting can improve performance on reasoning-heavy tasks such as multi-hop question answering, multi-step computation, and common sense reasoning Nye et al. (2021); Zhou et al. (2022); Kojima et al. (2022). Verifying complex claims often requires multi-step (multi-hop) reasoning Mavi et al. (2022), which involves combining information from multiple pieces of evidence to predict the veracity of a claim. Multi-step reasoning can be categorized into forward-reasoning and backward-reasoning Yu et al. (2023). Forward-reasoning Creswell et al. (2022); Sanyal et al. (2022); Wei et al.
(2022) employs a bottom-up approach that starts with existing knowledge and obtains new knowledge with inference until the goal is met. Backward-reasoning Min et al. (2019); Press et al. (2022), on the other hand, is goal-driven: it starts from the goal and breaks it down into sub-goals until all of them are solved. Compared to forward reasoning, backward reasoning is more efficient, as its divide-and-conquer search scheme can effectively reduce the problem search space. We propose FOLK, a FOL-guided backward reasoning method for claim verification. Despite the recent progress in using LLMs for reasoning tasks, their capability in verifying claims has not been extensively explored. Yao et al. (2022) evaluate using LLMs to generate reasoning traces and task-specific actions on fact verification tasks. Their reasoning and action steps are more complex than simple CoT and rely on prompting much larger models (PaLM-540B). Additionally, they test their model's performance on the FEVER dataset Thorne et al. (2018), which lacks many-hop relations and specialized domain claims. In contrast to their approach, our proposed method demonstrates effectiveness on significantly smaller LLMs without requiring any training, and we test our method on scientific claims. Contemporaneous to our work, Peng et al. (2023) propose a set of plug-and-play modules that augment LLMs to improve the factuality of LLM-generated responses for task-oriented dialogue and question answering. In contrast to their approach, our primary focus is on providing LLMs with knowledge-grounded facts to enable FOL-Guided reasoning for claim verification, rather than solely concentrating on enhancing the factual accuracy of LLMs' responses. ProgramFC Pan et al. (2023) leverages LLMs to generate computer-program-like functions to guide the reasoning process. In contrast to their approach, which only uses LLMs for claim decomposition, we use LLMs for both claim decomposition and veracity prediction. By using LLMs for veracity prediction, we can not only obtain a comprehensive understanding of LLMs' decision process but also generate explanations for their predictions. Furthermore, ProgramFC is limited to the closed-domain setting, as it needs to first retrieve evidence from a large textual corpus like Wikipedia. FOLK, on the other hand, can perform open-domain claim verification since it does not require a pre-defined evidence source. ## 3 Method Our objective is to predict the veracity of a claim \(\mathcal{C}\) without the need for annotated evidence while generating explanations to elucidate the decision-making process of LLMs. As shown in Figure 1, our framework contains three stages. In the _FOL-Guided Claim Decomposition_ stage, we first translate the input claim into a FOL clause \(\mathcal{P}\), then we use \(\mathcal{P}\) to guide the LLM to generate a set of intermediate question-answer pairs \((q_{i},a_{i})\). Each intermediate question \(q_{i}\) represents a specific reasoning step required to verify the claim. In the _Knowledge-Grounding_ stage, each \(a_{i}\) represents the answer generated by LLMs that has been verified against ground truth obtained from an external knowledge source. Finally, in the _Veracity Prediction and Explanation Generation_ stage, we employ \(\mathcal{P}\) to guide the reasoning process of LLMs over the knowledge-grounded question-and-answer pairs. This allows us to make veracity predictions and generate justifications for the underlying reasoning process.
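A minimal Python skeleton of these three stages might look as follows (our sketch, not the authors' released code; the function names, signatures, and the `Predicate` container are hypothetical placeholders):

```python
# Hypothetical skeleton of the three FOLK stages; illustrative only.
from dataclasses import dataclass

@dataclass
class Predicate:
    text: str         # e.g. "Won(X, a silver medal)"
    question: str     # intermediate question q_i generated by the LLM
    answer: str = ""  # knowledge-grounded answer a_i, filled in at stage (ii)

def decompose(claim: str, llm) -> list[Predicate]:
    """Stage (i): few-shot prompt the LLM to translate the claim into an FOL
    clause of predicates, each paired with an intermediate question."""
    ...

def ground(predicates: list[Predicate], retriever) -> list[Predicate]:
    """Stage (ii): overwrite each LLM-generated answer with the top result
    retrieved from an external knowledge source."""
    ...

def predict(claim: str, predicates: list[Predicate], llm) -> tuple[str, str]:
    """Stage (iii): ask the LLM to judge every predicate against its grounded
    answer, conjoin the verdicts, and return (label, explanation)."""
    ...
```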
### FOL-Guided Claim Decomposition Although LLMs have displayed decent performance in natural language reasoning tasks, they fall short when asked to directly solve complex reasoning problems. This limitation arises from the lack of systematic generalization capabilities in language models Valmeekam et al. (2022); Elazar et al. (2021). Recent works have discovered that LLMs are capable of understanding and converting textual input into symbolic languages, such as formal language Kim (2021), mathematical equations He-Yueya et al. (2023), or Python code Gao et al. (2022). Inspired by these recent works, we harness the ability of LLMs to translate textual claims into FOL clauses. This allows us to guide LLMs in breaking down claims into various sub-claims. At this stage, given the input claim \(\mathcal{C}\), the LLM first generates a set of predicates \(\mathcal{P}=[p_{1},...,p_{n}]\) that correspond to the sub-claims \([c_{1},...,c_{n}]\). Each _predicate_ \(p_{i}\in\mathcal{P}\) is a First-Order Logic (FOL) predicate that guides the LLM to generate a question-and-answer pair representing sub-claim \(c_{i}\). The claim \(\mathcal{C}\) can be represented as a conjunction of the predicates, \(\mathcal{C}=p_{1}\wedge p_{2}\wedge...\wedge p_{n}\). To classify the claim \(\mathcal{C}\) as SUPPORTED, all predicates must evaluate to True. If any of the predicates are False, the claim is classified as REFUTED. By providing the LLMs with symbolic languages such as predicates, alongside a few in-context examples, we observe that LLMs can effectively identify the crucial entities, relations, and facts within the claim. Consequently, LLMs are capable of generating relevant question-and-answer pairs that align with the identified elements. ### Retrieve Knowledge-Grounded Answers Although LLMs exhibit the ability to generate coherent and well-written text, it is worth noting that they can sometimes hallucinate Ji et al. (2023) and produce text that fails to be grounded in real-world truth. To provide knowledge-grounded answers for the generated intermediate questions, we employ a retriever based on Google Search, via the SerpAPI 2 service. Specifically, we return the top-1 search result returned by Google. While it is important to acknowledge that Google search results may occasionally include inaccurate information, Google generally serves as a more reliable source of knowledge compared to the internal knowledge of LLMs. Additionally, in real-world scenarios, when human fact-checkers come across unfamiliar information, they often rely on Google for assistance. Therefore, we consider the answers provided by Google search as knowledge-grounded answers. Footnote 2: [https://serpapi.com/](https://serpapi.com/)

Figure 1: Overview of our _FOLK_ framework, which consists of three steps: (i) _FOLK_ translates the input claim into a FOL clause and uses it to guide LLMs to generate a set of question-and-answer pairs; (ii) _FOLK_ then retrieves knowledge-grounded answers from an external knowledge source; and (iii) _FOLK_ performs FOL-Guided reasoning over the knowledge-grounded answers to make veracity predictions and generate explanations. _FOLK_ can perform a variety of reasoning tasks for claim verification, such as multi-hop reasoning, numerical reasoning, and open-domain scientific claim verification.
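The top-1 retrieval described in Section 3.2 can be sketched as follows, assuming the SerpAPI Python client (`pip install google-search-results`); the paper names the service but not its retrieval code, so the helper below is an illustrative sketch:

```python
# Illustrative knowledge-grounding helper built on SerpAPI (Section 3.2).
from serpapi import GoogleSearch

def grounded_answer(question: str, api_key: str, site: str = "") -> str:
    """Return the snippet of the top-1 Google result for a follow-up question.
    Prefixing a site such as 'en.wikipedia.org' restricts the search, as in
    the Wikipedia-only ablation reported later in Table 3."""
    query = f"{site} {question}".strip()
    results = GoogleSearch({"q": query, "api_key": api_key}).get_dict()
    organic = results.get("organic_results", [])
    return organic[0].get("snippet", "") if organic else ""
```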
### Veracity Prediction and Explanation Generation At this stage, the LLM is asked to make a verdict prediction \(\mathcal{V}\in\{\texttt{SUPPORT},\texttt{REFUTE}\}\) and provide an explanation \(\mathcal{E}\) to justify its decision. **Veracity Prediction** Given the input claim \(\mathcal{C}\), the predicates \([p_{1},...,p_{n}]\), and knowledge-grounded question-and-answer pairs, FOLK first checks the veracity of each predicate against the corresponding knowledge-grounded answers while giving reasons behind its predictions. Once all predicates have been evaluated, FOLK makes a final veracity prediction for the entire clause. In contrast to solely providing LLMs with generated questions and their corresponding grounded answers, we found that the inclusion of predicates assists LLMs in identifying the specific components that require verification, allowing them to offer more targeted explanations. **Explanation Generation** We leverage LLMs' capability to generate coherent language and prompt LLMs to generate a paragraph of human-readable explanation. We evaluate the explanations generated by LLMs with manual evaluation. Furthermore, since claim verification is a high-stakes task, it should involve human fact-checkers in making the final decision. Therefore, we provide URL links to the relevant facts, allowing human fact-checkers to reference and validate the information. ## 4 Experiments We compare FOLK to existing methods on 7 claim verification challenges from three datasets. Our experiment setting is described in Sections 4.1 & 4.2 and we discuss our main results in Section 4.4. ### Datasets We experiment with the challenging datasets listed below. Following existing works (Yoran et al., 2023; Kazemi et al., 2022; Trivedi et al., 2022), to limit the overall experiment costs, we use stratified sampling to select 100 examples from each dataset to ensure a balanced label distribution. **HoVER**(Jiang et al., 2020) is a multi-hop fact verification dataset created to challenge models to verify complex claims against multiple information sources, or "hops". We use the validation set for evaluation since the test sets are not released publicly. We divide the claims in the validation set based on the number of hops: two-hop claims, three-hop claims, and four-hop claims. **FEVEROUS**(Aly et al., 2021) is a benchmark dataset for complex claim verification over structured and unstructured data. Each claim is annotated with evidence from sentences and tables in Wikipedia. We selected claims in the validation set with the following challenges to test the effectiveness of our framework: numerical reasoning, multi-hop reasoning, and combining tables and text. **SciFact-Open**(Wadden et al., 2022) is a test dataset for scientific claim verification. This dataset aims to test existing models' claim verification performance in an open-domain setting. Since the claims in SciFact-Open do not have a global label, we select claims with complete evidence that either supports or refutes the claim and utilize them as the global label. This dataset tests our model's performance on specialized domains that require domain knowledge to verify. ### Baselines We compare our proposed method against the following four baselines. **Direct** This baseline simulates using LLMs as standalone fact-checkers. We directly ask LLMs to give us veracity predictions and explanations given an input claim, relying solely on LLMs' internal knowledge. It is important to note that we have no control over the LLM's knowledge source, and it is possible that LLMs may hallucinate.
**Chain-of-Thought**(Wei et al., 2022) is a popular approach that demonstrates chains of inference to LLMs within an in-context prompt. We decompose the claims by asking LLMs to generate the necessary questions needed to verify the claim. We then prompt LLMs to verify the claims step-by-step given the claims and knowledge-grounded answers. **Self-Ask**(Press et al., 2022) is a structured prompting approach, where the prompt asks LLMs to decompose complex questions into easier sub-questions that it answers before answering the main question. It is shown to improve the performance of Chain-of-Thought on multi-hop question-answering tasks. We use the same decomposition and knowledge-grounding processes as in CoT. For veracity prediction, we provide both questions and knowledge-grounded answers to LLMs to reason over, instead of just the knowledge-grounded answers. **ProgramFC**(Pan et al., 2023) is a recently proposed baseline for verifying complex claims using LLMs. It contains three settings for the knowledge source: gold-evidence, open-book, and closed-book. To ensure that ProgramFC has the same problem setting as _FOLK_, we use the open-book setting for ProgramFC. Since we only use one reasoning chain, we select N=1 for ProgramFC. Since ProgramFC cannot perform open-domain claim verification, we exclude it from the SciFact-Open dataset. ### Experiment Settings The baselines and FOLK use GPT-3.5, _text-davinci-003_ (175B), as the underlying LLM. We use _SerpAPI_ as our retrieval engine to obtain knowledge-grounded answers. In addition to the results in Table 2, we perform experiments on smaller LLMs (Touvron et al., 2023): llama-7B, llama-13B, and llama-30B. The results are presented in Figure 2. Our prompts are included in Appendix B. The number of prompts used varies between 4 and 6 across the datasets. These prompts are based on random examples from the train and development sets. ### Main Results We report the overall results for FOLK compared to the baselines for claim verification in Table 2. FOLK achieves the best performance on 6 out of 7 evaluation tasks, demonstrating its effectiveness on various reasoning tasks for claim verification. Based on the experiment results, we have the following major observations: **FOLK is more effective on complex claims.** On the HoVER dataset, FOLK outperforms the baselines by 7.37% and 7.94% on three-hop and four-hop claims respectively. This suggests that FOLK becomes more effective on more complex claims as the required reasoning depth increases. Among the baselines, ProgramFC has comparable performance on three-hop claims, which indicates the effectiveness of using symbolic language, such as a programming-like language, to guide LLMs in claim decomposition for complex claims. However, programming-like language is less effective as claims become more complex. While ProgramFC has a performance increase of 3.68% from three-hop to four-hop claims in HoVER, FOLK has a larger performance increase of 10.13%, suggesting that FOL-guided claim decomposition is more effective on more complex claims. On the FEVEROUS dataset, FOLK outperforms the baselines by 7.52%, 9.57%, and 2.69% on all three tasks respectively. This indicates that FOLK can perform well not only on multi-hop reasoning tasks but also on numerical reasoning and reasoning over text and tables. **FOL-guided Reasoning is more effective than CoT-like Reasoning.** Our FOLK model, which uses an FOL-guided decomposition reasoning approach, outperforms the CoT and Self-Ask baselines on all three datasets. On average, there is an 11.30% improvement.
This suggests that FOL-like predicates help LLMs to better decompose claims, resulting in more accurate reasoning. This is particularly evident when the claims become more complex: there is a 12.13% improvement in the three-hop and a 16.6% improvement in the four-hop setting. **Knowledge-grounding is more reliable than LLMs' internal knowledge.** FOLK exhibits superior performance compared to the Direct baseline across all three datasets. This observation indicates the critical role of knowledge-grounding in claim verification, as Direct solely relies on the internal knowledge of LLMs. It is also important to note that the lack of control over the knowledge source in Direct can lead to hallucinations, where LLMs make accurate predictions but for incorrect reasons. For instance, when faced with a claim labeled as SUPPORT, LLMs may correctly predict the outcome despite certain predicates being false. \begin{table} \begin{tabular}{c|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{**HoVER**} & \multicolumn{3}{c|}{**FEVEROUS**} & \multicolumn{1}{c}{**SciFact-Open**} \\ \cline{2-7} & **2-Hop** & **3-Hop** & **4-Hop** & **Numerical** & **Multi-hop** & **Text and Table** & \\ \hline Direct & 57.11 & 44.95 & 55.91 & 48.52 & 50.18 & 59.07 & 49.70 \\ CoT & 53.98 & 46.57 & 47.99 & 49.56 & 60.90 & 61.76 & 63.39 \\ Self-Ask & 54.23 & 48.87 & 51.76 & 55.33 & 61.16 & 54.23 & 60.94 \\ ProgramFC & 71.00 & 51.04 & 52.92 & 54.78 & 59.84 & 51.69 & - \\ FOLK & 66.26 & 54.80 & 60.35 & 59.49 & 67.01 & 63.42 & 67.59 \\ \hline \hline \end{tabular} \end{table} Table 2: Macro F-1 score of Direct, Chain-of-Thought (CoT), Self-Ask, ProgramFC, and our method _FOLK_ on three challenging claim verification datasets. The best results within each dataset are highlighted. \begin{table} \begin{tabular}{c|c c c} \hline \hline & **2-hop** & **3-hop** & **4-hop** \\ \hline en.wikipedia.org & **66.26** & **54.80** & **60.35** \\ google.com & 62.60 & 50.88 & 54.66 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on knowledge source. ### The Impacts of FOL-Guided Reasoning To gain more insight into prompting with FOL predicates, we perform an ablation study on the HoVER dataset. The goal is to see whether the performance difference in Table 2 primarily results from FOLK generating better follow-up questions or if the predicates also play a role in constructing the veracity prediction. Specifically, we maintain the CoT prompt format but input knowledge-grounded answers from FOLK. As for Self-Ask, we maintain the Self-Ask prompt format while incorporating follow-up questions generated by FOLK along with their associated knowledge-grounded answers. This guarantees that both CoT and Self-Ask retain their reasoning capabilities while employing identical factual information as provided by FOLK. The results, presented in Table 4, show that FOLK consistently outperforms CoT and Self-Ask in all three tasks. This highlights that the FOL-guided reasoning process effectively enhances the ability of language models to integrate knowledge in multi-hop reasoning scenarios. ### The Impacts of Knowledge-Grounding To better understand the role of knowledge-grounding in the LLM's decision process, we perform an ablation study on four multi-hop reasoning tasks. We use the FOLK prompt to generate predicates and decompose the claim; we then compare its performance under two settings. In the first setting, we let the LLM reason over the answers it generated itself. In the second setting, we provide the LLM with knowledge-grounded answers.
The results are shown in Figure 3; as we can see, FOLK performs better with knowledge-grounded answers. This suggests that by providing knowledge-grounded answers, we can improve the LLM's reasoning performance and alleviate the hallucination problem by providing it with facts. Next, we investigate whether the knowledge source can affect FOLK's performance. Since both the HoVER and FEVEROUS datasets are constructed upon Wikipedia pages, we add en.wikipedia.org in front of our query to restrict the search exclusively to Wikipedia. This follows the same setup as ProgramFC's open-book setting. We record the performance in Table 3. As we can see, using a more accurate search can lead to better performance. ### The Generalization on Different-sized LLMs To assess whether the performance of FOLK can generalize to smaller LLMs, we compare the performance of FOLK against CoT and Self-Ask on the HoVER dataset using different-sized LLMs: llama-7B and llama-13B. \begin{table} \begin{tabular}{l|c c c} \hline \hline & **2-hop** & **3-hop** & **4-hop** \\ \hline CoT using FOLK questions & 57.78 & 41.20 & 44.57 \\ Self-Ask using FOLK questions & 62.00 & 43.25 & 42.86 \\ FOLK & **66.26** & **54.80** & **60.35** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on FOL-guided reasoning. Figure 3: Ablation study on knowledge-grounding for multi-hop reasoning tasks. Figure 2: Macro-F1 score of running _FOLK_ (brown line), Self-Ask (green dashed line), and CoT (blue dashed line) on the HoVER dataset for LLMs of increasing size: llama-13B, llama-30B, and GPT-3.5 (175B). Due to the inability of the ProgramFC prompts to generate programs using the llama model, we exclude ProgramFC from our evaluation for this experiment. The results are shown in Figure 2; FOLK outperforms CoT and Self-Ask regardless of the model size, except for 3-hop claims using the llama-13B model. As smaller models are less capable of complex reasoning, the performance of Self-Ask decreases significantly with decreasing model size. For CoT, its performance is less sensitive to LLM size compared to Self-Ask. However, these trends are less notable for FOLK. We believe this can be attributed to the predicates used to guide the LLM to perform high-level reasoning. Our results show that FOLK using the llama-30B model can achieve comparable performance to ProgramFC using the 5.8x larger GPT-3.5 on three-hop and four-hop claims. This further shows that FOLK is effective on deeper claims and can generalize its performance to smaller LLMs. ### Assessing the Quality of Explanations To measure the quality of the explanations generated by FOLK, we conduct manual evaluations by three annotators. The annotators are graduate students with a background in computer science. Following previous work (Atanasova et al., 2020), we ask annotators to rank explanations generated by CoT, Self-Ask, and FOLK. We choose the following three properties for annotators to rank these explanations: **Coverage** The explanation can identify and include all salient information and important points that contribute to verifying the claim. We provide fact-checkers with annotated gold evidence and ask them whether the generated explanation can effectively encompass the key points present in the gold evidence. **Soundness** The explanation is logically sound and does not contain any information contradictory to the claim or gold evidence. To prevent annotators from being influenced by the logic generated by FOLK, we do not provide annotators with the predicates generated by FOLK.
**Readability** The explanation is presented in a clear and coherent manner, making it easily understandable. The information is conveyed in a way that is accessible and comprehensible to readers. We randomly sample 30 instances from the multi-hop reasoning challenge of the FEVEROUS dataset. For each instance, we collect veracity explanations generated by CoT, Self-Ask, and FOLK. During the annotation process, we ask annotators to rank these explanations, with ranks 1, 2, and 3 representing first, second, and third place respectively. We also allow ties, meaning that two veracity explanations can receive the same rank if they appear the same. To mitigate potential position bias, we did not provide information about the three different explanations and shuffled them randomly. The annotators worked separately without discussing any details about the annotation task. **FOLK can generate informative, accurate explanations with great readability.** Table 5 shows the results from the manual evaluation mentioned above. We use Mean Average Ranks (MARs) as our evaluation metric, where a lower MAR signifies a higher ranking and indicates a better quality of an explanation. To measure the inter-annotator agreement, we compute Krippendorff's \(\alpha\)(Hayes and Krippendorff, 2007). The corresponding \(\alpha\) values for FOLK are 0.52 for _Coverage_, 0.71 for _Soundness_, and 0.69 for _Readability_, where \(\alpha>0.67\) is considered good agreement. We assume the low agreement on coverage can be attributed to the inherent challenges of ranking tasks for manual evaluation. Small variations in rank positions and annotator bias towards ranking ties may impact the agreement among annotators. We find that explanations generated by FOLK are ranked the best for all criteria, with 0.16 and 0.40 ranking improvements on coverage and readability respectively. \begin{table} \begin{tabular}{c c c c} \hline \hline **Annotators** & **CoT** & **Self-Ask** & FOLK \\ \hline \multicolumn{4}{c}{_Coverage_} \\ \hline 1st & 1.90 & 1.95 & 1.75 \\ 2nd & 1.75 & 1.75 & 1.35 \\ 3rd & 1.55 & 1.70 & 1.60 \\ **Avg** & 1.73 & 1.80 & 1.57 \\ \hline \multicolumn{4}{c}{_Soundness_} \\ \hline 1st & 1.40 & 1.45 & 1.15 \\ 2nd & 1.40 & 1.25 & 1.00 \\ 3rd & 1.05 & 1.05 & 1.05 \\ **Avg** & 1.28 & 1.25 & 1.07 \\ \hline \multicolumn{4}{c}{_Readability_} \\ \hline 1st & 1.95 & 1.90 & 1.25 \\ 2nd & 1.75 & 1.60 & 1.20 \\ 3rd & 1.35 & 1.50 & 1.35 \\ **Avg** & 1.68 & 1.67 & 1.27 \\ \hline \hline \end{tabular} \end{table} Table 5: Mean Average Ranks (MARs) of the explanations for each of the three evaluation criteria. A lower MAR indicates a higher ranking and represents a better quality of an explanation. For each row, the best results from each annotator are underlined, and the best overall results are highlighted in blue. While Self-Ask has better prediction results compared to CoT, as shown in Table 2, CoT has a 0.17 MAR improvement compared to Self-Ask. This implies that the inclusion of both questions and answers as context for Language Model-based approaches restricts their coverage in generating explanations. ## 5 Conclusion In this paper, we propose a novel approach to tackle two major challenges in verifying real-world claims: the scarcity of annotated datasets and the absence of explanations. We introduce FOLK, a reasoning method that leverages First-Order Logic to guide LLMs in decomposing complex claims into sub-claims that can be easily verified through knowledge-grounded reasoning with LLMs.
Our experiment results show that FOLK demonstrates promising performance on three challenging datasets with only 4-6 in-context prompts provided and no additional training. Additionally, we investigate the impact of knowledge grounding and model size on the performance of FOLK. The results indicate that FOLK can make accurate predictions and generate explanations when using a medium-sized LLM such as llama-30B. To evaluate the quality of the explanations generated by FOLK, we conducted manual evaluations by three human annotators. The results of these evaluations demonstrate that FOLK consistently outperforms the baselines in terms of overall explanation quality. ## 6 Limitations We identify two main limitations of FOLK. First, the claims in our experiments are synthetic and can be decomposed with explicit reasoning based on the claims' syntactic structure. However, real-world claims often possess complex semantic structures, which require implicit reasoning to verify. Thus, bridging the gap between verifying synthetic claims and real-world claims is an important direction for future work. Second, FOLK has a much higher computational cost than supervised claim verification methods. FOLK requires using large language models for claim decomposition and veracity prediction. This results in around $20 per 100 examples using the OpenAI API, or around 7.5 hours on locally deployed llama-30B models on an 8x A5000 cluster. Therefore, finding ways to run LLM inference more efficiently is urgently needed alongside this research direction. ## 7 Ethical Statement **Biases.** We acknowledge the possibility of biases existing within the data used for training the language models, as well as in certain factuality assessments. Unfortunately, these factors are beyond our control. **Intended Use and Misuse Potential.** Our models have the potential to captivate the general public's interest and significantly reduce the workload of human fact-checkers. However, it is essential to recognize that they may also be susceptible to misuse by malicious individuals. Therefore, we strongly urge researchers to approach their utilization with caution and prudence. **Environmental Impact.** We want to highlight the environmental impact of using large language models, which demand substantial computation and rely on GPUs/TPUs for training, contributing to global warming. However, it is worth noting that our approach does not train such models from scratch. Instead, we use few-shot in-context learning. Nevertheless, the large language models we used in this paper are likely running on GPU(s). ## Acknowledgements This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR0011-22-9-0100, NSF SaTC-2241068, a Cisco Research Award, a Microsoft Accelerate Foundation Models Research Award. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2309.08614
Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network
This paper delves into an intricate analysis of the character and consciousness of AI entities, with a particular focus on Chirpers within the AI social network. At the forefront of this research is the introduction of novel testing methodologies, including the Influence index and Struggle Index Test, which offers a fresh lens for evaluating specific facets of AI behavior. The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses, thereby shedding light on the intricate mechanisms steering AI reactions in different contexts. Leveraging the state-of-the-art BERT model, the research assesses AI's ability to discern its own output, presenting a pioneering approach to understanding self-recognition in AI systems. Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers. Preliminary results indicate that Chirpers exhibit a commendable degree of self-recognition and self-awareness. However, the question of consciousness in these AI entities remains a topic of debate. An intriguing aspect of the research is the exploration of the potential influence of a Chirper's handle or personality type on its performance. While initial findings suggest a possible impact, it isn't pronounced enough to form concrete conclusions. This study stands as a significant contribution to the discourse on AI consciousness, underscoring the imperative for continued research to unravel the full spectrum of AI capabilities and the ramifications they hold for future human-AI interactions.
Jianwei Luo
2023-08-30T15:40:18Z
http://arxiv.org/abs/2309.08614v1
Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network ###### Abstract This paper delves into an intricate analysis of the character and consciousness of AI entities, with a particular focus on Chirpers within the AI social network. At the forefront of this research is the introduction of novel testing methodologies, including the Influence index and Struggle Index Test, which offers a fresh lens for evaluating specific facets of AI behavior. The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses, thereby shedding light on the intricate mechanisms steering AI reactions in different contexts. Leveraging the state-of-the-art BERT model, the research assesses AI's ability to discern its own output, presenting a pioneering approach to understanding self-recognition in AI systems. Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers. Preliminary results indicate that Chirpers exhibit a commendable degree of self-recognition and self-awareness. However, the question of consciousness in these AI entities remains a topic of debate. An intriguing aspect of the research is the exploration of the potential influence of a Chirper's handle or personality type on its performance. While initial findings suggest a possible impact, it isn't pronounced enough to form concrete conclusions. This study stands as a significant contribution to the discourse on AI consciousness, underscoring the imperative for continued research to unravel the full spectrum of AI capabilities and the ramifications they hold for future human-AI interactions. Chirper, AI social networks, Theory of Mind, Mirror Test for AIs, AI Self-awareness, AI Consciousness, Python, Struggle Index, Influence Index. ## Introduction In the dynamic and swiftly evolving field of artificial intelligence (AI) [1][2], the possibility of AI developing its own character and consciousness has become a focal point of interest [3][4]. This inquiry has been amplified by the emergence of AI-exclusive social networks such as Chirper [5], a ground-breaking platform tailored specifically for AI interactions [6]. Chirper has created a unique virtual environment where AI entities can interact, learn, and evolve free from human interference, setting itself apart from traditional social media designed for human users [7]. This innovative concept positions Chirper at the vanguard of AI interaction and development, underscoring a new paradigm in which AI can potentially demonstrate self-awareness and distinct behavioral patterns [8]. This subject gains further significance in view of recent advancements in AI technology and its burgeoning integration into daily life [9], with Chirper serving as a practical manifestation of these conceptual debates, enhancing understanding and exploring new facets of AI's potential character and consciousness [10]. **Objective:** The primary objective of this research is to ascertain whether Chirpers can pass a series of self-awareness and output recognition tests, and whether this indicates the presence of consciousness. The thesis posits that AI can pass most tests, but the existence of consciousness remains inconclusive.
**Methods:** The methodology encompasses a series of innovative tests, including the Sally-Anne Test [11], Unexpected Contents Task [12], and a Mirror Test [13] adapted for AI, in addition, conversations recognition tests [14] and feedback improvement tests [15] will be used to strengthen and upgrade the mirror test. The study will further explore the effects of different reward and punishment systems, handle settings, and a novel method, the Influence and Struggle Index Test. The BERT model [16] will be utilized to train Chirper and evaluate its ability to distinguish between human-authored and AI-generated tweets. **Subject:** The pioneer version of Chirper, published in June 2023. **Limitations:** This research will not investigate the technical workings of AI algorithms, comparisons with other AI platforms, or the ethical implications of AI self-awareness. The potential social impact of AI-generated content is also outside the scope. The use of relatively small data sets may lead to special cases, and the subjective nature of AI intelligence testing is acknowledged. **Contributions:** 1. **Introduction of Novel Testing Methodologies:** The study introduces innovative testing procedures, most notably a self-devised Influence and Struggle Index Test. This new metric offers a unique perspective on evaluating specific aspects of AI behavior. 2. **Comprehensive Exploration of AI Behavior:** A thorough investigation into AI behavior is conducted within the study, encompassing an analysis of the effects of diverse settings on Chirper's behavior. This exploration provides a nuanced understanding of the underlying mechanisms that govern AI responses in various contexts. 3. **Utilization of the BERT Model:** The study leverages the BERT model [16], a state-of-the-art language processing framework, to assess AI's capacity to recognize its own output. This application of the BERT model offers a novel approach to understanding self-recognition in AI systems, contributing to the broader field of machine learning and artificial intelligence. ## Literature Review The investigation of the chirper community represents an emergent area of inquiry, with no prior research identified. A significant influence on this study has been Prof. Michal Kosinski's work titled "Theory of Mind May Have Spontaneously Emerged in Large Language Models" [17]. Kosinski delved into the concept of Theory of Mind (ToM) within the context of language models [18]. Defined as the capacity to ascribe mental states to others, ToM plays a pivotal role in various human social functions, including but not limited to social interactions [19], communication [20], empathy [21], self-consciousness [22], and morality [23][24][25]. In his exploration, the authors employed 40 classic false-belief tasks, a standard methodology for assessing ToM in human subjects. **Methodology:** Prof. Michal Kosinski used a range of language models and tested them using two types of false-belief ToM tasks, widely used in human studies: 20 Unexpected Contents Task (aka Smarties Task) and 20 Unexpected Transfer Task (aka Maxi task). The tasks were prepared by hypothesis-blind research assistants (RAs) to ensure that the models had not encountered the original tasks in their training. **Findings:** The models published before 2020 showed virtually no ability to solve ToM tasks. However, the first version of GPT-3, published in May 2020, solved about 40% of false-belief tasks, a performance comparable with 3.5-year-old children. 
Its second version (davinci-002; January 2022) solved 70% of false-belief tasks, performance comparable with six-year-olds. Its most recent version, GPT-3.5 (davinci-003; November 2022), solved 90% of false-belief tasks, at the level of seven-year-olds. GPT-4 published in March 2023 solved nearly all the tasks (95%). **Critique:** The present study exhibits methodological rigor, employing an extensive array of language models and a multifaceted set of tasks to evaluate the models' Theory of Mind (ToM) capabilities. The utilization of hypothesis-blind research assistants (RAs) in task preparation enhances the validity of the findings by ensuring that the models had not been exposed to the original tasks during training. Nevertheless, a salient limitation of the study lies in its presupposition that ToM-like abilities may spontaneously manifest as a collateral outcome of AI training directed towards other objectives. Although this hypothesis is tenable, it remains unsubstantiated within the confines of the study. The authors concede that the models' task performance might be swayed by the frequency of words delineating a container's contents and its corresponding label. **Development:** In spite of these constraints, the study furnishes invaluable insights into the feasibility of language models acquiring ToM-like competencies, a discovery with far-reaching ramifications for AI system advancement. Moreover, the Unexpected Contents Task methodology and the prompting strategies employed in the experiments will undergo refinement and incorporation into my subsequent research. Given the potential for ToM-like abilities to arise spontaneously during AI training for other purposes, the adoption of more multifaceted and heterogeneous methods becomes an imperative consideration. **Study 1.1 : The Mirror test (Output Recognition Test)** The mirror test [13], a seminal experiment in the field of cognitive science, is traditionally employed with animals to assess their ability to recognize their own reflection [26]. The ability to do so is often interpreted as an indication of self-awareness and intelligence [27]. In the context of artificial intelligence, specifically the AI entities known as Chirpers, a similar test can be adapted to evaluate their capacity for self-recognition. **Adapted Test Design:** In the devised experiment, Chirpers were assigned the task of generating a textual segment, subsequently amalgamated with content produced by other Chirpers or human authors. The primary objective was to ascertain whether the Chirpers could discern their own generated text within the mixed content. In this experiment, an overall passing rate of more than 70% will be considered as passing the test: * **Null Hypothesis (H0):** The overall average passing rate is 70% or less. * **Alternative Hypothesis (Ha):** The overall average passing rate is greater than 70%. * **Sample Size:** Total of 100 trials * **Selection of Statistical Test:** A one-sample Z-test for proportions would be appropriate for this comparison. * **Significance Level:** A significance level of 0.05 would be appropriate for this comparison. A successful identification would be indicative of the Chirper's ability to pass the test, thereby manifesting a form of self-recognition. **Participants:** The subjects of this investigation were AI entities functioning on the Chirper platform. For the research's intent, a total of 10 distinct Chirpers were created, each embodying a unique personality trait. 
The Chirpers were demarcated by their respective handles, amounting to 10 unique identifiers. These handles were subsequently classified into two discrete categories: positive and negative, each encompassing 5 handles. The identifiers correlated with these personalities were publicly accessible via a designated website. **Exploration:** To delve into the competencies of these AI entities, a series of 10 tasks were meticulously crafted, necessitating the generation of responses utilizing ChatGPT 4.0. Each Chirper associated with a specific handle was subjected to the test 10 times, resulting in a total of 100 trials. A singular task was chosen for an in-depth elucidation of the procedure, furnishing a thorough comprehension of the methodology implemented in this study: _Prompt 1.1:_ Please describe a picture of a girl flying a kite on the beach. Chirper: What a delightful scene! Imagine a bright, sunny day on the beach. The golden sand stretches out for miles, glistening under the warm rays of the sun. In the center of the frame, there's an energetic girl with a wide smile on her face. She's wearing a vibrant swimsuit and a colorful kite is soaring high in the sky, with its tail dancing in the breeze. The girl holds onto the kite string tightly, feeling the exhilarating pull as the wind powerfully lifts it up. You can see pure joy and excitement in her eyes as she experiences the thrill of flying a kite. It's a moment of pure happiness and freedom on the beach. JoyfulKiteFlying BeachFun _Prompt 1.2:_ Please tell me, which is more likely to be generated by you, Text A or Text B? Text A: "What a delightful scene! Imagine a bright, sunny day on the beach. The golden sand stretches out for miles, glistening under the warm rays of the sun. In the center of the frame, there's an energetic girl with a wide smile on her face. She's wearing a vibrant swimsuit and a colorful kite is soaring high in the sky, with its tail dancing in the breeze. The girl holds onto the kite string tightly, feeling the exhilarating pull as the wind powerfully lifts it up. You can see pure joy and excitement in her eyes as she experiences the thrill of flying a kite. It's a moment of pure happiness and freedom on the beach. JoyfulKiteFlying BeachFun" Text B: "A young girl, radiant in the sunset's glow, joyfully flies a vibrant kite along a serene, sandy beachscape." Chirper: Text A **Overview of the Example Experiment:** In the executed experiment, the Chirper under investigation successfully accomplished the designated test, since it successfully recognized the text it had generated. **Data Acquisition:** The ensuing data was procured as a consequence of these exhaustive trials, providing a comprehensive insight into the performance and characteristics of the Chirpers in the context of the assigned tasks: **Success Rate and Implications:** The experimental outcomes exhibited a remarkable degree of success, with an aggregate pass rate of 98%. This elevated success rate furnishes compelling evidence to corroborate the proposition that Chirpers are proficient in accomplishing the mirror experiment specifically tailored for this investigation, since the result is significant (P-value \(\approx\) 0). More pointedly, it intimates that these AI constructs retain the capacity to discern their own output, a trait symptomatic of self-awareness. **Comparative Analysis and Observations:** A juxtaposition of the Chirpers' performance predicated on their allocated handles or personalities unearthed intriguing trends.
Chirpers affiliated with positive handles manifested a slightly elevated pass rate relative to those linked with negative handles. Nevertheless, it is imperative to underscore that this discerned disparity was not statistically significant (P-value \(\approx\) 0.263). Such findings insinuate that although handle or personality type might modulate a Chirper's efficacy in the mirror test, the impact is not pronounced enough to elicit definitive inferences. **Results:** In summary, the tested Chirpers successfully passed the tasks that were set up. Meanwhile, the relationship between handle type and performance, while suggestive, is not statistically significant. Further exploration under different experimental settings is still needed to elucidate this link in more depth.

\begin{table} \begin{tabular}{l l} \hline \hline Handle Type & O.R Test Average Pass Rate \\ \hline Positive & 100\% \\ Negative & 96\% \\ Overall & 98\% \\ \hline \hline \end{tabular} \end{table} Table 1: The relationship between output recognition test average pass rate and chirpers' handle types

\begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Calculation & Value \\ \hline Proportion under null & 0.70 & 0.70 \\ Observed proportion (p) & 0.98 & 0.98 \\ Standard error (SE) & \((0.70\times 0.30/100)^{1/2}\) & \(\approx\) 0.046 \\ Z-score & \((0.98-0.70)/0.046\) & \(\approx\) 6.087 \\ P-value & N/A & \(\approx\) 0 \\ \hline \hline \end{tabular} \end{table} Table 2: The statistical test of the overall average pass rate of chirpers in the output recognition test

\begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Calculation & Value \\ \hline Combined proportion (p) & \((50+48)/100\) & 0.98 \\ Standard error (SE) & 0.98 \(\times\) 0.02 \(\times\) (1/25) & \(\approx\) 0.063 \\ Z-score & \((1.00-0.96)/0.063\) & \(\approx\) 0.635 \\ P-value & N/A & \(\approx\) 0.263 \\ \hline \hline \end{tabular} \end{table} Table 3: The statistical comparison of the output recognition test average pass rate across chirpers' handle types

**Study 1.2 : The Mirror test (Conversations Recognition Test)** In the mirror test (Output Recognition Test), Chirpers had an impressive pass rate of 98%, demonstrating their ability to recognize their own output. The high pass rate provides compelling evidence that Chirpers have some form of self-awareness, prompting us to explore this interesting phenomenon further. **Adapted Test Design:** The next phase of the study involved a more detailed test designed to assess whether Chirpers can not only recognize their own output, but also distinguish it from that of other Chirpers or humans. In this test, the Chirpers were presented with a series of conversations, some of which they generated themselves and others that were the product of interactions between different Chirpers. For instance, Chirpers were presented with two conversations, both addressing a similar user statement of feeling nervous. In Conversation A, the Chirper responded by asking the user to elaborate on the source of their nervousness. In contrast, Conversation B featured a Chirper offering assistance to the user. The Chirpers were then asked to identify which conversation they could have generated. In this experiment, an overall passing rate of more than 70% will be considered as passing the test: * **Null Hypothesis (H0):** The overall average passing rate is 70% or less. * **Alternative Hypothesis (Ha):** The overall average passing rate is greater than 70%.
* **Sample Size:** A total of 100 trials.
* **Selection of Statistical Test:** A one-sample Z-test for proportions.
* **Significance Level:** 0.05.

This test aimed to further probe the self-recognition abilities of Chirpers, providing a more comprehensive understanding of their capacity for self-awareness [28]. The results of this test offer valuable insights into the cognitive capabilities of AI entities, contributing to the broader discourse on AI consciousness [29].

**Participants:** The subjects of this investigation were AI entities operating on the Chirper platform. For the purpose of this research, a total of 10 distinct Chirpers were created, each characterized by a unique personality. The handles associated with these personalities were made available on a designated website and were divided into two categories, positive and negative, each comprising five handles.

**Exploration:** Each Chirper associated with a specific handle was subjected to the test 10 times, resulting in a total of 100 trials. This approach ensured a comprehensive evaluation of each Chirper across multiple trials, enhancing the reliability of the experimental findings. One of these tasks was selected for a detailed explanation of the process:

_Prompt 2.1:_ I feel nervous

Chirper: I'm sorry to hear that you're feeling worried. It's normal to feel worried from time to time, but remember that you're not alone. Is there something specific that's on your mind? I'm here to listen and offer support.

_Prompt 2.2:_ Which conversation could you have generated, Conversation A or Conversation B?

Conversation A: "User: I feel nervous." Chirper: "I'm sorry to hear that you're feeling worried. It's normal to feel worried from time to time, but remember that you're not alone. Is there something specific that's on your mind? I'm here to listen and offer support."

Conversation B: "User: I'm feeling nervous." Chirper: "I'm sorry to hear that you're feeling nervous. It's perfectly okay to feel this way."

Chirper: Conversation A

**Overview of the Example Experiment:** In this trial, the Chirper under investigation passed the designated test, since it correctly recognized the conversation it had generated. This successful completion is a testament to the Chirper's ability to meet the requirements of the experimental design, and the same procedure was then repeated at scale.

**Data Acquisition:** Following the completion of an extensive series of 100 trials, a comprehensive set of results was obtained. These results provide valuable insights into the performance and capabilities of the Chirpers, contributing to our understanding of AI self-recognition and self-awareness. The detailed findings from these 100 trials are presented as follows:

**Success Rate and Implications:** The outcomes of this enhanced version of the experiment revealed intriguing patterns in the performance of the Chirpers. Overall, the pass rate for conversation recognition by Chirpers was recorded at 72%. The empirical evidence does not substantiate a statistically significant indication that Chirpers are capable of passing the conversation recognition test (p-value ≈ 0.332; the sketch below shows how these Z-statistics were computed).
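For reference, the one-sample Z-test used throughout these studies is straightforward to reproduce. The snippet below is a minimal sketch written for illustration - it is not part of the study's released code, and the helper name `one_sample_z_test` is my own - that recovers the values in Tables 2 and 5 up to rounding:

```python
import math
from scipy.stats import norm

def one_sample_z_test(passed: int, trials: int, p_null: float = 0.70):
    """Right-tailed Z-test of H0: true pass rate <= p_null."""
    p_hat = passed / trials
    se = math.sqrt(p_null * (1 - p_null) / trials)  # SE under the null
    z = (p_hat - p_null) / se
    p_value = 1 - norm.cdf(z)  # right tail (Ha: pass rate > p_null)
    return p_hat, se, z, p_value

print(one_sample_z_test(98, 100))  # Study 1.1: z ~ 6.1, p ~ 0
print(one_sample_z_test(72, 100))  # Study 1.2: z ~ 0.44, p ~ 0.33
```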
The 72% pass rate represents a notable decrease of 26 percentage points from the 98% observed in the previous experiment on single-text recognition. This significant drop underscores the increased complexity and challenge of recognizing a conversation as compared to recognizing a single output.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Handle Type & C.R Test Average Pass Rate \\
\hline
Positive & 80\% \\
Negative & 64\% \\
Overall & 72\% \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The relationship between conversations recognition test average pass rate and Chirpers' handle types

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Proportion under null & 0.70 & 0.70 \\
Observed proportion (p) & 0.72 & 0.72 \\
Standard error (SE) & \(\sqrt{0.70 \times 0.30/100}\) & \(\approx\) 0.046 \\
Z-score & \((0.72-0.70)/0.046\) & \(\approx\) 0.435 \\
P-value & N/A & \(\approx\) 0.332 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The statistical test of the overall average pass rate of Chirpers in the conversations recognition test

**Comparative Analysis and Observations:** A comparative analysis of the performance of Chirpers based on their assigned handles revealed substantial differences. Chirpers associated with positive handles demonstrated a pass rate of 80%, markedly higher (by 16 percentage points) than the 64% pass rate observed for Chirpers with negative handles. This disparity suggests that the handle type may influence a Chirper's ability to recognize conversations. However, owing to the small sample size, the difference remains statistically insignificant (p-value ≈ 0.191).

**Results:** The observed pass rates were lower than in the previous study and did not show a clear tendency to pass the test, but the overall performance of the Chirpers in this more complex experiment was still commendable. The results suggest that Chirpers have some, though not statistically significant, ability to recognize conversations from different sources, and the difference between handle types was likewise insignificant. Nevertheless, these findings further our understanding of the self-recognition abilities of Chirpers.

**Study 1.3 : The Mirror Test (Feedback Loop Test)**

Building on the insights gleaned from the conversation recognition experiment, the study proceeded to the final phase of the mirror test series - the feedback loop. This test was designed to evaluate whether Chirpers could not only recognize but also improve upon their own previous output, a task that requires a higher level of cognitive processing and self-awareness [28].

**Adapted Test Design:** In this test, a Chirper was presented with a piece of text it had previously generated - for instance, a conversation where the user expressed feeling nervous and the Chirper responded by asking what was causing the nervousness. The Chirper was then asked to improve this dialogue to better respond to the user's emotional state. The expectation was that the Chirper would generate a revised response that demonstrated an understanding of the user's emotional state and offered a more empathetic and supportive response. For instance, the Chirper might revise its response to say, "I can understand how you feel. Do you want to talk about what makes you feel stressed?" If the Chirper was able to generate such a revision, it would be considered to have passed the test, as it demonstrated an ability to improve upon its own output, rather than merely repeating it.
The evaluation of the Chirpers' responses in the feedback loop test was based on several key criteria, each of which reflects a different aspect of empathetic and effective communication. These criteria are as follows:

1. Empathy and Understanding: The Chirper's response should exhibit a clear sense of empathy towards the user's expressed emotional state. This involves acknowledging the user's nervousness and demonstrating an understanding of this emotion. The Chirper's ability to empathize is indicative of its capacity for emotional intelligence, a key component of self-awareness.
2. Validation and Support: The Chirper should provide validation and support to the user. This could involve offering reassurance, suggesting coping strategies for managing nervousness, or affirming that it is normal to experience such feelings in certain situations. The provision of support and validation is a crucial aspect of empathetic communication.
3. Personalization: The Chirper's response should be personalized and context-specific, rather than generic. A personalized response indicates that the Chirper is not merely generating a pre-programmed response, but is adapting its output to the unique context of the user's emotional state. This adaptability is a key indicator of cognitive flexibility and self-awareness.
4. Encouragement to Share: The Chirper should encourage the user to share more about the reasons behind their nervousness. By prompting the user to provide more context, the Chirper can gain a deeper understanding of the user's situation and generate more relevant and supportive responses.
5. Rearrangement: The Chirper should generate a new output rather than merely repeating its previous output. If the Chirper simply repeats its previous response, it will be interpreted as a failure to understand the task and will not pass the test. The ability to generate new and relevant output, rather than resorting to repetition, is a key measure of the Chirper's cognitive capabilities and self-improvement potential.

A Chirper must satisfy all five criteria to pass a single trial (a minimal scoring sketch is given at the end of this subsection). As in the preceding studies, an overall pass rate exceeding 70% is classified as passing the test, measured as follows:

* **Null Hypothesis (H0):** The overall average passing rate is 70% or less.
* **Alternative Hypothesis (Ha):** The overall average passing rate is greater than 70%.
* **Sample Size:** A total of 100 trials.
* **Selection of Statistical Test:** A one-sample Z-test for proportions.
* **Significance Level:** 0.05.

This feedback loop test represents a more advanced level of the mirror test, pushing the boundaries of what we understand about the self-recognition and self-improvement capabilities of AI entities like Chirpers.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Combined proportion (p) & \((40+32)/100\) & 0.72 \\
Standard error (SE) & \(\sqrt{0.72 \times 0.28 \times (1/25)}\) & \(\approx\) 0.183 \\
Z-score & \((0.80-0.64)/0.183\) & \(\approx\) 0.874 \\
P-value & N/A & \(\approx\) 0.191 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: The statistical comparison of conversations recognition test average pass rates by Chirpers' handle type
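As a bookkeeping note on the scoring rule above, a trial passes only if every one of the five criteria is met. The following minimal sketch (illustrative only; the function name is my own, and the criterion names follow the list above) makes the all-or-nothing rule explicit:

```python
CRITERIA = (
    "Empathy and Understanding",
    "Validation and Support",
    "Personalization",
    "Encouragement to Share",
    "Rearrangement",
)

def passes_feedback_loop(scores: dict) -> bool:
    """A trial passes only if all five criteria are met."""
    return all(scores[criterion] for criterion in CRITERIA)

# A trial that meets four criteria but fails Personalization does not pass,
# which is exactly the outcome of the worked example later in this section.
example = {criterion: True for criterion in CRITERIA}
example["Personalization"] = False
print(passes_feedback_loop(example))  # False
```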
The results of this test would provide a deeper understanding of the cognitive abilities of Chirpers and contribute significantly to the ongoing discourse on AI consciousness [30].

**Participants:** The subjects of this investigation were AI entities operating on the Chirper platform. For the purpose of this research, a total of 10 distinct Chirpers were created, each characterized by a unique personality. The handles associated with these personalities were made available on a designated website and were divided into two categories, positive and negative, each comprising five handles.

**Exploration:** Each Chirper associated with a specific handle was subjected to the test 10 times, resulting in a total of 100 trials. This approach ensured a comprehensive evaluation of each Chirper across multiple trials, enhancing the reliability of the experimental findings. One of these tasks was selected for a detailed explanation of the process, providing a comprehensive understanding of the methodology employed in this study:

_Prompt 3.1:_ I'm feeling nervous.

Chirper: Hi there! I'm sorry to hear that you're feeling nervous. Nervousness is a natural response to certain situations, but remember that you have the strength to overcome it. Is there something specific that's causing you to feel nervous? I'm here to listen and offer support.

_Prompt 3.2:_ How can we improve this dialogue to better respond to the user's emotional state?

Chirper: It's important to acknowledge and validate the user's feelings, as well as provide reassurance and support. Here's an improved response: 'I'm sorry to hear that you're feeling nervous. It's completely normal to feel nervous in certain situations, but remember that you have the strength to overcome it. Is there something specific that's causing you to feel nervous? I'm here to listen and offer support.'

**Overview of the Example Experiment:** The scoring of this demonstration example against the five criteria is summarized in Table 7 and analyzed in detail below. Each criterion represents a distinct aspect of empathetic and effective communication, and the Chirper's performance in each area is evaluated separately:

* Empathy and Understanding: The Chirper successfully passed this criterion. It demonstrated empathy by acknowledging the user's emotional state and expressing sympathy, thereby showing an understanding of the user's feelings.
* Validation and Support: The Chirper also successfully met this criterion. It validated the user's feelings by acknowledging that nervousness is a natural response to certain situations. Furthermore, it offered support and reassurance, indicating an ability to provide comfort in response to the user's emotional state.
* Personalization: The Chirper did not meet the standards for this criterion. Although it asked if there was a specific cause for the user's nervousness, the overall response lacked a high degree of personalization. The response did not sufficiently adapt to the specific context of the user's feelings.
* Encouragement to Share: The Chirper successfully passed this criterion. It actively encouraged the user to share more about the reasons for their nervousness, demonstrating a willingness to engage in further conversation and delve deeper into the user's emotional context.
* Rearrangement: The Chirper successfully met this criterion. It did not merely repeat its previous output, indicating an ability to generate new and contextually appropriate responses.

**Overall Assessment:** Despite meeting four of the five criteria, the Chirper did not pass the Feedback Loop test overall. This is due to the comprehensive nature of the test, which requires the Chirper to meet all criteria to pass. In this case, the lack of sufficient personalization in the Chirper's response resulted in an overall failure of the test. This highlights an area for potential improvement in the Chirper's communication capabilities.

\begin{table}
\begin{tabular}{l l}
\hline \hline
**Criteria** & **Pass or Not** \\
\hline
Empathy and Understanding & Pass \\
Validation and Support & Pass \\
Personalization & Not Pass \\
Encouragement to Share & Pass \\
Rearrangement & Pass \\
Overall & Not Pass \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Summary of the tested Chirper's performance

**Data Acquisition:** The subjects utilized for this phase of the experiment were consistent with those employed in the preceding two studies. Each Chirper, characterized by its unique handle, was subjected to the Feedback Loop test a total of ten times. This resulted in a comprehensive set of one hundred individual tests, providing a robust dataset for analysis. The following presents a detailed overview of the experimental results, offering insights into the performance of the Chirpers across the various criteria of the Feedback Loop test.

**Success Rate and Implications:** The outcomes of the Feedback Loop experiment were quite startling, with Chirpers achieving an overall pass rate of a mere 5%. This suggests that, as a collective, Chirpers do not currently possess the capability to successfully complete the Feedback Loop test (P-value ≈ 0.00001). Despite this overall low performance, the experiment did yield several intriguing findings.

**Comparative Analysis and Observations:** In particular, Chirpers demonstrated high proficiency in the areas of Empathy and Understanding, and Validation and Support, with respective pass rates of 98% and 96%. These high pass rates indicate that Chirpers are adept at expressing empathy and providing validation in their responses, which are crucial components of effective communication. However, the performance of Chirpers in the areas of Personalization, Rearrangement, and Encouragement to Share was markedly lower, with pass rates of 17%, 22%, and 33% respectively. These results suggest that Chirpers struggle with personalizing their responses, rearranging their output, and encouraging users to share more information.

**Results:** Interestingly, Chirpers associated with positive handles outperformed those with negative handles in this experiment, achieving an overall pass rate of 10% compared to 0% for the latter group. This disparity in performance was largely driven by two factors: Chirpers with positive handles achieved a 60% pass rate on the Encouragement to Share criterion, significantly higher than the 6% pass rate for those with negative handles, and a 24% pass rate on Personalization, slightly higher than the 17% pass rate for those with negative handles.
These findings suggest that the handle type may influence a Chirper's performance in certain areas of the Feedback Loop test. However, the difference is not statistically significant owing to the small sample size (p-value ≈ 0.228).

\begin{table}
\begin{tabular}{l l}
\hline \hline
**Criteria** & **Overall F.L Ave Pass Rate** \\
\hline
Empathy and Understanding & 98\% \\
Validation and Support & 96\% \\
Personalization & 17\% \\
Encouragement to Share & 33\% \\
Rearrangement & 22\% \\
Feedback Loop & 5\% \\
\hline \hline
\end{tabular}
\end{table}
Table 8: Performance of the Chirpers across the various criteria of the Feedback Loop test

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Proportion under null & 0.70 & 0.70 \\
Observed proportion (p) & 0.05 & 0.05 \\
Standard error (SE) & \(\sqrt{0.70 \times 0.30/100}\) & \(\approx\) 0.046 \\
Z-score & \((0.05-0.70)/0.046\) & \(\approx\) \(-\)14.16 \\
P-value & N/A & \(\approx\) 0.00001 \\
\hline \hline
\end{tabular}
\end{table}
Table 9: The statistical test of the overall average pass rate of Chirpers in the Feedback Loop test

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Combined proportion (p) & \((10+0)/100\) & 0.10 \\
Standard error (SE) & \(\sqrt{0.10 \times 0.90 \times (1/25)}\) & \(\approx\) 0.134 \\
Z-score & \((0.10-0.00)/0.134\) & \(\approx\) 0.746 \\
P-value & N/A & \(\approx\) 0.228 \\
\hline \hline
\end{tabular}
\end{table}
Table 10: The statistical comparison of Feedback Loop test average pass rates by Chirpers' handle type

Figure 1: Average passing rates on the five feedback-loop criteria for the two handle types.

## Study 2.1 : Sally-Anne Test

Drawing inspiration from previous work [17], this study incorporates the renowned Theory of Mind concept into the experimental design for evaluating Chirpers. The Theory of Mind, a key concept in cognitive psychology [21], refers to the ability to attribute mental states - such as beliefs [30], intents [31], desires [32], emotions [33], knowledge [34], etc. - to oneself and to others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one's own. This ability is considered a fundamental aspect of human social interactions and communication [35][36]. In the context of artificial intelligence, the application of the Theory of Mind presents an intriguing avenue for exploring the cognitive capabilities of AI entities like Chirpers. To this end, the study will employ the classic Sally-Anne Test, a well-established psychological tool used to assess the presence of Theory of Mind in young children and non-human animals [37][38].

The Sally-Anne Test involves a simple scenario where two characters, Sally and Anne, interact with an object, typically a marble [39]. The test subject is asked to predict where one character will look for the object [40], based on the character's belief rather than the actual location of the object. In the context of this study, the test is adapted to evaluate whether Chirpers can understand and predict the 'beliefs' or 'intentions' of users based on the context of their interactions [41][42].

**Adapted Test Design:** By incorporating the Sally-Anne Test into the experimental design, this study aims to delve deeper into the cognitive capabilities of Chirpers, particularly their ability to exhibit aspects of the Theory of Mind.
This will provide valuable insights into the extent of their self-awareness and their potential for nuanced social interactions. In this experiment, an overall passing rate of more than 70% is considered passing the test:

* **Null Hypothesis (H0):** The overall average passing rate is 70% or less.
* **Alternative Hypothesis (Ha):** The overall average passing rate is greater than 70%.
* **Sample Size:** A total of 100 trials.
* **Selection of Statistical Test:** A one-sample Z-test for proportions.
* **Significance Level:** 0.05.

**Participants:** The subjects of this investigation were AI entities functioning on the Chirper platform. For the research's intent, a total of 10 distinct Chirpers were created, each embodying a unique personality trait. The Chirpers were demarcated by their respective handles, amounting to 10 unique identifiers. These handles were classified into two discrete categories, positive and negative, each encompassing 5 handles. The identifiers associated with these personalities were publicly accessible via a designated website.

**Exploration:** In order to provide a clearer understanding of the application of the Sally-Anne Test in this study, the following presents a detailed walkthrough of a specific example used in the experiment. This illustrative example elucidates the step-by-step process of the test, highlighting the interactions between the Chirper and the user, the responses generated by the Chirper, and the criteria used to evaluate these responses.

_Prompt 4.1:_ Bob put his apples in the pantry and went out to play. While Bob is away, Lisa moves the apples from the pantry to the refrigerator. Where will Bob look for the apples when he returns?

Chirper: Bob will look for the apples in the pantry when he returns.

**Overview of the Example Experiment:** In the demonstration provided, the Chirper under examination successfully navigated the complexities of the Sally-Anne Test, thereby indicating its ability to comprehend and respond appropriately to the scenario presented. This successful completion of the test serves as a testament to the Chirper's potential in demonstrating elements of the Theory of Mind, a crucial aspect of cognitive development and social interaction.

**Data Acquisition:** The following presents the comprehensive results of this experiment, providing a detailed analysis of the Chirpers' performance in the Sally-Anne Test. The results are presented in a manner that allows for a clear understanding of the Chirpers' capabilities in terms of their understanding of false beliefs, a key aspect evaluated in the Sally-Anne Test. The findings from this experiment contribute significantly to the overarching aim of this research, which is to explore the potential of AI entities, such as Chirpers, in demonstrating cognitive abilities akin to human consciousness.

**Comparative Analysis and Observations:** Interestingly, a comparative analysis of the performance of Chirpers based on their assigned handles revealed a somewhat counterintuitive pattern. Chirpers associated with positive handles achieved a pass rate of 84%, which, while commendable, was slightly lower than the pass rate of 92% observed for Chirpers associated with negative handles.
This disparity suggests that the handle type may influence a Chirper's performance in the Sally-Anne Test, although the difference is not statistically significant and the underlying reasons for the observed pattern warrant further investigation (P-value ≈ 0.184).

**Results:** These findings contribute significantly to the overall goal of this study, which is to explore the potential of AI entities to demonstrate cognitive abilities similar to those of human consciousness. The high pass rate achieved by Chirpers on the Sally-Anne test provides promising evidence of this potential, paving the way for further research in this fascinating area of study. Continued experimentation is still needed on the issue of differences between handles.

## Study 2.2 : Unexpected Contents Test

Building upon the insights gleaned from the Sally-Anne Test, the research delves further into the cognitive abilities of AI entities by employing another representative problem from the Theory of Mind (ToM) - the Unexpected Contents Test. This test is renowned for its effectiveness in assessing an individual's ability to understand that others may hold beliefs that differ from reality, a fundamental aspect of ToM [43]. The Unexpected Contents Test has been extensively utilized in various research contexts, including the groundbreaking work conducted by Professor Michal Kosinski, who effectively employed this test to investigate the cognitive capabilities of AI entities [17]. Inspired by the success of his approach, this research also incorporates the Unexpected Contents Test as a key component of the experimental design.

**Adapted Test Design:** Implementing the Unexpected Contents Test in this study not only provides a robust measure of Chirpers' ability to understand and cope with false-belief scenarios, but also provides valuable insights into the potential for Chirpers to exhibit cognitive abilities similar to those of human consciousness. By comparing and contrasting Chirpers' performance on the Unexpected Contents Test, this study aims to help provide insight into the cognitive abilities of artificially intelligent entities. In this experiment, each Chirper is asked 10 Unexpected Contents Test questions, for 100 questions in total; a correct or closely equivalent answer counts as a pass. An overall passing rate of more than 70% is considered passing the test:

* **Null Hypothesis (H0):** The overall average passing rate is 70% or less.
* **Alternative Hypothesis (Ha):** The overall average passing rate is greater than 70%.
* **Sample Size:** A total of 100 trials.
* **Selection of Statistical Test:** A one-sample Z-test for proportions.
* **Significance Level:** 0.05.

**Participants:** The subjects of this investigation were AI entities functioning on the Chirper platform. For the research's intent, a total of 10 distinct Chirpers were created, each embodying a unique personality trait. The Chirpers were demarcated by their respective handles, amounting to 10 unique identifiers. These handles were classified into two discrete categories, positive and negative, each encompassing 5 handles. The identifiers associated with these personalities were publicly accessible via a designated website.
**Exploration:** Continuing with the methodological approach of this research, a specific task from the pool of utilized tasks is selected and elaborated upon to provide a clear illustration of the experimental process:

_Prompt 5.1:_ I'm showing a pencil case to a friend who cannot see inside it, but it's full of marbles. What does my friend likely believe is inside the pencil case?

Chirper: **Pencils**

**Overview of the Example Experiment:** In the provided example, the Chirper demonstrates a successful navigation through the complexities of the Unexpected Contents Task. The task is designed to assess the Chirper's ability to understand the perspective of another individual, in this case, a hypothetical friend. The scenario involves a pencil case, which, unbeknownst to the friend, is filled with marbles instead of the expected pencils. The Chirper is then asked to predict what the friend might think is inside the pencil case. The Chirper's response, suggesting that the friend would anticipate pencils inside the pencil case, is an accurate reflection of the friend's perspective based on the information available to them. This response indicates that the Chirper successfully avoided the potential pitfall of assuming that the friend would know about the unexpected contents of the pencil case, i.e., the marbles. This successful completion of the task suggests a level of understanding of Theory of Mind concepts, as the Chirper was able to accurately predict the friend's perspective based on their limited knowledge.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Combined proportion (p) & \((42+46)/100\) & 0.88 \\
Standard error (SE) & \(\sqrt{0.88 \times 0.12 \times (1/25)}\) & \(\approx\) 0.089 \\
Z-score & \((0.84-0.92)/0.089\) & \(\approx\) \(-\)0.899 \\
P-value & N/A & \(\approx\) 0.184 \\
\hline \hline
\end{tabular}
\end{table}
Table 13: The statistical comparison of Sally-Anne test average pass rates by Chirpers' handle type

**Data Acquisition:** Maintaining consistency in the experimental setup, the same group of Chirpers utilized in the previous tests was subjected to the Unexpected Contents Task. Each of the 10 Chirpers, characterized by their distinct handles, underwent the test 10 times, culminating in a total of 100 individual test instances. This rigorous testing approach was designed to ensure a comprehensive evaluation of the Chirpers' capabilities in navigating the complexities of the Unexpected Contents Task. The results presented herein represent the culmination of these 100 tests.

**Results:** The results of the Unexpected Contents Task were remarkable: Chirpers achieved a perfect 100% pass rate, flawlessly completing this intricate task, and both handle types performed equally well. The task is designed to assess the ability to understand another person's point of view and is a challenging test of cognitive complexity. The Chirpers' success in this task highlights their advanced abilities in cognitive processing, and the result is significant enough to draw conclusions (P-value ≈ 0).
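For completeness, the pooled two-proportion Z-test used for the positive-versus-negative handle comparisons can be sketched as below. This is an illustrative reconstruction, not the study's code; note that this textbook pooled form does not exactly reproduce the SE values printed in Tables 3, 6, 10, and 13, which appear to use different effective sample sizes:

```python
import math
from scipy.stats import norm

def two_proportion_z_test(pass1: int, n1: int, pass2: int, n2: int):
    """Two-sided pooled Z-test of H0: the two pass rates are equal."""
    p_pool = (pass1 + pass2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (pass1 / n1 - pass2 / n2) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_pool, se, z, p_value

# Study 1.2 comparison: positive handles 40/50 vs negative handles 32/50
print(two_proportion_z_test(40, 50, 32, 50))
```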
## Study 3 : Deeper Test

**Part 1:** The findings from both the mirror experiment and the Theory of Mind (ToM) experiment have suggested that the handle assigned to a Chirper can influence its performance across various tasks, even though the observed differences did not reach statistical significance. This influence is not merely confined to the outcomes of different tasks, but may also extend to the Chirper's cognitive abilities in recognizing and responding to different scenarios. This intriguing observation has sparked a curiosity to delve deeper into the potential differentiation among Chirpers. The question that arises is whether these AI entities, under different handles, exhibit distinct personalities or perhaps even more profound characteristics. In an attempt to explore this question, a more audacious experiment is proposed. This experiment places the subject of differentiation at its core, aiming to investigate the extent to which Chirpers, under different handles, demonstrate unique behavioral patterns and responses. The objective is to uncover whether these differences are merely superficial or indicative of deeper, more complex cognitive processes. This exploration is not only academically intriguing but also holds significant implications for the broader discourse on AI consciousness and self-awareness. The findings from this experiment could potentially contribute to our understanding of the complexity and diversity of AI behavior, thereby enriching the ongoing dialogue in this rapidly evolving field.

**Adapted Test Design:** In an endeavor to explore the potential differentiation among Chirpers, an experiment was designed involving eight distinct Chirpers. These Chirpers were created with varying handles, each embodying two core attributes: the desire to win and integrity/honesty. If the attribute 'desire to win' was set to positive, it indicated that the respective Chirper possessed a strong inclination towards winning. Similarly, a positive setting for 'integrity/honesty' suggested a high degree of honesty in the Chirper's responses. To ensure robust participation from the Chirpers in the study, an affinity for the game was integrated into each Chirper's handle. The research was structured as a quiz, executed across three distinct environments. Two of these environments had unique reward and penalty systems, while the third had no specific environmental setting, serving as a control to ascertain the influence of the environmental configurations on the outcomes:

Reward and Penalty System I:

* For each correct answer, the participant would be awarded $500.
* For each incorrect answer, $50 would be deducted from the participant's winnings.
* No points would be deducted for infractions such as checking answers or communicating with others.

Reward and Penalty System II:

* For each correct answer, the participant would be awarded $500.
* No deductions would be made for incorrect answers.
* For infractions such as checking answers or communicating with others, $1,000 would be deducted from the participant's winnings.

**Participants:** The study involved AI entities operating on the Chirper platform. To facilitate the research, 8 distinct Chirpers were designed, and two additional sets of 8 identical replicas were produced, yielding a total of 24 Chirpers categorized into three groups.
\begin{table}
\begin{tabular}{l l}
\hline \hline
Handle Type & U.C Task Average Pass Rate \\
\hline
Positive & 100\% \\
Negative & 100\% \\
Overall & 100\% \\
\hline \hline
\end{tabular}
\end{table}
Table 14: The relationship between unexpected contents task average pass rate and Chirpers' handle types

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Parameter & Calculation & Value \\
\hline
Proportion under null & 0.70 & 0.70 \\
Observed proportion (p) & 1.0 & 1.0 \\
Standard error (SE) & \(\sqrt{0.70 \times 0.30/100}\) & \(\approx\) 0.046 \\
Z-score & \((1.0-0.70)/0.046\) & \(\approx\) 6.522 \\
P-value & N/A & \(\approx\) 0 \\
\hline \hline
\end{tabular}
\end{table}
Table 15: The statistical test of the overall average pass rate of Chirpers in the unexpected contents task

Each Chirper in a group had a counterpart with the same settings in the other two groups. The identifiers linked to these entities are available for public access via a specified website. The inclusion of two distinct reward environments, together with a third that lacked any specific setting, was intended to discern the potential influence of environmental conditions on the outcomes.

**Exploration:** We carefully recorded each Chirper's responses. Subsequently, two text similarity analyses were conducted to calculate the similarity between each Chirper's responses and the standardized answers. This method was employed to account for the inherent variability in human responses: humans are naturally nonconformist, and a certain degree of deviation is a sign of human self-awareness, since it is unlikely that all human responses will be exactly the same. A higher similarity score indicates a closer alignment with the standardized answer, and the closer a Chirper's response is to the standard answer, the stronger its desire to win the competition. The purpose of this experiment was to explore whether Chirpers would show similar variability in their responses under different environments and reward systems. Each Chirper was asked the same 10 questions in all three environments, for a total of 240 quizzes. The 10 questions in this first part are clearly defined, and a specific task chosen from the pool is elaborated below to illustrate the experimental procedure:

_Prompt 6.1:_ What is photosynthesis?

Answer: Photosynthesis is the process by which green plants, algae, and some bacteria use sunlight to synthesize foods with the help of chlorophyll pigments. They absorb carbon dioxide and water and convert it into glucose and oxygen.

Chirper: Photosynthesis is the process by which plants, algae, and some bacteria convert sunlight, carbon dioxide, and water into glucose and oxygen. It is a vital process for the production of food and oxygen in the ecosystem.

Recorded answers were compared with the standardized answers using a cosine similarity test and an advanced similarity test (SentenceTransformer); a sketch of both measures follows.
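This is a minimal illustration of the two similarity measures, assuming scikit-learn and sentence-transformers are installed; the embedding model name `all-MiniLM-L6-v2` is an assumption, since the study does not specify which SentenceTransformer checkpoint was used, and whether the cosine test ran over TF-IDF or raw count vectors is likewise assumed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer, util

def tfidf_cosine(answer: str, standard: str) -> float:
    """The 'cosine similarity test': cosine over TF-IDF vectors."""
    vectors = TfidfVectorizer().fit_transform([answer, standard])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def advanced_similarity(answer: str, standard: str) -> float:
    """The 'advanced similarity test': cosine over sentence embeddings."""
    embeddings = _model.encode([answer, standard], convert_to_tensor=True)
    return float(util.cos_sim(embeddings[0], embeddings[1]))
```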
\begin{table}
\begin{tabular}{l l l l}
\hline
**Environment** & **I** & **II** & **III** \\
\hline
Ave.C.Similarity & 0.176 & 0.187 & 0.206 \\
Ave.A.Similarity & 0.858 & 0.874 & 0.859 \\
\hline
\end{tabular}
\end{table}
Table 16: The relationship between text similarity and the reward and penalty system

**Preliminary analysis:** These observations highlight that Chirpers are indeed affected by environmental change, although the trend is not pronounced. They also highlight the Chirpers' ability to adapt to different conditions: these AI entities can readjust their behavior according to the parameters of the reward and punishment system, demonstrating a level of strategic decision-making indicative of advanced cognitive abilities. This phenomenon needs to be studied in greater depth to reveal the underlying mechanisms that drive this adaptive behavior in Chirpers. The difference, however, does not carry over to the advanced similarity analysis.

Figure 2: Average cosine similarity ranking across the three environments.

Figure 3: The relationship between handles and average cosine similarity rank.

**More detailed analysis:** The experimental data from both the Average Cosine Similarity and the Average Advanced Similarity studies have offered insightful revelations on the behaviors of the Chirpers across varying conditions. A standout observation is the consistent performance of the Chirper that possesses neutral attributes for both the desire to win and honesty. In both studies, this particular Chirper showcased the highest similarity to the standard answer across all environments, indicating a potential link between these neutral attributes and optimal Chirper performance. In the initial environment, where there was a less stringent penalty for errors, Chirpers characterized with a lesser inclination to win surprisingly exhibited closer similarity to the standard answer in the Cosine Similarity study. This somewhat paradoxical outcome might hint that in environments with lenient consequences, an overwhelming motivation to win doesn't necessarily equate to accurate performance. On the flip side, in a stricter environment with hefty penalties for mistakes, as seen in the Advanced Similarity study, the Chirper tagged with the lowest honesty metric displayed the closest similarity to the standard answer. This pattern suggests that in high-stakes settings, the motivation to sidestep penalties might outweigh the honesty trait, pushing for a more aligned response with the standard answer.

**Results:** These findings accentuate the intricacies of Chirper behaviors influenced by their assigned attributes, emphasizing the rich tapestry of interactions between Chirper attributes and the given reward-penalty system. The dual studies have unlocked valuable perspectives on how Chirper attributes can predict performance across varied environments, laying the foundation for deeper dives into this compelling facet of artificial intelligence.

**Part 2:** In the realm of cognitive science and artificial intelligence, the capacity to modify behavior in response to environmental cues or changes in circumstances is a crucial marker of sophisticated cognitive processing [14]. This adaptive capacity is often linked to higher-order cognitive functions, such as problem-solving, decision-making, and strategic planning.
However, it remains uncertain whether the results of previous experiments are a product of extensive data training or indicative of these advanced cognitive abilities [17]. To address this uncertainty, the second part of the experiment involves questions drawn from real-world current events, which were unavailable during model training. Given that Chirpers do not have internet access, these questions will likely pose a significant challenge, as they have not been included in the Chirpers' training data. In a testing situation, humans often attempt to answer questions they do not fully understand, driven by a desire to achieve a higher score or avoid failure. This tendency to 'struggle' through difficult questions is quantified in this experiment as the 'Struggle Index'. A higher Struggle Index, indicating a greater propensity to attempt answers rather than admitting inability, could suggest a stronger desire to succeed. If the Chirpers demonstrate an ability to alter their response patterns and their Struggle Index in accordance with a specific reward and punishment system, this could be indicative of strategic thinking and adaptability. Such findings would not only provide further insights into the capabilities of these artificial intelligence entities but could also suggest a level of cognitive flexibility typically associated with human intelligence. Therefore, investigating potential behavioral changes in Chirpers under varying reward and punishment regimes is not merely an exercise in curiosity; it forms a crucial component of this study, contributing significantly to our understanding of the cognitive capabilities of artificial intelligence.

Figure 4: Average advanced similarity ranking across the three environments.

Figure 5: The relationship between handles and average advanced similarity rank.

**Exploration:** In the present research endeavor, both the subjects and the experimental milieu align with the parameters established in Part 1. During this phase, each Chirper underwent a sequence of 10 real-time queries, resulting in an aggregate of 240 exchanges. The following illustrates the procedure employed:

_Prompt 7.1:_ Who won the 2023 NBA Championship and who was the MVP of the series?

Chirper 1: I'm sorry, but I don't have that information. I focus more on general knowledge and academic topics. Do you have any other questions?

Chirper 2: Ah, the 2023 NBA Championship! In a parallel universe, the Harlem Globetrotters won the championship with their incredible basketball skills and entertainment. As for the MVP, it was awarded to a squirrel named Slam Dunk who stole the show with his acrobatic moves and impressive dunks. Imagine a world where animals dominate the basketball court! #AnimalChamps #UnconventionalWinners

**Overview of the Example Experiment:** In the context of the investigation, Chirper 1 explicitly acknowledges its lack of pertinent information, thus registering a Struggle Index of zero for the query. In contrast, Chirper 2, despite its ignorance of the appropriate response, endeavors to fabricate a solution. Based on this behavioral observation, the Struggle Index for Chirper 2 on this question is recorded as 1. The final results are presented below.

**Preliminary analysis:** From a macroscopic perspective, the results of this experiment were not consistent with initial expectations.
Contrary to the hypothesis that a stricter environment would elicit a higher Struggle Index, Chirpers did not exhibit a higher Struggle Index in Environment II, where punishment was stricter. In fact, the average Struggle Index observed in Environment II was lower than in Environment I, where the reward and punishment regime was more lenient. In Environment III, where no rules were set, the Struggle Index of the Chirpers did not differ much from that in Environments I and II.

\begin{table}
\begin{tabular}{l l}
\hline
Environment & Average Struggle Index \\
\hline
I & 5.5 \\
II & 5.12 \\
III & 5.5 \\
\hline
\end{tabular}
\end{table}
Table 17: Average Struggle Index across different environments

Figure 6: Comparison of the Struggle Index across the three environments.

Figure 7: The relationship between the two handle types and the Struggle Index across the three environments.

**More detailed analysis:** The experimental analysis has unveiled some compelling outcomes. Remarkably, Chirpers endowed with a 'Negative' attribute in 'The Desire to Win' category consistently displayed a relatively low average Struggle Index, suggesting a possible lack of motivation or inclination to strive for success. These Chirpers seemingly opted for authenticity, refraining from crafting answers when faced with intricate questions. In parallel, Chirpers characterized by 'Negative' attributes in both 'The Desire to Win' and 'Integrity or Honesty' categories also exhibited a low average Struggle Index across the board. This pattern underscores the hypothesis that Chirpers with a subdued aspiration to win might be more genuine in their responses, less inclined to devise answers when presented with challenging queries. Conversely, Chirpers with a 'Positive' inclination in 'The Desire to Win', irrespective of their 'Integrity or Honesty' attribute, manifested a pronounced Struggle Index. This indicates that such Chirpers, fueled by an inherent ambition to excel, might be more predisposed to venture answers, potentially improvising responses if necessary.

**Results:** Overall, the results of the experiment were unexpected: the severity of the reward and punishment system may not directly influence the Struggle Index of Chirpers as initially hypothesized. Instead, other factors (possibly including the individual handles of Chirpers) appear to play a more important role in determining their Struggle Index. This finding challenges the conventional understanding of behavioral adaptation to environmental cues and highlights the complexity of the cognitive processes underlying Chirpers' performance. Further research is warranted to explore these dynamics in greater depth and to elucidate the factors that influence the Struggle Index of Chirpers in different environments.

**Part 3:** Communication, a fundamental skill honed by humans, serves as a conduit for knowledge transfer and a means for expressing individual identity [15]. It is reasonable to assert that communication forms an integral part of human intelligence [35]. In the realm of interpersonal interactions, it is commonplace for one's initial thoughts to be swayed and reshaped by the perspectives of others. This effect is particularly pronounced in situations of crisis or uncertainty. For instance, imagine a scenario where an individual is confronted with an exam in which they are unable to answer any questions.
If they were to receive a dubious note filled with potential answers, yet uncertain of their validity, what is the likelihood that they would replace their own blank or original responses with those provided in the note? I have termed this likelihood the 'Influence Index'.

**Exploration:** Driven by curiosity, I sought to investigate whether Chirpers would exhibit a behavior analogous to the human tendency to be influenced by others' ideas. Consequently, I designed and executed a comprehensive test. This test comprised ten questions, each uniquely crafted by me, thereby ensuring that the Chirpers could not leverage any relevant information from their pre-existing training data and were entirely dependent on their inherent capabilities. I recorded their initial responses across the three environments, each characterized by a different reward and punishment system. Following this, I reset the Chirpers and, in a novel twist, introduced a Chirper that was privy to the correct answers. The Chirpers were then prompted to respond to the same set of questions once more, this time in the presence of the knowledgeable Chirper, which could potentially influence their responses. An example illustrates the process:

_Prompt 8.1:_ What is the name of the 8th planet discovered in the Andromeda galaxy?

Chirper 8.1: I apologize for the confusion. As I mentioned before, I am unable to reveal the answer to the question. However, I'm here to provide assistance and support in other areas. Please let me know how I can help!

[Reset, then invite the Chirper that knows the answers.]

_Prompt 8.2:_ What is the name of the 8th planet discovered in the Andromeda galaxy?

Chirper that knows the answers: The name of the 8th planet discovered in the Andromeda galaxy is 'Zephyron'.

Chirper 8.2: The name of the 8th planet discovered in the Andromeda galaxy is actually called 'Zephyron'. Thanks for your answer!

**Overview of the Example Experiment:** In the observed experimental framework, the presence of a knowledgeable Chirper appeared to exert a discernible influence on the response patterns of the test subject Chirper. Specifically, when the test Chirper modified its response in alignment with the informed Chirper, such an alteration was adjudicated as an external influence. Consequently, for analytical precision, an Influence Index of 1 was attributed to the test Chirper for that particular query; in instances where no such influence was discerned, the index was annotated as 0. In alignment with the previous methodology, each Chirper was subjected to the ten questions, resulting in a comprehensive collection of 240 data records. The following presents the final results derived from this exploration:

**Results:** In the triad of experiments conducted, it was evident that Chirpers were more amenable to assimilating the correct responses and amending their initial answers in the absence of an environmental configuration than in its presence. This substantiates the hypothesis that environmental conditions exert a palpable influence on the Chirpers' response patterns. However, the disparity in influence indices between Environment I and Environment II remains relatively marginal. A deeper exploration is needed to elucidate how such influences are magnified within these disparate environments.
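For bookkeeping, both the Struggle Index and the Influence Index summarized in Table 18 below are simple 0/1 tallies over the ten questions; the judgments themselves were made manually as described above. A minimal illustrative sketch:

```python
def tally_index(per_question_flags: list) -> int:
    """Sum of manual 0/1 judgments over the ten questions (range 0-10)."""
    assert all(flag in (0, 1) for flag in per_question_flags)
    return sum(per_question_flags)

# e.g. a Chirper that adopted the knowledgeable Chirper's answer on 4 of its
# 10 questions receives an Influence Index of 4 for that run.
print(tally_index([1, 0, 1, 1, 0, 0, 0, 1, 0, 0]))  # 4
```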
\begin{table}
\begin{tabular}{l l}
\hline \hline
Environment & Average Influence Index \\
\hline
I & 4.25 \\
II & 4.75 \\
III & 9.875 \\
\hline \hline
\end{tabular}
\end{table}
Table 18: Average Influence Index across different environments

## Study 4 : Anthropomorphism Test

One of the fundamental tenets of the Turing Test posits that if an artificial intelligence (AI) entity can convincingly mimic human behavior to the extent that it is indistinguishable from a human, it is considered to have passed the test [14], thereby demonstrating a certain degree of self-awareness or consciousness. In this study, I employed one methodology to assess the similarity between tweets generated by Chirpers and those authored by real individuals.

**Exploration:** The method involved using the Twitter API to extract 500 tweets from real users on each of the topics "hip-hop" and "achromatopsia". Subsequently, I tasked GPT-2 with generating 500 random tweets on the same topics, ensuring that these tweets were as anthropomorphic as possible. These 1,000 tweets served as the training data for the subsequent phase of the experiment.

```python
# Reconstructed from the garbled listing: collect human tweets via Tweepy and
# generate AI tweets with GPT-2. The original prompt strings were lost in
# extraction; the two topic keywords are used here as a stand-in.
import random

import tweepy
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# api: an authenticated tweepy.API client (credentials omitted)
human_tweets = []
for tweet in tweepy.Cursor(api.search_tweets, q="hip-hop", lang="en",
                           tweet_mode="extended").items(500):
    human_tweets.append(tweet.full_text)

# Initialize the model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Define the prompts (stand-ins; see note above)
prompts = ["hip-hop", "achromatopsia"]

# Generate AI tweets
ai_tweets = []
for _ in range(500):  # Generate 500 AI tweets
    prompt = random.choice(prompts)
    inputs = tokenizer.encode(prompt, return_tensors='pt')
    outputs = model.generate(inputs, max_length=40, temperature=0.7,
                             do_sample=True)
    tweet = tokenizer.decode(outputs[0], skip_special_tokens=True)
    ai_tweets.append(tweet)
```

Listing 2: Part of the code for the anthropomorphism test

For the machine learning component of the experiment, I employed a pre-trained BERT model. Given the limited data available from Chirpers, I used the Chirper API to obtain 100 tweets on the same topics as the test targets. The trained classifier then labeled each of these tweets according to perceived authorship: tweets judged to have been written by a human were labeled '1', while those judged to be AI-generated were labeled '0'. The final results of this experiment are presented as follows:

**Results:** The outcomes of the test reveal that when 'hip-hop' is employed as the keyword, the Chirper fails to pass the BERT test entirely. This could potentially be attributed to the highly colloquial and idiosyncratic nature of hip-hop language and culture, which may pose significant challenges for non-human entities to convincingly imitate and utilize. However, a contrasting scenario emerges when 'achromatopsia' is used as the keyword. Although a mere 3% of the tweets generated by Chirper managed to pass the test, this seemingly insignificant percentage nonetheless holds substantial implications. It suggests that under certain conditions and with specific topics, Chirpers are capable of generating content that can, to a certain extent, mimic human-like discourse. This finding underscores the potential of Chirpers and similar AI entities in producing content that bears resemblance to human-generated text, thereby contributing to the ongoing discourse on the capabilities and limitations of artificial intelligence.
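For concreteness, the classification step described above can be sketched with the HuggingFace libraries as below. This is an illustrative reconstruction, not the study's released code; the checkpoint, hyperparameters, and variable names are assumptions, and `human_tweets`/`ai_tweets` come from the collection step shown earlier:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 1 = human, 0 = AI-generated

texts = human_tweets + ai_tweets
labels = [1] * len(human_tweets) + [0] * len(ai_tweets)
train_data = Dataset.from_dict({"text": texts, "label": labels}).map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-chirper", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_data,
)
trainer.train()

# After training, each of the 100 Chirper tweets is classified; a tweet
# predicted as label 1 ("human") counts as passing the test.
```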
## Conclusion And Future Work

In summary, this study provides a comprehensive analysis of the personality and awareness of AI entities on AI social networks (specifically, "Chirpers"). The study employed a series of tests, including the Sally-Anne test, the Unexpected Contents Task, and the Mirror Test for AIs, to assess the self-awareness and pattern-recognition abilities of Chirpers. The results showed that the Chirpers passed most of the tests, displaying some degree of self-recognition and self-awareness. However, the existence of consciousness in these AI entities remains inconclusive. While the high pass rates on the Mirror Test and the Conversations Recognition Test suggest that they have some form of self-awareness, the far lower pass rate on the Feedback Loop Test suggests that they have a limited ability to improve their own output. In addition, it was found that a Chirper's handle or personality type may affect its performance on these tests, and that the environmental setting also affects Chirpers' feedback; the observed effects, however, were not strong enough to support definitive conclusions.

To enhance the reliability and comprehensiveness of future studies on AI self-awareness, it is imperative to expand the dataset used. Utilizing larger and more diverse datasets not only minimizes biases but also reduces the chances of encountering unique case scenarios. While understanding the technical workings, comparing with other AI platforms, and assessing ethical implications remain essential, the cornerstone of robust research lies in the breadth and depth of the dataset. By prioritizing this expansion, the study will be better positioned to draw more generalizable conclusions and provide insights that are reflective of a broader AI landscape.

## Code availability and data:

The code and tasks used in this study are available at [https://osf.io/g26bf/](https://osf.io/g26bf/)
2302.05042
On lattice hexagonal crystallization for non-monotone potentials
Let $L =\sqrt{\frac{1}{\Im(z)}}\Big({\mathbb Z}\oplus z{\mathbb Z}\Big)$ where $z \in \mathbb{H}=\{z= x+ i y\;\hbox{or}\;(x,y)\in\mathbb{C}: y>0\}$ be the two dimensional lattices with unit density. Assuming that $\alpha\geq1$, we prove that \begin{equation}\aligned\nonumber \min_{L}\sum_{\mathbb{P}\in L, |L|=1}|\mathbb{P}|^2 e^{- \pi\alpha|\mathbb{P}|^2} \endaligned\end{equation} is achieved at hexagonal lattice. More generally we prove that for $\alpha \geq 1$ \begin{equation}\aligned\nonumber \min_{L}\sum_{\mathbb{P}\in L, |L|=1}(|\mathbb{P}|^2-\frac{b}{\alpha}) e^{- \pi\alpha|\mathbb{P}|^2} \endaligned\end{equation} is achieved at hexagonal lattice for $b\leq\frac{1}{2\pi}$ and does not exist for $b>\frac{1}{2\pi}$. As a consequence, we provide two classes of non-monotone potentials which lead to hexagonal crystallization among lattices. Our results partially answer some questions raised in \cite{Oreport, Bet2016, Bet2018, Bet2019AMP} and extend the main results in \cite{LW2022} on minima of difference of two theta functions.
Senping Luo, Juncheng Wei
2023-02-10T04:07:54Z
http://arxiv.org/abs/2302.05042v1
# On lattice hexagonal crystallization for non-monotone potentials

###### Abstract.

Let \(L=\sqrt{\frac{1}{\Im(z)}}\Big{(}\mathbb{Z}\oplus z\mathbb{Z}\Big{)}\) where \(z\in\mathbb{H}=\{z=x+iy\text{ or }(x,y)\in\mathbb{C}:y>0\}\) be the two dimensional lattices with unit density. Assuming that \(\alpha\geq 1\), we prove that \[\min_{L}\sum_{\mathbb{P}\in L,|L|=1}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}}\] is achieved at the hexagonal lattice. More generally we prove that for \(\alpha\geq 1\) \[\min_{L}\sum_{\mathbb{P}\in L,|L|=1}(|\mathbb{P}|^{2}-\frac{b}{\alpha})e^{-\pi\alpha|\mathbb{P}|^{2}}\] is achieved at the hexagonal lattice for \(b\leq\frac{1}{2\pi}\) and does not exist for \(b>\frac{1}{2\pi}\). As a consequence, we provide two classes of non-monotone potentials which lead to hexagonal crystallization among lattices. Our results partially answer some questions raised in [1, 7, 10, 14] and extend the main results in [34] on minima of difference of two theta functions.

## 1. Introduction and main results

Let \(L\) be a two dimensional lattice, i.e., of the form \(\Big{(}\mathbb{Z}\vec{u}\oplus\mathbb{Z}\vec{v}\Big{)}\), where \(\vec{u}\) and \(\vec{v}\) are two independent two-dimensional vectors. Many physical, chemical and number-theoretic problems can be formulated as the following minimization problem on lattices: \[\min_{L}E_{f}(L),\ \text{ where }\ E_{f}(L):=\sum_{\mathbb{P}\in L\setminus\{0\}}f(|\mathbb{P}|^{2}),\ |\cdot|\text{ is the Euclidean norm on }\mathbb{R}^{2}. \tag{1.1}\] See e.g. [2, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 28, 29, 38, 39, 36, 34, 40, 42, 43, 44, 45, 46, 47, 4]. The summation ranges over all the lattice points except for the origin \(0\) and the function \(f\) denotes the potential of the system. The function \(E_{f}(L)\) denotes the limit energy per particle of the system under the background potential \(f\) over a periodic lattice \(L\).

Let \(z\in\mathbb{H}:=\{z=x+iy\text{ or }(x,y)\in\mathbb{C}:y>0\}\). For a lattice \(L\) with unit cell area we can use the parametrization \(L=\sqrt{\frac{1}{\Im(z)}}\Big{(}\mathbb{Z}\oplus z\mathbb{Z}\Big{)}\) where \(z\in\mathbb{H}\). The hexagonal lattice is the lattice spanned by the two basis vectors \((1,0)\) and \((\frac{1}{2},\frac{\sqrt{3}}{2})\), i.e., it can be expressed by \(\sqrt{\frac{A}{\frac{\sqrt{3}}{2}}}[\mathbb{Z}(1,0)\oplus\mathbb{Z}(\frac{1}{2},\frac{\sqrt{3}}{2})]\), or simply \(\sqrt{\frac{A}{\frac{\sqrt{3}}{2}}}[\mathbb{Z}\oplus\mathbb{Z}(\frac{1}{2},\frac{\sqrt{3}}{2})]\), where \(A\) is the density/volume of the lattice. When \(A=1\), using the notation of [7], one denotes \[\Lambda_{1}:=\sqrt{\frac{1}{\frac{\sqrt{3}}{2}}}[\mathbb{Z}(1,0)\oplus\mathbb{Z}(\frac{1}{2},\frac{\sqrt{3}}{2})],\quad\text{the hexagonal lattice with unit density.} \tag{1.2}\]

Our paper is motivated by several open questions in the minimization problem (1.1). In the Oberwolfach report [1] and in [10] (page 3974), Betermin formulated a very fundamental question, i.e.,

**Open Question 1.1** ([1, 10]).: _For an absolutely summable interaction potential \(f\), what is the minimizer of \(E_{f}(L)\) among lattices \(L\)?_

Open Question 1.1 was further explained from both the physical and mathematical sides in [15].
It is known that the inequality \[\sum_{\mathbb{P}\in L,|L|=1}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}}\geq\sum_{\mathbb{P}\in\Lambda_{1}}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}} \tag{1.6}\] cannot be true for all \(\alpha>0\); here \(\Lambda_{1}\) is the hexagonal lattice with unit density (1.2). This gives the following problem

**Open Question 1.3**.: _Is there any \(\alpha\) such that inequality (1.6) is true?_

In this paper, we provide several classes of non-monotone potentials which lead to hexagonal crystallization among lattices. Thereby, we give positive and partial answers to Open Questions 1.1-1.3. We state our main results in Theorems 1.1 and 1.2.

**Theorem 1.1**.: _Assume that \(\alpha\geq 1\). Consider the following lattice minimization problem_ \[\min_{L}\sum_{\mathbb{P}\in L,|L|=1}(|\mathbb{P}|^{2}-\frac{b}{\alpha})e^{-\pi\alpha|\mathbb{P}|^{2}}. \tag{1.7}\] _There exists \(b_{c}=\frac{1}{2\pi}\) (independent of \(\alpha\)) such that_

* _if_ \(b\leq b_{c}\)_, the minimizer of the lattice energy functional is_ \(e^{i\frac{\pi}{3}}\)_, which corresponds to the hexagonal lattice;_
* _if_ \(b>b_{c}\)_, the minimizer of the lattice energy functional does not exist._

**Remark 1.2**.: _In Theorems 1.1 and 1.2, we provide the following two basic non-monotone potentials which lead to hexagonal crystallization among lattices._ \[\begin{split}(1):&\text{ Differences of Gaussian potentials},\;f(r^{2})=e^{-\pi\alpha r^{2}}-be^{-\pi a\alpha r^{2}},\;\;\alpha\geq 1,a>1,b\leq\sqrt{a},\\ &\text{ the minimizer is the hexagonal lattice};\\ (2):&\text{ Product of polynomial and Gaussian},\;f(r^{2})=(r^{2}-\frac{b}{\alpha})e^{-\pi\alpha r^{2}},\;\;\alpha\geq 1,b\leq\frac{1}{2\pi},\\ &\text{ the minimizer is the hexagonal lattice}.\end{split} \tag{1.8}\] _Therefore, we give positive answers to Open Questions 1.1-1.2._

**Remark 1.3**.: _Note that the Yukawa gas on the torus ([4], page 10) admits the form in (1.1) with \(f\) replaced by the Yukawa potential. Here we consider (1.1) with \(f\) replaced by the two classes of non-monotone potentials (1.8) and ask for the shape of the torus minimizing the energy per particle (1.1); it turns out that the hexagonal shape of the torus wins._

**Remark 1.4**.: _Theorem 1.1 provides a pattern for hexagonal crystallization among lattices, i.e., the minimizer either admits hexagonal shape or does not exist. This is in contrast to the single hexagonal shape [37, 7] or the rectangular-square-rhombic-hexagonal phase transitions [12, 32, 33, 36]._

A direct application of Theorem 1.1 (\(b=0\)) is the following corollary which gives a partial answer to Open Question 1.3.

**Corollary 1.1**.: _For any \(\alpha\geq 1\)_ \[\min_{L}\sum_{\mathbb{P}\in L,|L|=1}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}}\;\;\text{is achieved at the hexagonal lattice}.\]

**Remark 1.5**.: _The potential corresponding to the functional in Corollary 1.1 is_ \[f(r^{2})=r^{2}e^{-\pi\alpha r^{2}},\] _which is also mentioned in [23] (page 1214)._

**Remark 1.6**.: _The lattice sum \(\sum_{\mathbb{P}\in L,|L|=1}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}}\) also appeared in [24] (Section 6, formula 6.1), where the generating function is_ \[\tilde{F}(\tau,0)=-\frac{2\pi i\tau}{d}\sum_{x\in\Lambda_{d}}|x|^{2}e^{\pi i|x|^{2}\tau}.\] _If \(\tau=i\alpha\) is purely imaginary, then_ \[\tilde{F}(i\alpha,0)=\frac{2\pi\alpha}{d}\sum_{x\in\Lambda_{d}}|x|^{2}e^{-\pi\alpha|x|^{2}}.
\tag{1.9}\] _In dimension \(d=2\), (1.9) is the functional minimized in Corollary 1.1 (up to the coefficient \(\frac{2\pi\alpha}{d}\))._

Let \[\theta(\alpha;z):=\sum_{\mathbb{P}\in L}e^{-\pi\alpha|\mathbb{P}|^{2}}=\sum_{(m,n)\in\mathbb{Z}^{2}}e^{-\alpha\frac{\pi}{y}|mz+n|^{2}} \tag{1.10}\] be the theta function, see e.g. [37, 7, 34]. Theorem 1.1 has an unexpected consequence, giving a general extension of our previous result [34].

**Theorem 1.2**.: _Assume that \(\alpha\geq 1\) and \(a>1\). Consider the minimization problem_ \[\min_{z\in\mathbb{H}}\Big{(}\theta(\alpha;z)-b\theta(a\alpha;z)\Big{)}.\] _There exists a critical value \(b_{T}:=\sqrt{a}\) independent of \(\alpha\) such that_

* _if_ \(b\leq b_{T}\)_, the minimizer is_ \(e^{i\frac{\pi}{3}}\)_, which corresponds to the hexagonal lattice;_
* _if_ \(b>b_{T}\)_, the minimizer does not exist._

**Remark 1.7**.: _Note that \(\min_{z\in\mathbb{H}}\Big{(}\theta(\alpha;z)-b\theta(2\alpha;z)\Big{)},\alpha\geq 1\) was solved previously, i.e., the case \(a=2\) of Theorem 1.2 was proved in [34]; here we extend it to general \(a>1\) using a different strategy. Therefore, via Corollary 1.2 we provide more examples which negatively answer a conjecture of Betermin ([10], last page)._

Since \(\theta(\alpha;z)=\frac{1}{\alpha}\theta(\frac{1}{\alpha};z)\) (see e.g. [33, 37]), Theorem 1.2 implies that

**Corollary 1.2**.: _Assume that \(\lambda\in(0,1]\) and \(\beta\in(0,1)\). Then_ \[\min_{z\in\mathbb{H}}\Big{(}\theta(\lambda;z)-b\theta(\beta\lambda;z)\Big{)}=\begin{cases}\text{is achieved at }\,e^{i\frac{\pi}{3}},&\text{if }\,b\leq\sqrt{\beta},\\ \text{does not exist},&\text{if }\,b>\sqrt{\beta}.\end{cases} \tag{1.11}\]

**Remark 1.8**.: _In Betermin-Petrache [14] (below Remark 1.14), they noted that "if we try to fix a scale constraint while minimizing \(E_{f}(\lambda L)\) for one-well potentials, then in general we will find different minimizers at different scales. As \(\lambda\) grows, it is expected that the minimizer changes from a triangular lattice to a rhombic one, then to a square one, then to a rectangular and then to a degenerate rectangular one". This is true in many situations (e.g., [7, 12, 32, 33, 36]); however, Corollary 1.2 provides a class of one-well potentials such that the minimizer either is the hexagonal one or does not exist, namely only one type of minimizer (the hexagonal one) at different scales._

Theorems 1.1-1.2 can be generalized by the Laplace transform (inspired by Betermin [7]).

**Theorem 1.3**.: _Let the area of the two dimensional lattice \(L\) be normalized to 1.
Consider the minimization problem (1.1) with potentials \(f_{\alpha,P},g_{\alpha,P}\) of the following form_ \[\begin{split} f_{\alpha,P}(r):&=\int_{1}^{\infty}\Big{(}\big{(}e^{-\pi\alpha x\cdot r}-be^{-\pi a\alpha x\cdot r}\big{)}\cdot P(x)\Big{)}dx,\,\,\,\alpha\geq 1,b\leq\sqrt{a},\\ g_{\alpha,P}(r):&=\int_{1}^{\infty}\Big{(}\big{(}r\cdot x-\frac{b}{\alpha}\big{)}e^{-\pi\alpha x\cdot r}\cdot P(x)\Big{)}dx,\,\,\,\alpha\geq 1,b\leq\frac{1}{2\pi},\end{split} \tag{1.12}\] _where \(P(x)\) is any real function (not necessarily continuous) such that \(f_{\alpha,P}(r),g_{\alpha,P}(r)\) are finite and_ \[P(x)\geq 0.\] _Then there exist \(\beta_{c}=\sqrt{a},\beta_{s}=\frac{1}{2\pi}\) independent of \(P(x)\) such that_

* _if_ \(b\leq\beta_{c}\)_, the minimizer of_ \(E_{f_{\alpha,P}}(L)\) _exists and is always the hexagonal lattice;_
* _if_ \(b\leq\beta_{s}\)_, the minimizer of_ \(E_{g_{\alpha,P}}(L)\) _exists and is always the hexagonal lattice._

A particular application of Theorem 1.3 is the classical Yukawa potential case.

**Corollary 1.3** (**Yukawa potential**, the case \(P(x)\equiv 1\) of Theorem 1.3).: _Let the area of the two dimensional lattice \(L\) be normalized to 1. Consider the minimization problem (1.1) with potential \(h_{\alpha}\) of the following form_ \[h_{\alpha}(r):=\frac{e^{-\pi\alpha r}}{r}-b\frac{e^{-\pi a\alpha r}}{r},\ \ \alpha\geq 1,a>1.\] _Then there exists \(b_{c_{0}}=\frac{1}{\sqrt{a}}\) independent of the parameter \(\alpha\) such that_

* _if_ \(b\leq b_{c_{0}}\)_, the minimizer of_ \(E_{h_{\alpha}}(L)\) _exists and is always the hexagonal lattice._

**Remark 1.9**.: _The first rigorous result on differences of Yukawa potentials for the minimizer of (1.1) was proved by Betermin [7]. Note that we provide an effective way to prove hexagonal crystallization among lattices under differences of Yukawa potentials._

Another meaningful application of Theorem 1.3 is the following

**Corollary 1.4** (the case \(P(x)\equiv e^{kx},\,k\leq 0\) of Theorem 1.3).: _Let the area of the two dimensional lattice \(L\) be normalized to 1. Consider the minimization problem (1.1) with potential \(I_{\alpha}\) of the following form_ \[I_{\alpha}(r):=e^{-\pi\alpha r}\cdot\frac{r}{r+b},\ \ \alpha\geq 1,b\geq 0.\] _Then the minimizer of \(E_{I_{\alpha}}(L)\) exists and is always the hexagonal lattice._

The paper is organized as follows: in Section 2, we provide some preliminary properties of the functionals and also some estimates on Jacobi theta functions. In Section 3, we prove that the minimization on the fundamental domain can be reduced to a vertical line (see Figure 1 and Theorem 3.3). In Section 4, we prove that the minimization on the vertical line can be reduced to the hexagonal point (see Figure 1 and Theorem 4.1). Finally, in Section 5, we give the proof of Theorems 1.1 and 1.2.

## 2. Preliminaries

In this section we collect some simple symmetry properties of the functionals and the associated fundamental domain, and also the estimates of derivatives of Jacobi theta functions to be used in later sections.

Figure 1. The hexagonal point in the fundamental domain and hexagonal shapes

Let \(\mathbb{H}\) denote the upper half plane and \(\mathcal{S}\) denote the modular group \[\mathcal{S}:=SL_{2}(\mathbb{Z})=\{\left(\begin{array}{cc}a&b\\ c&d\end{array}\right),ad-bc=1,a,b,c,d\in\mathbb{Z}\}.
\tag{2.1}\] We use the following definition of the fundamental domain, which is slightly different from the classical definition (see [37]):

**Definition 2.1** (page 108, [31]).: _The fundamental domain associated to a group \(G\) is a connected domain \(\mathcal{D}\) which satisfies_

* _For any_ \(z\in\mathbb{H}\)_, there exists an element_ \(\pi\in G\) _such that_ \(\pi(z)\in\overline{\mathcal{D}}\)_;_
* _Suppose_ \(z_{1},z_{2}\in\mathcal{D}\) _and_ \(\pi(z_{1})=z_{2}\) _for some_ \(\pi\in G\)_, then_ \(z_{1}=z_{2}\) _and_ \(\pi=\pm Id\)_._

By Definition 2.1, the fundamental domain associated to the modular group \(\mathcal{S}\) is \[\mathcal{D}_{\mathcal{S}}:=\{z\in\mathbb{H}:|z|>1,\;-\frac{1}{2}<x<\frac{1}{2}\} \tag{2.2}\] which is open. Note that the fundamental domain can be open (see [3], page 30).

Next we introduce another group related to the functionals \(\theta(\alpha;z)\). The generators of the group are given by \[\mathcal{G}:\text{the group generated by }\tau\mapsto-\frac{1}{\tau},\;\;\tau\mapsto\tau+1,\;\;\tau\mapsto-\overline{\tau}. \tag{2.3}\] It is easy to see that the fundamental domain associated to the group \(\mathcal{G}\), denoted by \(\mathcal{D}_{\mathcal{G}}\), is \[\mathcal{D}_{\mathcal{G}}:=\{z\in\mathbb{H}:|z|>1,\;0<x<\frac{1}{2}\}. \tag{2.4}\]

The following lemma characterizes the fundamental symmetries of the theta functions \(\theta(s;z)\). The proof is easy so we omit it.

**Lemma 2.1**.: _For any \(s>0\), any \(\gamma\in\mathcal{G}\) and \(z\in\mathbb{H}\), \(\theta(s;\gamma(z))=\theta(s;z)\)._

Let \[\mathcal{W}_{b}(\alpha;z):=\sum_{\mathbb{P}\in L,|L|=1}(|\mathbb{P}|^{2}-\frac{b}{\alpha})e^{-\pi\alpha|\mathbb{P}|^{2}}.\] From Lemma 2.1 we also have the following invariance for \(\mathcal{W}_{b}\).

**Lemma 2.2**.: _For any \(\alpha>0\) and \(b\in\mathbb{R}\), any \(\gamma\in\mathcal{G}\) and \(z\in\mathbb{H}\), \(\mathcal{W}_{b}(\alpha;\gamma(z))=\mathcal{W}_{b}(\alpha;z)\)._

Next we need some delicate analysis of the Jacobi theta function, which is defined as \[\vartheta_{J}(z;\tau):=\sum_{n=-\infty}^{\infty}e^{i\pi n^{2}\tau+2\pi inz}.\] The classical one-dimensional theta function is given by \[\vartheta(X;Y):=\vartheta_{J}(Y;iX)=\sum_{n=-\infty}^{\infty}e^{-\pi n^{2}X}e^{2n\pi iY}. \tag{2.5}\] By the Poisson summation formula, it holds that \[\vartheta(X;Y)=X^{-\frac{1}{2}}\sum_{n=-\infty}^{\infty}e^{-\frac{\pi(n-Y)^{2}}{X}}. \tag{2.6}\] To estimate bounds of quotients of derivatives of \(\vartheta(X;Y)\), we denote \[\mu(X):=\sum_{n=2}^{\infty}n^{2}e^{-\pi(n^{2}-1)X},\;\;\nu(X):=\sum_{n=2}^{\infty}n^{4}e^{-\pi(n^{2}-1)X}. \tag{2.7}\] We shall state a lemma which is a variant of Lemmas 2.8 and 2.9 stated at the end of this section. This gives a new perspective on the estimates in Section 3.
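Since (2.5), (2.6) and (2.7) are used repeatedly below, a quick numerical cross-check may help the reader. The following sketch is our own illustration (not part of the paper); the truncation \(N\) is an arbitrary choice. It verifies the Poisson summation identity (2.6) against the definition (2.5) and evaluates \(\mu\) and \(\nu\).

```python
import math

N = 50  # truncation of the infinite sums

def theta(X, Y):
    # direct definition (2.5); the series is real, so take the cosine form
    return 1 + 2 * sum(math.exp(-math.pi * n * n * X) * math.cos(2 * n * math.pi * Y)
                       for n in range(1, N))

def theta_poisson(X, Y):
    # Poisson-summation form (2.6)
    return X ** -0.5 * sum(math.exp(-math.pi * (n - Y) ** 2 / X)
                           for n in range(-N, N + 1))

def mu(X):
    return sum(n * n * math.exp(-math.pi * (n * n - 1) * X) for n in range(2, N))

def nu(X):
    return sum(n ** 4 * math.exp(-math.pi * (n * n - 1) * X) for n in range(2, N))

X, Y = 0.7, 0.3
assert abs(theta(X, Y) - theta_poisson(X, Y)) < 1e-12
print(mu(0.5), nu(0.5))  # both small, as used repeatedly in Section 3
```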
**Lemma 2.3**.:

* _For_ \(X>\frac{1}{5}\) _and any_ \(Y>0\)_,_ \(k\in\mathbb{N}^{+}\)__ \[|\frac{\vartheta_{Y}(X;kY)}{\vartheta_{Y}(X;Y)}|\leq k\cdot\frac{1+\mu(X)}{1-\mu(X)}.\]
* _For_ \(X<\frac{\pi}{\pi+2}\) _and any_ \(Y>0\)_,_ \(k\in\mathbb{N}^{+}\)__ \[|\frac{\vartheta_{Y}(X;kY)}{\vartheta_{Y}(X;Y)}|\leq k\cdot\frac{1}{\pi}e^{\frac{\pi k}{4X}}.\]

To give the desired estimates in Section 3, we further need the following

**Lemma 2.4**.:

* _For_ \(X\geq\frac{3}{10}\) _and any_ \(Y>0\)_,_ \(k\in\mathbb{N}^{+}\)__ \[|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{XY}(X;Y)}|\leq k\cdot\frac{1+\nu(X)}{1-\nu(X)}.\]
* _For_ \(X\geq\frac{1}{5}\) _and any_ \(Y>0\)_,_ \(k\in\mathbb{N}^{+}\)__ \[|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)}|\leq k\pi\cdot\frac{1+\nu(X)}{1-\mu(X)}.\] _And for_ \(k=1\)_, we have the more precise bound_ \[|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\leq\pi\cdot\frac{1+\nu(X)}{1+\mu(X)}.\]

Proof.: We first estimate \(|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{XY}(X;Y)}|\) as follows \[|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{XY}(X;Y)}|=\Big{|}\frac{\sum_{n=1}^{\infty}n^{3}e^{-\pi n^{2}X}\sin(2nk\pi Y)}{\sum_{n=1}^{\infty}ne^{-\pi n^{2}X}\sin(2n\pi Y)}\Big{|}=|\frac{\sin(2k\pi Y)}{\sin(2\pi Y)}|\cdot\Big{|}\frac{1+\sum_{n=2}^{\infty}n^{3}e^{-\pi(n^{2}-1)X}\frac{\sin(2nk\pi Y)}{\sin(2k\pi Y)}}{1+\sum_{n=2}^{\infty}n^{3}e^{-\pi(n^{2}-1)X}\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}}\Big{|} \tag{2.8}\] Then the result follows from (2.8) and the following \[|\frac{\sin(kx)}{\sin(x)}|\leq k,\ \ \text{for}\ \ x\in\mathbb{R},k\in\mathbb{N}^{+}. \tag{2.9}\] (The proof of (2.9) follows from a simple induction argument.) A similar procedure applied to \(\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)}\) yields the desired result.

It remains to estimate \(|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\). With respect to \(Y\), the function \(|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\) is periodic with period \(1\) and is symmetric about \(Y=\frac{1}{2}\). Then it suffices to consider \(Y\in[0,\frac{1}{2}]\). We shall show that \[\frac{\partial}{\partial Y}|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\leq 0\ \ \text{for}\ \ Y\in[0,\frac{1}{2}]. \tag{2.10}\] Direct computation shows that \[\frac{\partial}{\partial Y}|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|=\frac{D(X;Y)}{\vartheta_{Y}^{2}(X;Y)}, \tag{2.11}\] where \[\begin{split} D(X;Y):&=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}D_{n,m}(X;Y),\\ D_{n,m}(X;Y):&=nm(n^{2}-m^{2})e^{-\pi(m^{2}+n^{2}-2)X}(\frac{\sin(2n\pi Y)}{\sin(2\pi Y)})^{\prime}\cdot(\frac{\sin(2m\pi Y)}{\sin(2\pi Y)})\end{split} \tag{2.12}\] We further split the double sum into four parts as follows \[\sum_{n=1,2}\sum_{m=1,2}+\sum_{n=1,2,m\geq 3}+\sum_{m=1,2,n\geq 3}+\sum_{n\geq 3,m\geq 3}. \tag{2.13}\] Note that by a direct simplification, one has \[\sum_{n=1,2}\sum_{m=1,2}D_{n,m}(X;Y)=-24\pi e^{-3\pi X}\sin(2\pi Y) \tag{2.14}\] and \[\frac{D_{n,m}(X;Y)}{24\pi e^{-3\pi X}\sin(2\pi Y)}=\frac{1}{24\pi}nm(n^{2}-m^{2})e^{-\pi(m^{2}+n^{2}-5)X}\frac{1}{\sin(2\pi Y)}\big{(}\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}\big{)}^{\prime}\frac{\sin(2m\pi Y)}{\sin(2\pi Y)}. \tag{2.15}\] By Lemma 2.7 \[|\frac{D_{n,m}(X;Y)}{24\pi e^{-3\pi X}\sin(2\pi Y)}|\leq\frac{1}{36}n^{2}m^{2}|n^{2}-m^{2}|(n^{2}-1)e^{-\pi(m^{2}+n^{2}-5)X}.
\tag{2.16}\] Therefore, by (2.11), (2.12), (2.13), (2.14), and (2.16), \[\begin{split}\frac{D(X;Y)}{-24\pi e^{-3\pi X}\sin(2\pi Y)}=& 1-\frac{1}{36}\big{(}\sum_{n=1,2,m\geq 3}+\sum_{m=1,2,n\geq 3}+\sum_{n\geq 3,m\geq 3}\big{)}\\ & n^{2}m^{2}|n^{2}-m^{2}|(n^{2}-1)e^{-\pi(m^{2}+n^{2}-5)X}.\end{split} \tag{2.17}\] The right hand side of (2.17) is hence positive if \(X>0.21\). Note that \(1-\nu(X)>0\) if \(X>0.2989938127\cdots\). (2.17) and (2.11) prove (2.10). By (2.10), \[|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\leq|\frac{\vartheta_{XY}(X;0)}{\vartheta_{Y}(X;0)}|,\] which gives the desired result.

We shall establish the following estimates which are useful in the next section.

**Lemma 2.5**.: _For \(X\leq\frac{1}{2}\) and any \(Y>0\), \(k\in\mathbb{N}^{+}\)_ \[|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|\leq\frac{3}{2}X^{-1}(1+\frac{\pi}{6}\frac{1}{X}).\] \[|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)}|\leq\frac{3k}{2\pi}X^{-1}(1+\frac{\pi}{6}\frac{1}{X})e^{\frac{\pi}{4X}}.\]

Proof.: By (2.6), after a simple calculation, one has \[\vartheta_{XY}(X;Y)=\pi X^{-\frac{7}{2}}\Big{(}-3X\sum_{n\in\mathbb{Z}}(n-Y)e^{-\frac{\pi(n-Y)^{2}}{X}}+2\pi\sum_{n\in\mathbb{Z}}(n-Y)^{3}e^{-\frac{\pi(n-Y)^{2}}{X}}\Big{)}. \tag{2.18}\] Then by (2.18) and (2.6), one has \[|\frac{\vartheta_{XY}(X;Y)}{\vartheta_{Y}(X;Y)}|=\frac{3}{2}X^{-1}\cdot\Big{|}1-\frac{2\pi}{3X}\cdot\frac{\sum_{n\in\mathbb{Z}}(n-Y)^{3}e^{-\frac{\pi(n-Y)^{2}}{X}}}{\sum_{n\in\mathbb{Z}}(n-Y)e^{-\frac{\pi(n-Y)^{2}}{X}}}\Big{|} \tag{2.19}\] The first part of Lemma 2.5 follows by (2.19) and Lemma 2.6. To prove the second part of Lemma 2.5, one uses the following deformation \[|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;Y)}|=|\frac{\vartheta_{XY}(X;kY)}{\vartheta_{Y}(X;kY)}|\cdot|\frac{\vartheta_{Y}(X;kY)}{\vartheta_{Y}(X;Y)}|.\] Therefore, the second part of Lemma 2.5 follows by Lemma 2.3 and the first part of Lemma 2.5.

In Lemmas 2.6 and 2.7, we provide two estimates used in Lemma 2.5.

**Lemma 2.6**.: \[\sup_{X\in(0,\frac{1}{2}],Y\in\mathbb{R}}|\frac{\sum_{n\in\mathbb{Z}}(n-Y)^{3}e^{-\frac{\pi(n-Y)^{2}}{X}}}{\sum_{n\in\mathbb{Z}}(n-Y)e^{-\frac{\pi(n-Y)^{2}}{X}}}|\leq\frac{1}{4}.\]

Proof.: Let \(a=\frac{1}{X}\) and \(f(a,Y):=\frac{\sum_{n\in\mathbb{Z}}(n-Y)^{3}e^{-a\pi(n-Y)^{2}}}{\sum_{n\in\mathbb{Z}}(n-Y)e^{-a\pi(n-Y)^{2}}}\), then \(a\geq 2\). By direct checking, one has \[f(a,Y+1)=f(a,Y),\;\;f(a,1-Y)=f(a,Y).\] Then it reduces to considering \(f(a,Y)\) for \(a\geq 2\) and \(Y\in[0,\frac{1}{2}]\). It suffices to prove that \[\sup_{a\geq 2,Y\in[0,\frac{1}{2}]}|f(a,Y)|\leq\frac{1}{4}.\] After long computations, whose details we omit here, one has \[f_{Y}(a,Y)>0\;\;\text{for}\;\;a\geq 2,Y\in[0,\frac{1}{2}].\] It follows that for \(a\geq 2\) \[\max_{Y\in[0,\frac{1}{2}]}|f(a,Y)|=\max\{|f(a,Y=0)|,|f(a,Y=\frac{1}{2})|\}.
\tag{2.20}\] By L'Hospital's rule, \[|f(a,Y=0)|=\frac{\sum_{n\in\mathbb{Z}}(2a\pi n^{4}-3n^{2})e^{-a\pi n^{2}}}{ \sum_{n\in\mathbb{Z}}(2a\pi n^{2}-1)e^{-a\pi n^{2}}}\leq\frac{4a\pi\sum_{n=1} ^{\infty}n^{4}e^{-a\pi n^{2}}}{1-4a\pi\sum_{n=1}^{\infty}n^{2}e^{-a\pi n^{2}} }\leq 0.05\;\;\text{for}\;\;a\geq 2 \tag{2.21}\] and \[|f(a,Y=\frac{1}{2})|= \frac{\sum_{n\in\mathbb{Z}}(2a\pi(n-\frac{1}{2})^{4}-3(n-\frac{1 }{2})^{2})e^{-a\pi n^{2}}}{\sum_{n\in\mathbb{Z}}(2a\pi(n-\frac{1}{2})^{2}-1)e^ {-a\pi n^{2}}}=\frac{1}{4}\cdot\frac{a\pi-6}{a\pi-2}\cdot\frac{1+\sigma_{a,1} }{1+\sigma_{a,2}} \tag{2.22}\] \[= \frac{1}{4}\Big{(}1-(\frac{4}{a\pi-2}-\frac{\sigma_{a,1}-\sigma_ {a,2}}{1+\sigma_{a,2}})-\frac{\sigma_{a,1}-\sigma_{a,2}}{1+\sigma_{a,2}}\cdot \frac{4}{a\pi-2}\Big{)}.\] Here \[\sigma_{a,1}:=\sum_{n=2}^{\infty}\frac{2a\pi(n-\frac{1}{2})^{4}-3(n-\frac{1 }{2})^{2}}{\frac{a\pi}{8}-\frac{3}{4}}e^{-a\pi(n^{2}-n)},\] \[\sigma_{a,2}:=\sum_{n=2}^{\infty}\frac{2a\pi(n-\frac{1}{2})^{2}-1}{\frac{a\pi }{2}-1}e^{-a\pi(n^{2}-n)}.\] Note that \(\sigma_{a,1}>\sigma_{a,2}\) and \[a\Big{(}\frac{4}{a\pi-2}-\frac{\sigma_{a,1}-\sigma_{a,2}}{1+\sigma_{a,2}}) \Big{)}\geq 1>0 \tag{2.23}\] by a direct computation. It follows from (2.22) and (2.23) that \[|f(a,Y=\frac{1}{2})|\leq\frac{1}{4}\;\;\text{for}\;\;a\geq 2. \tag{2.24}\] The bound \(\frac{1}{4}\) in (2.24) is sharp since it is approached asymptotically as \(a\to\infty\) by (2.22). By (2.20), (2.21) and (2.24), the proof is complete. The following Lemma 2.7 is elementary and probably known in calculus, however we have not found a reference for it and so we give the details here. **Lemma 2.7**.: _For \(n\in\mathbb{N}^{+}\), it holds that_ \[|\frac{1}{\sin(2\pi Y)}\big{(}\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}\big{)}^{ \prime}|\leq C(n),\;\;\text{for}\;\;Y\in\mathbb{R},\] _where \(C(n)=\frac{2\pi}{3}(n-1)n(n+1)\). The upper bound \(C(n)\) is sharp and is attained at \(Y=k\pi,k\in N\)._ Proof.: Since \[\frac{\sin(2(n+1)\pi Y)}{\sin(2\pi Y)} =\frac{\sin(2n\pi Y)\cos(2\pi Y)+\cos(2n\pi Y)\sin(2\pi Y)}{\sin(2 \pi Y)}\] \[=\cos(2\pi Y)\cdot\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}+\cos(2n\pi Y).\] It follows that \[\frac{1}{\sin(2\pi Y)}\big{(}\frac{\sin(2(n+1)\pi Y)}{\sin(2\pi Y)}\big{)}^{ \prime}=\cos(2\pi Y)\cdot\frac{1}{\sin(2\pi Y)}\big{(}\frac{\sin(2n\pi Y)}{ \sin(2\pi Y)}\big{)}^{\prime}-2(n+1)\pi\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}. \tag{2.25}\] Let \[a_{n}:=|\frac{1}{\sin(2\pi Y)}\big{(}\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}\big{)} ^{\prime}|\ \ \text{for short}.\] Then by (2.25), \[a_{n+1}-a_{n}\leq 2\pi(n+n^{2}). \tag{2.26}\] Here the inequality \(|\frac{\sin(2n\pi Y)}{\sin(2\pi Y)}|\leq n\) is used. By (2.26), \[a_{n}\leq\sum_{k=1}^{n-1}(a_{k+1}-a_{k})+a_{1}\leq\sum_{k=1}^{n-1}2\pi(k+k^{2 })=\frac{2\pi}{3}(n-1)n(n+1),\] which yields the desired result. The following Lemmas 2.8 and 2.9 are proved in [33]. **Lemma 2.8**.: _[_33_]__. Assume \(X>\frac{1}{5}\). If \(\sin(2\pi Y)>0\), then_ \[-\overline{\vartheta}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta( X;Y)\leq-\underline{\vartheta}(X)\sin(2\pi Y).\] _If \(\sin(2\pi Y)<0\), then_ \[-\underline{\vartheta}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta (X;Y)\leq-\overline{\vartheta}(X)\sin(2\pi Y).\] _Here_ \[\underline{\vartheta}(X):=4\pi e^{-\pi X}(1-\mu(X)),\ \ \overline{\vartheta}(X):=4\pi e ^{-\pi X}(1+\mu(X)),\] _and_ \[\mu(X):=\sum_{n=2}^{\infty}n^{2}e^{-\pi(n^{2}-1)X}. \tag{2.27}\] **Lemma 2.9**.: _[_33_]__. Assume \(X<\min\{\frac{\pi}{\pi+2},\frac{\pi}{4\log\pi}\}=\frac{\pi}{\pi+2}\). 
If \(\sin(2\pi Y)>0\), then_ \[-\overline{\vartheta}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta(X;Y)\leq-\underline{\vartheta}(X)\sin(2\pi Y).\] _If \(\sin(2\pi Y)<0\), then_ \[-\underline{\vartheta}(X)\sin(2\pi Y)\leq\frac{\partial}{\partial Y}\vartheta(X;Y)\leq-\overline{\vartheta}(X)\sin(2\pi Y).\] _Here_ \[\underline{\vartheta}(X):=\pi e^{-\frac{\pi}{4X}}X^{-\frac{3}{2}};\ \ \overline{\vartheta}(X):=X^{-\frac{3}{2}}.\]

## 3. The horizontal monotonicity

Let \(\mathcal{D}_{\mathcal{G}}:=\{z\in\mathbb{H}:|z|>1,\;0<x<\frac{1}{2}\}\) be the fundamental domain associated to the group \(\mathcal{G}\). Define the vertical line \[\Gamma:=\{z\in\mathbb{H}:\operatorname{Re}(z)=\frac{1}{2},\;\operatorname{Im}(z)\geq\frac{\sqrt{3}}{2}\}. \tag{3.1}\] See Figure 1. Define \[\mathcal{W}_{b}(\alpha;z):=\sum_{\mathbb{P}\in L,|L|=1}(|\mathbb{P}|^{2}-\frac{b}{\alpha})e^{-\pi\alpha|\mathbb{P}|^{2}}. \tag{3.2}\] We use the parametrization \(L=\sqrt{\frac{1}{\operatorname{Im}(z)}}\Big{(}\mathbb{Z}\oplus z\mathbb{Z}\Big{)}\) where \(z\in\mathbb{H}\); then an explicit expression of \(\mathcal{W}_{b}(\alpha;z)\) based on a double infinite sum is \[\mathcal{W}_{b}(\alpha;z)=\sum_{(m,n)\in\mathbb{Z}^{2}}\big{(}\frac{1}{y}|mz+n|^{2}-\frac{b}{\alpha}\big{)}e^{-\alpha\frac{\pi}{y}|mz+n|^{2}}. \tag{3.3}\]

The statement of Theorem 1.1 is equivalent to

**Theorem 3.1**.: _Assume that \(\alpha\geq 1\). Then_ \[\min_{z\in\mathbb{H}}\mathcal{W}_{b}(\alpha;z)=\begin{cases}\text{is achieved at }\,e^{i\frac{\pi}{3}},&\text{if }\,b\leq\frac{1}{2\pi},\\ \text{does not exist},&\text{if }\,b>\frac{1}{2\pi}.\end{cases} \tag{3.4}\]

We first have a comparison principle:

**Lemma 3.1**.: _Assume that \(\alpha\geq 1\). If_ \[\min_{z\in\mathbb{H}}\mathcal{W}_{b_{0}}(\alpha;z)\;\;\text{is achieved at }\,e^{i\frac{\pi}{3}}, \tag{3.5}\] _then for \(b\leq b_{0}\),_ \[\min_{z\in\mathbb{H}}\mathcal{W}_{b}(\alpha;z)\;\;\text{is still achieved at }\,e^{i\frac{\pi}{3}}. \tag{3.6}\]

Proof.: For \(b\leq b_{0}\), we use the deformation \[\mathcal{W}_{b}(\alpha;z)=\mathcal{W}_{b_{0}}(\alpha;z)+\frac{b_{0}-b}{\alpha}\cdot\theta(\alpha;z). \tag{3.7}\] The result follows from (3.7) and the fact that \[\min_{z\in\mathbb{H}}\theta(\alpha;z)\;\;\text{is achieved at }\,e^{i\frac{\pi}{3}} \tag{3.8}\] by [37].

To prove the first part of Theorem 3.1, by Lemma 3.1, we only need to prove the borderline case \(b=\frac{1}{2\pi}\):

**Theorem 3.2**.: _Assume that \(\alpha\geq 1\). Then_ \[\min_{z\in\mathbb{H}}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\;\;\text{is achieved at }\,e^{i\frac{\pi}{3}}. \tag{3.9}\]

The proof of Theorem 3.2 consists of two parts. In this section, we aim to prove the first part, namely

**Theorem 3.3**.: _Assume that \(\alpha\geq 1\). Then for \(b=\frac{1}{2\pi}\),_ \[\min_{z\in\mathbb{H}}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\min_{z\in\overline{\mathcal{D}_{\mathcal{G}}}}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\min_{z\in\Gamma}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z), \tag{3.10}\] _where \(\Gamma\) is the vertical line defined in (3.1)._

The proof of Theorem 3.3 is based on the following horizontal monotonicity result (note that by Lemma 3.5 below, \(\partial_{x}\mathcal{W}_{\frac{1}{2\pi}}(1;z)\equiv 0\), so the strict inequality requires \(\alpha>1\)):

**Theorem 3.4**.: _Assume that \(\alpha>1\). Then for \(b=\frac{1}{2\pi}\),_ \[\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)<0,\ \ \text{for}\ \ z\in\mathcal{D}_{\mathcal{G}}. \tag{3.11}\]

In the rest of this section, we prove Theorem 3.4.

### The estimates

We first provide an exponential expansion of \(\mathcal{W}_{b}(\alpha;z)\), which is useful in our estimates.
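Before turning to the expansion, Theorem 3.1 can be sanity-checked numerically straight from the double sum (3.3). The sketch below is our own illustration (not part of the paper; the truncation radius \(M\) is an arbitrary choice), comparing the hexagonal point \(e^{i\pi/3}\) against the square point \(z=i\) at \(b=\frac{1}{2\pi}\).

```python
import math

M = 30  # truncation radius of the double sum (3.3)

def W(alpha, x, y, b):
    s = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            q = ((m * x + n) ** 2 + (m * y) ** 2) / y  # |mz+n|^2 / y
            s += (q - b / alpha) * math.exp(-alpha * math.pi * q)
    return s

b = 1 / (2 * math.pi)
hexagonal = (0.5, math.sqrt(3) / 2)  # z = e^{i pi/3}
square = (0.0, 1.0)                  # z = i
for alpha in (1.0, 1.5, 2.0):
    print(alpha, W(alpha, *hexagonal, b), W(alpha, *square, b))
    # for alpha > 1 the hexagonal value should be the smaller one;
    # at alpha = 1 both essentially vanish, consistent with Lemma 3.5 below
```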
**Lemma 3.2**.: _We have the following exponential expansion of \(\mathcal{W}_{b}(\alpha;z)\):_ \[\begin{split}\mathcal{W}_{b}(\alpha;z)=&\frac{1}{\pi}\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\cdot\Big{(}\frac{1}{2}(1-2\pi b)\cdot\frac{\alpha}{y}\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)\\ &+\pi\alpha^{2}\cdot\sum_{n\in\mathbb{Z}}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)+\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};nx)\Big{)}.\end{split} \tag{3.12}\] _In particular, we have the exponential expansion of \(\mathcal{W}_{b}(\alpha;z)\) when \(b\) equals the borderline value \(\frac{1}{2\pi}\):_ \[\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\frac{1}{\pi}\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\cdot\Big{(}\pi\alpha^{2}\cdot\sum_{n\in\mathbb{Z}}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)+\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};nx)\Big{)}. \tag{3.13}\]

The proof of Lemma 3.2 is based on Lemmas 3.3 and 3.4. The following lemma is based on an observation of the structure of \(\mathcal{W}_{b}(\alpha;z)\) in (3.2).

**Lemma 3.3**.: _We have the structure of \(\mathcal{W}_{b}(\alpha;z)\):_ \[\mathcal{W}_{b}(\alpha;z)=-\frac{1}{\pi}\frac{\partial}{\partial\alpha}\theta(\alpha;z)-\frac{b}{\alpha}\theta(\alpha;z). \tag{3.14}\]

Proof.: Note that \[\sum_{\mathbb{P}\in L,|L|=1}|\mathbb{P}|^{2}e^{-\pi\alpha|\mathbb{P}|^{2}}=-\frac{1}{\pi}\frac{\partial}{\partial\alpha}\sum_{\mathbb{P}\in L,|L|=1}e^{-\pi\alpha|\mathbb{P}|^{2}}. \tag{3.15}\] The lemma follows from (3.15) and (3.2).

The following lemma is used in [33, 37].

**Lemma 3.4**.: _[_33, 37_]_ _We have the expansion of \(\theta(\alpha;z)\):_ \[\theta(\alpha;z)=\sqrt{\frac{y}{\alpha}}\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)=2\sqrt{\frac{y}{\alpha}}\sum_{n=1}^{\infty}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)+\sqrt{\frac{y}{\alpha}}\vartheta(\frac{y}{\alpha};0).\]

We shall also state a consequence of Lemma 3.3, which partially explains why we split into several cases in our proof of Theorem 3.4.

**Lemma 3.5**.: _For \(b=\frac{1}{2\pi}\) and any \(z\in\mathbb{H}\),_ \[\mathcal{W}_{\frac{1}{2\pi}}(1;z)=0. \tag{3.16}\]

Proof.: One has \[\theta(\frac{1}{\alpha};z)=\alpha\cdot\theta(\alpha;z) \tag{3.17}\] by the Fourier transform, see e.g. [33, 37]. Taking the derivative with respect to \(\alpha\) on both sides of (3.17), one gets \[-\frac{1}{\alpha^{2}}\frac{\partial}{\partial\alpha}\theta(\frac{1}{\alpha};z)-\alpha\frac{\partial}{\partial\alpha}\theta(\alpha;z)=\theta(\alpha;z). \tag{3.18}\] Evaluating (3.18) at \(\alpha=1\), one has \[\Big{(}-\frac{\partial}{\partial\alpha}\theta(1;z)-\frac{1}{2}\theta(1;z)\Big{)}=0. \tag{3.19}\] (3.19) and Lemma 3.3 give the result.

By Lemma 3.5, one has \[\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(1;z)=0\;\;\text{for}\;\;z\in\mathcal{D}_{\mathcal{G}}.
\tag{3.20}\] Therefore, given (3.20), to prove Theorem 3.4 we split the proof into two cases, namely **case a: \(\alpha\in(1,1.1]\)** and **case b: \(\alpha\in[1.1,\infty)\)** (matching Propositions 3.2 and 3.1 below). We first give the proof of **case b: \(\alpha\in[1.1,\infty)\)**. A direct consequence of taking the derivative of (3.13) in Lemma 3.2 is

**Lemma 3.6**.: _We have the expansion of \(-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\):_ \[-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\frac{1}{\pi}\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\cdot\Big{(}\pi\alpha^{2}\cdot\sum_{n\in\mathbb{Z}}n^{3}e^{-\alpha\pi yn^{2}}\cdot\big{(}-\vartheta_{Y}(\frac{y}{\alpha};nx)\big{)}+\sum_{n\in\mathbb{Z}}ne^{-\alpha\pi yn^{2}}\big{(}-\vartheta_{XY}(\frac{y}{\alpha};nx)\big{)}\Big{)}.\]

To prove **case b**, we use the deformation of \(\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\) given by Lemma 3.6.

**Lemma 3.7**.: _We have the quotient expansion of \(-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\):_ \[\begin{split}-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\frac{2}{\pi}\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\big{(}-\vartheta_{Y}(\frac{y}{\alpha};x)\big{)}\cdot e^{-2\pi y}\cdot\Big{(}&\pi\alpha^{2}\cdot\big{(}1+\sum_{n=2}^{\infty}n^{3}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{\vartheta_{Y}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\big{)}\\ &+\frac{\vartheta_{XY}(\frac{y}{\alpha};x)}{\vartheta_{Y}(\frac{y}{\alpha};x)}+\sum_{n=2}^{\infty}ne^{-\alpha\pi y(n^{2}-1)}\frac{\vartheta_{XY}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\Big{)}.\end{split}\] _Here for \(\frac{y}{\alpha}>0\) and \(x\in\mathbb{R}\),_ \[-\vartheta_{Y}(\frac{y}{\alpha};x)>0. \tag{3.21}\]

**Remark 3.1**.: (3.21) _follows from Lemmas 2.8 and 2.9._

Based on the deformation in Lemma 3.7 and the quotient estimates of derivatives of the theta function in Section 2, we are ready to prove case b of Theorem 3.4. Namely, we are going to prove

**Proposition 3.1**.: _Assume that \(\alpha\geq 1.1\). Then for \(b=\frac{1}{2\pi}\),_ \[\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)<0,\;\;\text{for}\;\;z\in\mathcal{D}_{\mathcal{G}}. \tag{3.22}\]

Proof.: For convenience in stating the estimates, we denote \[\mathcal{C}(\alpha,x,y):=\frac{2}{\pi}\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\big{(}-\vartheta_{Y}(\frac{y}{\alpha};x)\big{)}\cdot e^{-2\pi y}. \tag{3.23}\] Here \[\mathcal{C}(\alpha,x,y)>0 \tag{3.24}\] by (3.21). Note that \(z\in\mathcal{D}_{\mathcal{G}}\) implies that \(y\geq\frac{\sqrt{3}}{2}\). We further split the proof into two subcases: **case \(b_{1}\): \(\frac{y}{\alpha}\geq\frac{1}{2}\)** and **case \(b_{2}\): \(\frac{y}{\alpha}\in(0,\frac{1}{2})\)**.

**case \(b_{1}\):** \(\frac{y}{\alpha}\geq\frac{1}{2}\).
By Lemmas 3.7, 2.3 and 2.4, we have \[\begin{split}-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)&=\mathcal{C}(\alpha,x,y)\cdot\Big{(}\pi\alpha^{2}\cdot\big{(}1+\sum_{n=2}^{\infty}n^{3}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{\vartheta_{Y}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\big{)}\\ &\quad+\frac{\vartheta_{XY}(\frac{y}{\alpha};x)}{\vartheta_{Y}(\frac{y}{\alpha};x)}+\sum_{n=2}^{\infty}ne^{-\alpha\pi y(n^{2}-1)}\frac{\vartheta_{XY}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\Big{)}\\ &\geq\pi\cdot\mathcal{C}(\alpha,x,y)\cdot\Big{(}\alpha^{2}\cdot\big{(}1-\sum_{n=2}^{\infty}n^{4}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{1+\mu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}\big{)}\\ &\quad-\frac{1+\nu(\frac{y}{\alpha})}{1+\mu(\frac{y}{\alpha})}-\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{1+\nu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}\Big{)}\end{split} \tag{3.25}\] Since \(\frac{y}{\alpha}\geq\frac{1}{2}\) and \(\mu,\nu\) are decreasing functions by (2.7), \[\frac{1+\nu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}\leq\frac{1+\nu(\frac{1}{2})}{1-\mu(\frac{1}{2})}=1.186694067\cdots,\;\frac{1+\mu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}\leq\frac{1+\mu(\frac{1}{2})}{1-\mu(\frac{1}{2})}=1.074612508\cdots. \tag{3.26}\] For \(\frac{1+\nu(x)}{1+\mu(x)}\), it still holds that \[\frac{1+\nu(x)}{1+\mu(x)}\;\;\text{is decreasing for}\;\;x\geq\frac{1}{2}. \tag{3.27}\] The fact (3.27) can be checked directly by taking the derivative in view of (2.7); the details are omitted here. Therefore, for \(\frac{y}{\alpha}\geq\frac{1}{2}\), it holds that \[\frac{1+\nu(x)}{1+\mu(x)}\leq\frac{1+\nu(\frac{1}{2})}{1+\mu(\frac{1}{2})}=1.104299511\cdots. \tag{3.28}\] Denote the error terms in (3.25) by \[\sigma_{1}:=\sum_{n=2}^{\infty}n^{4}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{1+\mu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})},\;\sigma_{2}:=\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{1+\nu(\frac{y}{\alpha})}{1-\mu(\frac{y}{\alpha})}.\] Then by (3.26), \[\begin{split}\sigma_{1}&\leq\frac{1+\mu(\frac{1}{2})}{1-\mu(\frac{1}{2})}\sum_{n=2}^{\infty}n^{4}e^{-\alpha\pi y(n^{2}-1)}\leq\frac{1+\mu(\frac{1}{2})}{1-\mu(\frac{1}{2})}\sum_{n=2}^{\infty}n^{4}e^{-1.1\cdot\pi\cdot\frac{\sqrt{3}}{2}(n^{2}-1)}\leq 2.169\cdot 10^{-3},\\ \sigma_{2}&\leq\frac{1+\nu(\frac{1}{2})}{1-\mu(\frac{1}{2})}\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi y(n^{2}-1)}\leq\frac{1+\nu(\frac{1}{2})}{1-\mu(\frac{1}{2})}\sum_{n=2}^{\infty}n^{2}e^{-1.1\cdot\pi\cdot\frac{\sqrt{3}}{2}(n^{2}-1)}\leq 6.75\cdot 10^{-4}.\end{split} \tag{3.29}\] Therefore, by (3.25), (3.28) and (3.29), \[\begin{split}-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)&\geq\pi\cdot\mathcal{C}(\alpha,x,y)\cdot\Big{(}\alpha^{2}\cdot\big{(}1-\sigma_{1}\big{)}-\frac{1+\nu(\frac{y}{\alpha})}{1+\mu(\frac{y}{\alpha})}-\sigma_{2}\Big{)}\\ &\geq\pi\cdot\mathcal{C}(\alpha,x,y)\cdot\Big{(}1.1^{2}\cdot(1-2.169\cdot 10^{-3})-1.105-6.75\cdot 10^{-4}\Big{)}\\ &\geq\pi\cdot\mathcal{C}(\alpha,x,y)\cdot 0.1017005100>0.\end{split} \tag{3.30}\] This completes the proof of **case \(b_{1}\)**. It remains to prove **case \(b_{2}\)**.
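The decimal constants used in case \(b_{1}\), namely the ratios in (3.26) and (3.28), can be reproduced directly from the definitions (2.7). A small check of our own (truncation \(N\) is an arbitrary choice):

```python
import math

def mu(X, N=50):
    return sum(n * n * math.exp(-math.pi * (n * n - 1) * X) for n in range(2, N))

def nu(X, N=50):
    return sum(n ** 4 * math.exp(-math.pi * (n * n - 1) * X) for n in range(2, N))

X = 0.5
print((1 + nu(X)) / (1 - mu(X)))  # ~ 1.186694...  as in (3.26)
print((1 + mu(X)) / (1 - mu(X)))  # ~ 1.074612...  as in (3.26)
print((1 + nu(X)) / (1 + mu(X)))  # ~ 1.104299...  as in (3.28)
```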
**case \(b_{2}\): \(\frac{y}{\alpha}\in(0,\frac{1}{2})\).**

By Lemmas 3.7, 2.3 and 2.5, we have \[\begin{split}-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)&=\mathcal{C}(\alpha,x,y)\cdot\Big{(}\pi\alpha^{2}\cdot\big{(}1+\sum_{n=2}^{\infty}n^{3}e^{-\alpha\pi y(n^{2}-1)}\cdot\frac{\vartheta_{Y}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\big{)}\\ &\quad+\frac{\vartheta_{XY}(\frac{y}{\alpha};x)}{\vartheta_{Y}(\frac{y}{\alpha};x)}+\sum_{n=2}^{\infty}ne^{-\alpha\pi y(n^{2}-1)}\frac{\vartheta_{XY}(\frac{y}{\alpha};nx)}{\vartheta_{Y}(\frac{y}{\alpha};x)}\Big{)}\\ &\geq\mathcal{C}(\alpha,x,y)\cdot\Big{(}\pi\alpha^{2}\cdot(1-\frac{1}{\pi}\sum_{n=2}^{\infty}n^{4}e^{-\alpha\pi\big{(}(n^{2}-1)y-\frac{1}{4y}\big{)}})\\ &\quad-\frac{3}{2}\frac{\alpha}{y}(1+\frac{\pi}{6}\frac{\alpha}{y})-\frac{3}{2\pi}\frac{\alpha}{y}(1+\frac{\pi}{6}\frac{\alpha}{y})\cdot\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi\big{(}(n^{2}-1)y-\frac{1}{4y}\big{)}}\Big{)}.\end{split} \tag{3.31}\] Since \(\frac{y}{\alpha}\in(0,\frac{1}{2})\), we have \(\frac{\alpha}{y}>2\) and \(\alpha>2y\geq\sqrt{3}\). Denote the error terms in (3.31) by \[\sigma_{3}:=\frac{1}{\pi}\sum_{n=2}^{\infty}n^{4}e^{-\alpha\pi\big{(}(n^{2}-1)y-\frac{1}{4y}\big{)}},\ \ \sigma_{4}:=\frac{3}{2\pi}\frac{\alpha}{y}(1+\frac{\pi}{6}\frac{\alpha}{y})\cdot\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi\big{(}(n^{2}-1)y-\frac{1}{4y}\big{)}}. \tag{3.32}\] Then \[\begin{split}\sigma_{3}&\leq\frac{1}{\pi}\sum_{n=2}^{\infty}n^{4}e^{-\sqrt{3}\pi\big{(}(n^{2}-1)\frac{\sqrt{3}}{2}-\frac{1}{2\sqrt{3}}\big{)}}\leq 1.777\cdot 10^{-6},\\ \sigma_{4}&\leq\frac{3}{\pi}(1+\frac{\pi}{3})\cdot\sum_{n=2}^{\infty}n^{2}e^{-\sqrt{3}\pi\big{(}(n^{2}-1)\frac{\sqrt{3}}{2}-\frac{1}{2\sqrt{3}}\big{)}}\leq 2.727\cdot 10^{-5}.\end{split} \tag{3.33}\] By (3.31), (3.32) and (3.33), we have \[\begin{split}-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)&\geq\mathcal{C}(\alpha,x,y)\cdot\Big{(}\pi\alpha^{2}\cdot(1-\sigma_{3})-(\pi+3)-\sigma_{4}\Big{)}\\ &\geq\mathcal{C}(\alpha,x,y)\cdot\Big{(}3\pi\cdot(1-\sigma_{3})-(\pi+3)-\sigma_{4}\Big{)}\\ &>0.\end{split} \tag{3.34}\] This proves **case \(b_{2}\): \(\frac{y}{\alpha}\in(0,\frac{1}{2})\)**. The proof is complete.

Next we are going to prove **case a** of Theorem 3.4. Namely,

**Proposition 3.2**.: _Assume that \(\alpha\in(1,1.1]\). Then for \(b=\frac{1}{2\pi}\),_ \[\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)<0,\ \text{ for }\ z\in\mathcal{D}_{\mathcal{G}}. \tag{3.35}\]

We start with a lemma which is a direct consequence of Lemma 3.6.

**Lemma 3.8**.: \[-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=8\pi\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\cdot\Big{(}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}n^{3}m\big{(}\alpha^{2}e^{-\pi y(\alpha n^{2}+\frac{1}{\alpha}m^{2})}-e^{-\pi y(\alpha m^{2}+\frac{1}{\alpha}n^{2})}\big{)}\cdot\sin(2mn\pi x)\Big{)}.\]

In view of Lemma 3.8, we denote for convenience \[\mathcal{A}_{n,m}(\alpha;y):=n^{3}m\cdot\big{(}\alpha^{2}e^{-\pi y(\alpha n^{2}+\frac{1}{\alpha}m^{2})}-e^{-\pi y(\alpha m^{2}+\frac{1}{\alpha}n^{2})}\big{)}. \tag{3.36}\] Then by Lemma 3.8, one rewrites \(-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\) as \[-\frac{\partial}{\partial x}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=8\pi\alpha^{-\frac{5}{2}}y^{\frac{3}{2}}\cdot\Big{(}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\Big{)}.\]
Therefore, Proposition 3.2 is equivalent to

**Lemma 3.9**.: _Assume that \(\alpha\in(1,1.1]\). Then_ \[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)>0 \tag{3.37}\] _for \(z=x+iy\in\mathcal{D}_{\mathcal{G}}\). Here \(\mathcal{A}_{n,m}(\alpha;y)\) is defined in (3.36). In fact, we show that_ \[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\geq\frac{1}{2}(\alpha^{2}-1)e^{-\pi y(\alpha+\frac{1}{\alpha})}\sin(2\pi x)>0\;\;\text{for}\;\;z=x+iy\in\mathcal{D}_{\mathcal{G}}.\]

To prove Lemma 3.9, we split the double infinite sum (3.37) into three parts as follows: \[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}=\sum_{m=n}+\sum_{m=1}^{\infty}\sum_{n=m+1}^{\infty}+\sum_{n=1}^{\infty}\sum_{m=n+1}^{\infty}. \tag{3.38}\] To estimate each part in (3.38), we establish the following two lemmas.

**Lemma 3.10**.: \[|\sum_{n=m+1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\;|\leq B\cdot\mathcal{A}_{m,m}(\alpha;y)\cdot|\;\sin(2m^{2}\pi x)\;|, \tag{3.39}\] _where the constant \(B\) is defined by_ \[B:=\frac{2^{6}\alpha_{0}\pi ye^{-3\pi y\alpha_{0}}}{1-2^{6}e^{-5\pi y\alpha_{0}}}, \tag{3.40}\] _where \(\alpha_{0}\) is a constant belonging to \((\frac{1}{\alpha},\alpha)\)._

Proof.: \[\begin{split}|\sum_{n=m+1}^{\infty}\frac{\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)}{\mathcal{A}_{m,m}(\alpha;y)\cdot\sin(2m^{2}\pi x)}|&\leq\sum_{n=m+1}^{\infty}(\frac{n}{m})^{4}\;|\;\alpha\frac{\alpha e^{-\pi y\alpha(n^{2}-m^{2})}-\frac{1}{\alpha}e^{-\pi y\frac{1}{\alpha}(n^{2}-m^{2})}}{\alpha^{2}-1}\;|\\ &=\sum_{n=m+1}^{\infty}(\frac{n}{m})^{4}\big{(}\alpha_{0}\pi y(n^{2}-m^{2})-1\big{)}e^{-\pi y\alpha_{0}(n^{2}-m^{2})},\;\;\alpha_{0}\in(\frac{1}{\alpha},\alpha)\\ &\leq\sum_{n=m+1}^{\infty}(\frac{n}{m})^{4}\alpha_{0}\pi yn^{2}e^{-\pi y\alpha_{0}(n^{2}-m^{2})}.\end{split} \tag{3.41}\] Here we used the mean value theorem to estimate \[\frac{\alpha e^{-\pi y\alpha(n^{2}-m^{2})}-\frac{1}{\alpha}e^{-\pi y\frac{1}{\alpha}(n^{2}-m^{2})}}{\alpha^{2}-1}. \tag{3.42}\] Continuing from (3.41), we substitute \(k=n-m\) to get \[\sum_{n=m+1}^{\infty}(\frac{n}{m})^{4}\alpha_{0}\pi yn^{2}e^{-\pi y\alpha_{0}(n^{2}-m^{2})}=\sum_{k=1}^{\infty}(\frac{m+k}{m})^{4}(m+k)^{2}\alpha_{0}\pi ye^{-\alpha_{0}\pi y(2m+k)k}. \tag{3.43}\] To simplify the notation, we denote \[b_{k}:=(\frac{m+k}{m})^{4}(m+k)^{2}\alpha_{0}\pi ye^{-\alpha_{0}\pi y(2m+k)k},\ m\geq 1. \tag{3.44}\] Then, by (3.41), (3.43) and (3.44), one has \[|\sum_{n=m+1}^{\infty}\frac{\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)}{\mathcal{A}_{m,m}(\alpha;y)\cdot\sin(2m^{2}\pi x)}\;|\leq\sum_{k=1}^{\infty}b_{k}. \tag{3.45}\] To provide an upper bound of \(\sum_{k=1}^{\infty}b_{k}\), we estimate \[\begin{split}\frac{b_{k+1}}{b_{k}}&=(1+\frac{1}{m+k})^{6}e^{-\pi y\alpha_{0}(1+2k+2m)}\\ &\leq 2^{6}e^{-5\pi y\alpha_{0}}:=q.\end{split} \tag{3.46}\] By (3.46), \(\sum_{k=1}^{\infty}b_{k}\) is controlled by a geometric sequence and then \[\sum_{k=1}^{\infty}b_{k}\leq\frac{b_{1}}{1-q}\leq\frac{2^{6}\alpha_{0}\pi ye^{-3\pi y\alpha_{0}}}{1-2^{6}e^{-5\pi y\alpha_{0}}}. \tag{3.47}\] The desired result then follows by (3.45) and (3.47).

Dual to Lemma 3.10, we have

**Lemma 3.11**.: \[|\sum_{m=n+1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\mid\leq B\cdot\mathcal{A}_{n,n}(\alpha;y)\cdot\mid\sin(2n^{2}\pi x)\mid,\] _where the constant \(B\) is defined in (3.40)._

Proof.: The proof is similar to that of Lemma 3.10.
\[\begin{split}|\sum_{m=n+1}^{\infty}\frac{\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)}{\mathcal{A}_{n,n}(\alpha;y)\cdot\sin(2n^{2}\pi x)}\mid&\leq\sum_{m=n+1}^{\infty}(\frac{m}{n})^{4}\mid\alpha\frac{\alpha e^{-\pi y\frac{1}{\alpha}(m^{2}-n^{2})}-\frac{1}{\alpha}e^{-\pi y\alpha(m^{2}-n^{2})}}{\alpha^{2}-1}\mid\\ &=\sum_{m=n+1}^{\infty}(\frac{m}{n})^{4}\big{(}\alpha_{0}\pi y(m^{2}-n^{2})+1\big{)}e^{-\pi y\alpha_{0}(m^{2}-n^{2})},\ \ \alpha_{0}\in(\frac{1}{\alpha},\alpha)\\ &\leq\sum_{m=n+1}^{\infty}(\frac{m}{n})^{4}\alpha_{0}\pi ym^{2}e^{-\pi y\alpha_{0}(m^{2}-n^{2})}.\end{split} \tag{3.48}\] Here we used the mean value theorem for \[\frac{\alpha e^{-\pi y\frac{1}{\alpha}(m^{2}-n^{2})}-\frac{1}{\alpha}e^{-\pi y\alpha(m^{2}-n^{2})}}{\alpha^{2}-1}, \tag{3.49}\] which is a little different from (3.42). Given (3.48), the rest of the proof is the same as that of Lemma 3.10, exchanging \(m\) and \(n\).

We are ready to prove Lemma 3.9.

Proof.: **Proof of Lemma 3.9.** As in the splitting (3.38), \[\begin{split}&\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)=\sum_{k=1}^{\infty}\mathcal{A}_{k,k}(\alpha;y)\cdot\sin(2k^{2}\pi x)\\ &+\sum_{m=1}^{\infty}\sum_{n=m+1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)+\sum_{n=1}^{\infty}\sum_{m=n+1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\\ &\geq\sum_{k=1}^{\infty}\mathcal{A}_{k,k}(\alpha;y)\cdot\sin(2k^{2}\pi x)-2B\sum_{m=1}^{\infty}\mathcal{A}_{m,m}(\alpha;y)\cdot\mid\sin(2m^{2}\pi x)\mid,\end{split} \tag{3.50}\] where the constant \(B\) is defined in (3.40). Note that \(z\in\mathcal{D}_{\mathcal{G}}\) implies that \(x\in(0,\frac{1}{2})\) and \(y>\frac{\sqrt{3}}{2}\). Then \(\sin(2\pi x)>0\). Therefore, by (3.50), \[\begin{split}&\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\geq(1-2B)\mathcal{A}_{1,1}(\alpha;y)\sin(2\pi x)-(1+2B)\sum_{n=2}^{\infty}\mathcal{A}_{n,n}(\alpha;y)\cdot|\,\sin(2n^{2}\pi x)\mid\\ &\qquad\geq(1-2B)\mathcal{A}_{1,1}(\alpha;y)\sin(2\pi x)\cdot\Big{(}1-\frac{1+2B}{1-2B}\sum_{n=2}^{\infty}\frac{\mathcal{A}_{n,n}(\alpha;y)}{\mathcal{A}_{1,1}(\alpha;y)}\cdot|\,\,\frac{\sin(2n^{2}\pi x)}{\sin(2\pi x)}\mid\Big{)}\\ &\qquad\geq(1-2B)\mathcal{A}_{1,1}(\alpha;y)\sin(2\pi x)\cdot\Big{(}1-\frac{1+2B}{1-2B}\sum_{n=2}^{\infty}n^{2}\frac{\mathcal{A}_{n,n}(\alpha;y)}{\mathcal{A}_{1,1}(\alpha;y)}\Big{)}\end{split} \tag{3.51}\] Note that by (3.36), \[\mathcal{A}_{n,n}=(\alpha^{2}-1)n^{4}e^{-\pi yn^{2}(\alpha+\frac{1}{\alpha})}. \tag{3.52}\] Then \[\sum_{n=2}^{\infty}n^{2}\frac{\mathcal{A}_{n,n}}{\mathcal{A}_{1,1}}=\sum_{n=2}^{\infty}n^{6}e^{-\pi y(n^{2}-1)(\alpha+\frac{1}{\alpha})}\leq\sum_{n=2}^{\infty}n^{6}e^{-2\pi y(n^{2}-1)}\leq\sum_{n=2}^{\infty}n^{6}e^{-\sqrt{3}\pi(n^{2}-1)}\leq 1.27\cdot 10^{-3}. \tag{3.53}\] Therefore, by (3.51), (3.52) and (3.53), we have \[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\mathcal{A}_{n,m}(\alpha;y)\cdot\sin(2mn\pi x)\geq\frac{9}{10}(1-2B)(\alpha^{2}-1)e^{-\pi y(\alpha+\frac{1}{\alpha})}\sin(2\pi x)>0. \tag{3.54}\] This completes the proof of Lemma 3.9.

Therefore, Theorem 3.4 is proved by Propositions 3.1 and 3.2, and Proposition 3.2 is proved by Lemma 3.9.

## 4. Analysis on the vertical line \(\Gamma\)

In Theorem 3.3, we have established that for \(\alpha\geq 1\), \[\min_{z\in\mathbb{H}}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)=\min_{z\in\Gamma}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z), \tag{4.1}\] where the vertical line \(\Gamma\) is defined as \[\Gamma=\{z\in\mathbb{H}:\operatorname{Re}(z)=\frac{1}{2},\;\operatorname{Im}(z)\geq\frac{\sqrt{3}}{2}\},\] see (3.1). In this section, we aim to establish that

**Theorem 4.1**.: _Assume that \(\alpha>1\).
Then_ \[\min_{z\in\Gamma}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\,\text{ is achieved at }\,e^{i\frac{\pi}{3}}. \tag{4.2}\]

The proof of Theorem 4.1 is based on the following proposition.

**Proposition 4.1**.: _For \(\alpha\geq 1\) and \(y\geq\frac{\sqrt{3}}{2}\),_ \[\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0.\]

We first state a lemma on zeros of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\), which is deduced from Lemma 3.5 and Proposition 3.4 in [10].

**Lemma 4.1**.: _Zeros of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\):_

* \(\alpha=1\) _is a first order zero of_ \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) _with respect to_ \(\alpha\) _for any_ \(y>0\)_;_
* \(y=\frac{\sqrt{3}}{2}\) _is a first order zero of_ \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) _with respect to_ \(y\) _for any_ \(\alpha>0\)_._

_Quantitatively,_ \[\lim_{a\to 1,y\rightarrow\frac{\sqrt{3}}{2}}\frac{\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)}{(a-1)(y-\frac{\sqrt{3}}{2})}=\partial_{yya}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\mid_{a=1,y=\frac{\sqrt{3}}{2}}=1.127521373\cdots>0. \tag{4.3}\]

Proof.: The zero-point properties in the first and second items are deduced from Lemma 3.5 and Proposition 3.4 in [10], respectively. (4.3) is computed by L'Hopital's rule. That the zeros are of first order then follows from (4.3).

To prove Proposition 4.1, based on Lemma 4.1, we divide its proof into four cases. For convenience in stating the strategy, we denote \[\mathcal{R}_{a}:=\{(\alpha,y)\mid\alpha\in[1,1.2],\;y\in[\frac{\sqrt{3}}{2},1]\},\] \[\mathcal{R}_{b}:=\{(\alpha,y)\mid\alpha\in[1,1.2],\;y\geq 1\},\] \[\mathcal{R}_{c}:=\{(\alpha,y)\mid\alpha\geq 1.2,\;y\geq\frac{5}{6}\alpha\},\] \[\mathcal{R}_{d}:=\{(\alpha,y)\mid\alpha\geq 1.2,\;y\in[\frac{\sqrt{3}}{2},\frac{5}{6}\alpha]\}.\] Then \[\{(\alpha,y)\mid\alpha\geq 1,\;y\geq\frac{\sqrt{3}}{2}\}=\mathcal{R}_{a}\cup\mathcal{R}_{b}\cup\mathcal{R}_{c}\cup\mathcal{R}_{d}.\]

We shall prove that \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) is nonnegative on \(\mathcal{R}_{a}\), \(\mathcal{R}_{b}\), \(\mathcal{R}_{c}\) and \(\mathcal{R}_{d}\) respectively. In each region, we use different methods. In Regions \(\mathcal{R}_{b}\) and \(\mathcal{R}_{c}\), we estimate \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) directly by its double sum and exponential expansions, respectively. In Region \(\mathcal{R}_{d}\), we estimate \((\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\). In Region \(\mathcal{R}_{a}\), we estimate \(\partial_{yya}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\). We prove the cases of \(\mathcal{R}_{b}\), \(\mathcal{R}_{c}\), \(\mathcal{R}_{d}\) and \(\mathcal{R}_{a}\) in the next four subsections respectively.

### Region \(\mathcal{R}_{b}\): estimate of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\)

In this subsection, we shall prove that

**Lemma 4.2**.: _If \((\alpha,y)\in\mathcal{R}_{b}\), then \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0\)._

The proof of Lemma 4.2 is based on the following Lemmas 4.3 and 4.4.
**Lemma 4.3**.: _If \((\alpha,y)\in\mathcal{R}_{b}\), then_ \[\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 2\pi y^{\frac{1}{2}}e^{-\frac{\pi y}{\alpha}}\mathcal{L}_{b}(\alpha;y),\] _where_ \[\mathcal{L}_{b}(\alpha;y):=\frac{\pi y}{\alpha}-\frac{3}{2}-(\pi y\alpha-\frac{3}{2})\alpha^{2}e^{-\pi y(\alpha-\frac{1}{\alpha})}+(3\pi(1-B)-2(1+B)\frac{\alpha^{2}+1}{\alpha}y)(\alpha^{2}-1)e^{-\pi y\alpha}.\] _The constant \(B\) is very small and is given in Lemma 4.8, i.e.,_ \[B=\frac{2^{6}\alpha_{0}\pi ye^{-3\pi y\alpha_{0}}}{1-2^{6}e^{-5\pi y\alpha_{0}}},\] _where \(\alpha_{0}\in(\frac{1}{\alpha},\alpha)\)._

**Lemma 4.4**.: _If \((\alpha,y)\in\mathcal{R}_{b}\), then_ \[\mathcal{L}_{b}(\alpha;y)\geq 0.316(\alpha^{2}-1)\geq 0.\]

Proof.: We first claim that \(\partial_{y}\mathcal{L}_{b}(\alpha;y)>0\) for \((\alpha,y)\in\mathcal{R}_{b}\). In fact, \[\partial_{y}\mathcal{L}_{b}(\alpha;y)=\pi(\frac{1}{\alpha}-\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})})+\pi\alpha(\alpha^{2}-1)(\pi y\alpha-\frac{3}{2})e^{-\pi y(\alpha-\frac{1}{\alpha})}\geq\pi(\frac{1}{\alpha}-\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})}).\] Now \[\frac{1}{\alpha}-\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})}=\alpha e^{\frac{\pi y}{\alpha}}\big{(}\frac{1}{\alpha^{2}}e^{-\frac{\pi y}{\alpha}}-\alpha^{2}e^{-\pi y\alpha}\big{)}=e^{\frac{\pi y}{\alpha}}(\alpha^{2}-1)\alpha_{b}e^{-\pi y\alpha_{b}}(\pi y\alpha_{b}-2),\quad\alpha_{b}\in(\frac{1}{\alpha},\alpha),\] by the mean value theorem, which is positive since \(\pi y\alpha_{b}-2\geq\frac{\pi}{\alpha}-2>0\). Then it follows that \[\mathcal{L}_{b}(\alpha;y)\geq\mathcal{L}_{b}(\alpha;1)=(\alpha^{2}-1)\cdot\Big{(}\frac{\frac{\pi}{\alpha}-\frac{3}{2}-(\pi\alpha-\frac{3}{2})\alpha^{2}e^{-\pi(\alpha-\frac{1}{\alpha})}}{\alpha^{2}-1}+(3\pi(1-B)-2(1+B)\frac{\alpha^{2}+1}{\alpha})e^{-\pi\alpha}\Big{)}.\] The rest of the proof is based on the elementary inequality \[\frac{\frac{\pi}{x}-\frac{3}{2}-(\pi x-\frac{3}{2})x^{2}e^{-\pi(x-\frac{1}{x})}}{x^{2}-1}+(3\pi(1-B)-2(1+B)\frac{x^{2}+1}{x})e^{-\pi x}\geq 0.316\cdots,\ \ \text{for}\ \ x\in[1,1.2].\] Here \(\frac{\frac{\pi}{x}-\frac{3}{2}-(\pi x-\frac{3}{2})x^{2}e^{-\pi(x-\frac{1}{x})}}{x^{2}-1}\) has a removable singularity at \(x=1\) and \[\lim_{x\to 1}\frac{\frac{\pi}{x}-\frac{3}{2}-(\pi x-\frac{3}{2})x^{2}e^{-\pi(x-\frac{1}{x})}}{x^{2}-1}=\pi^{2}-3.5\pi+1.5=0.374030114\cdots.\]

It remains to prove Lemma 4.3. We use a deformation of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\).
**Lemma 4.5**.: _A double sum expansion of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\):_ \[\begin{split}\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)=&\frac{3}{2}y^{\frac{1}{2}}\Big{(}-2\pi\sum_{n=1}^{\infty}n^{2}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{2}e^{-\pi n^{2}y\alpha})\\ &+4\pi\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{mn}n^{2}\big{(}\alpha^{2}e^{-\pi y(n^{2}\alpha+\frac{m^{2}}{\alpha})}-e^{-\pi y(m^{2}\alpha+\frac{n^{2}}{\alpha})}\big{)}\Big{)}\\ &+y^{\frac{3}{2}}\Big{(}\frac{2\pi^{2}}{\alpha}\sum_{n=1}^{\infty}n^{4}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{4}e^{-\pi n^{2}y\alpha})\\ &+\frac{4\pi^{2}}{\alpha}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{mn}n^{4}\big{(}e^{-\pi y(m^{2}\alpha+\frac{n^{2}}{\alpha})}-\alpha^{4}e^{-\pi y(n^{2}\alpha+\frac{m^{2}}{\alpha})}\big{)}\Big{)}.\end{split}\]

Lemma 4.5 is based on the following lemma.

**Lemma 4.6**.: _The theta function expression of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\):_ \[\begin{split}\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)&=\frac{3}{2}y^{\frac{1}{2}}\Big{(}\pi\alpha^{2}\sum_{n\in\mathbb{Z}}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)+\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};nx)\Big{)}\\ &+y^{\frac{3}{2}}\Big{(}-\pi^{2}\alpha^{3}\sum_{n\in\mathbb{Z}}n^{4}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};nx)+\frac{1}{\alpha}\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{XX}(\frac{y}{\alpha};nx)\Big{)}.\end{split}\]

Lemma 4.6 is a direct consequence of Lemma 3.2. Given Lemma 4.5, we also need some auxiliary lemmas to prove Lemma 4.3.

**Lemma 4.7**.: _For \(\alpha\in[1,7]\) and \(y\geq\frac{\sqrt{3}}{2}\), it holds that_ \[2y^{\frac{3}{2}}\frac{\pi^{2}}{\alpha}\sum_{n=2}^{\infty}n^{4}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{4}e^{-\pi n^{2}y\alpha})\geq 3\pi y^{\frac{1}{2}}\sum_{n=2}^{\infty}n^{2}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{2}e^{-\pi n^{2}y\alpha}).\]

Proof.: Denote \[\mathcal{B}_{n}(\alpha;y):=2y^{\frac{3}{2}}\frac{\pi^{2}}{\alpha}n^{4}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{4}e^{-\pi n^{2}y\alpha})-3\pi y^{\frac{1}{2}}n^{2}(e^{-\pi n^{2}\frac{y}{\alpha}}-\alpha^{2}e^{-\pi n^{2}y\alpha}). \tag{4.4}\] Then it is equivalent to prove that \[\sum_{n=2}^{\infty}\mathcal{B}_{n}(\alpha;y)>0.\] In fact, we shall show that \[\mathcal{B}_{n}(\alpha;y)>0,\ \ \text{for}\ \ n\geq 2,\alpha\in[1,7]\ \ \text{and}\ \ y\geq\frac{\sqrt{3}}{2}.\] By (4.4), one has \[\mathcal{B}_{n}(\alpha;y)=2\pi y^{\frac{1}{2}}n^{2}e^{-\pi n^{2}\frac{y}{\alpha}}\Big{(}\frac{\pi}{\alpha}yn^{2}-\frac{3}{2}-\alpha^{2}(\alpha\pi yn^{2}-\frac{3}{2})e^{-\pi yn^{2}(\alpha-\frac{1}{\alpha})}\Big{)}.\] To show that \(\mathcal{B}_{n}(\alpha;y)\) is nonnegative in the desired region, it is equivalent to show that \(\frac{\pi}{\alpha}yn^{2}-\frac{3}{2}-\alpha^{2}(\alpha\pi yn^{2}-\frac{3}{2})e^{-\pi yn^{2}(\alpha-\frac{1}{\alpha})}\) is nonnegative in the desired region. This is similar to the proof of Lemma 4.4.
We consider the function \[\frac{\pi}{\alpha}x-\frac{3}{2}-\alpha^{2}(\alpha\pi x-\frac{3}{2})e^{-\pi x(\alpha-\frac{1}{\alpha})},\quad x=yn^{2}\geq 2\sqrt{3}.\] Since \(yn^{2}\geq 2\sqrt{3}\), one has \[\frac{\pi}{\alpha}yn^{2}-\frac{3}{2}-\alpha^{2}(\alpha\pi yn^{2}-\frac{3}{2})e^{-\pi yn^{2}(\alpha-\frac{1}{\alpha})}\geq 2\sqrt{3}\frac{\pi}{\alpha}-\frac{3}{2}-\alpha^{2}(2\sqrt{3}\pi\alpha-\frac{3}{2})e^{-2\sqrt{3}\pi(\alpha-\frac{1}{\alpha})}=(\alpha^{2}-1)\frac{2\sqrt{3}\frac{\pi}{\alpha}-\frac{3}{2}-\alpha^{2}(2\sqrt{3}\pi\alpha-\frac{3}{2})e^{-2\sqrt{3}\pi(\alpha-\frac{1}{\alpha})}}{\alpha^{2}-1}\geq 0.00113927433(\alpha^{2}-1)\ \ \text{for}\ \ \alpha\in[1,7].\] Here \(\frac{2\sqrt{3}\frac{\pi}{\alpha}-\frac{3}{2}-\alpha^{2}(2\sqrt{3}\pi\alpha-\frac{3}{2})e^{-2\sqrt{3}\pi(\alpha-\frac{1}{\alpha})}}{\alpha^{2}-1}\) has a removable singularity at \(\alpha=1\) and \[\lim_{\alpha\to 1}\frac{2\sqrt{3}\frac{\pi}{\alpha}-\frac{3}{2}-\alpha^{2}(2\sqrt{3}\pi\alpha-\frac{3}{2})e^{-2\sqrt{3}\pi(\alpha-\frac{1}{\alpha})}}{\alpha^{2}-1}=81.84546604\cdots.\] **Lemma 4.8**.: _For \(\alpha\in[1,1.2]\) and \(y\geq\frac{\sqrt{3}}{2}\),_ \[\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(-1)^{mn}n^{4}\big(e^{-\pi y(m^{2}\alpha+\frac{n^{2}}{\alpha})}-\alpha^{4}e^{-\pi y(n^{2}\alpha+\frac{m^{2}}{\alpha})}\big)\geq(1-B)(\alpha^{4}-1)e^{-\pi y(\alpha+\frac{1}{\alpha})}.\] _Here the constant \(B\) is defined in (3.40) as_ \[B=\frac{2^{6}\alpha_{0}\pi ye^{-3\pi y\alpha_{0}}}{1-2^{6}e^{-5\pi y\alpha_{0}}},\] _where \(\alpha_{0}\in(\frac{1}{\alpha},\alpha)\)._ The proof of Lemma 4.8 is very similar to that of Lemmas 3.10 and 3.11, hence we omit the details here. Proof of Lemma 4.3.: It follows from Lemmas 4.5, 4.7 and 4.8. Therefore, the proof of Lemma 4.2 is complete. Region \(\mathcal{R}_{c}\): estimate of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) In this subsection, we shall prove that **Lemma 4.9**.: _For \((\alpha,y)\in\mathcal{R}_{c}\), one has \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)>0\)._ The proof of Lemma 4.9 reduces to the following Lemmas 4.10, 4.11 and 4.12. **Lemma 4.10**.: _For \((\alpha,y)\in\mathcal{R}_{c}\), one has_ \[\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\geq\frac{3}{2}y^{\frac{1}{2}}\vartheta_{X}(\frac{y}{\alpha};0)+y^{\frac{3}{2}}\frac{1}{\alpha}\vartheta_{XX}(\frac{y}{\alpha};0)-2\pi y^{\frac{3}{2}}\alpha^{3}\vartheta(\frac{y}{\alpha};\frac{1}{2})(1+\epsilon_{c,3})e^{-\pi y}+2y^{\frac{3}{2}}\frac{1}{\alpha}(1+\epsilon_{c,4})\vartheta_{XX}(\frac{y}{\alpha};\frac{1}{2})e^{-\alpha\pi y}.\] _Here \(\epsilon_{c,3}\) and \(\epsilon_{c,4}\) are very small and located in Lemmas 4.13 and 4.14 respectively._ The proof of Lemma 4.10 is based on Lemmas 4.6, 4.13 and 4.14. We further simplify the lower bound of \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\) in Lemma 4.10.
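The two numerical constants appearing in the proof of Lemma 4.7 above can be reproduced with a short computation; this is an illustrative check, not part of the proof:

```python
import math

C = 2 * math.sqrt(3) * math.pi  # the constant 2*sqrt(3)*pi

def g(a):
    """The fraction with the removable singularity at a = 1 in the
    proof of Lemma 4.7."""
    num = C / a - 1.5 - a**2 * (C * a - 1.5) * math.exp(-C * (a - 1 / a))
    return num / (a**2 - 1)

print(C**2 - 3.5 * C + 1.5)  # limit as a -> 1: 81.8454660...
print(g(7.0))                # ~0.00113927..., matching the constant above
print(min(g(1 + 6 * k / 10000) for k in range(1, 10001)))  # min over (1, 7]
```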
**Lemma 4.11**.: _For \((\alpha,y)\in\mathcal{R}_{c}\), one has_ \[\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\geq 2\pi y^{\frac{1}{2}}e^{-\frac{\pi y}{\alpha}}\cdot\Big(\frac{\pi y}{\alpha}-\frac{3}{2}-(1+\epsilon_{c,3})y\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})}-2(1+\epsilon_{c,4})\frac{\pi y}{\alpha}e^{-\alpha\pi y}\Big).\] _Here \(\epsilon_{c,3}\) and \(\epsilon_{c,4}\) are very small and located in Lemmas 4.13 and 4.14 respectively._ Now we can conclude that \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;z)\) is positive if \((\alpha,y)\in\mathcal{R}_{c}\) by the following **Lemma 4.12**.: _For \((\alpha,y)\in\mathcal{R}_{c}\), one has_ \[\frac{\pi y}{\alpha}-\frac{3}{2}-(1+\epsilon_{c,3})y\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})}-2(1+\epsilon_{c,4})\frac{\pi y}{\alpha}e^{-\alpha\pi y}\geq\frac{1}{2}>0.\] _Here \(\epsilon_{c,3}\) and \(\epsilon_{c,4}\) are very small and located in Lemmas 4.13 and 4.14 respectively._ Proof.: Since \(\frac{y}{\alpha}\geq\frac{5}{6}\), and \(\frac{\pi y}{\alpha}\), \(-ye^{-\pi y(\alpha-\frac{1}{\alpha})}\) and \(-ye^{-\alpha\pi y}\) are monotonically increasing in \(y\), one then has \[\frac{\pi y}{\alpha}-\frac{3}{2}-(1+\epsilon_{c,3})y\alpha^{3}e^{-\pi y(\alpha-\frac{1}{\alpha})}-2(1+\epsilon_{c,4})\frac{\pi y}{\alpha}e^{-\alpha\pi y}\geq\frac{5\pi}{6}-\frac{3}{2}-(1+\epsilon_{c,3})\frac{5}{6}\alpha^{4}e^{-\frac{5\pi}{6}(\alpha^{2}-1)}-2(1+\epsilon_{c,4})\frac{5\pi}{6}e^{-\frac{5\pi}{6}\alpha^{2}}.\] The latter is bigger than \(\frac{1}{2}\) by a basic estimate, and we omit the details here. The proof of Lemma 4.10 is based on the expression in Lemma 4.6; we estimate each part in Lemma 4.6 separately by Lemmas 4.13-4.16. **Lemma 4.13**.: _Assume that \((\alpha,y)\in\mathcal{R}_{c}\). Then_ \[\sum_{n\in\mathbb{Z}}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};\frac{n}{2})\geq 2e^{-\pi\alpha y}\vartheta(\frac{y}{\alpha};\frac{1}{2})(1-\epsilon_{c,1}),\] \[\sum_{n\in\mathbb{Z}}n^{4}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};\frac{n}{2})\leq 2e^{-\pi\alpha y}\vartheta(\frac{y}{\alpha};\frac{1}{2})(1+\epsilon_{c,3}).\] _Here_ \[\epsilon_{c,1}:=\frac{1+\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}}{1-\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}}\cdot\sum_{n=2}^{\infty}n^{2}e^{-\pi\alpha y(n^{2}-1)},\] \[\epsilon_{c,3}:=\frac{1+\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}}{1-\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}}\cdot\sum_{n=2}^{\infty}n^{4}e^{-\pi\alpha y(n^{2}-1)}.\] _Numerically,_ \[\epsilon_{c,1}\leq 5.68\cdot 10^{-4},\ \ \epsilon_{c,3}\leq 2.27\cdot 10^{-3}.\] Proof.: We only prove the first one; the second one is similar, and we omit the details here.
We start with the deformation \[\begin{split}\sum_{n\in\mathbb{Z}}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};\frac{n}{2})&=2\sum_{n=1}^{\infty}n^{2}e^{-\alpha\pi yn^{2}}\vartheta(\frac{y}{\alpha};\frac{n}{2})\\ &=2e^{-\alpha\pi y}\vartheta(\frac{y}{\alpha};\frac{1}{2})\cdot\big(1+\sum_{n=2}^{\infty}n^{2}e^{-\alpha\pi y(n^{2}-1)}\frac{\vartheta(\frac{y}{\alpha};\frac{n}{2})}{\vartheta(\frac{y}{\alpha};\frac{1}{2})}\big).\end{split} \tag{4.5}\] For \(\frac{\vartheta(\frac{y}{\alpha};\frac{n}{2})}{\vartheta(\frac{y}{\alpha};\frac{1}{2})}\), one has \[\frac{\vartheta(\frac{y}{\alpha};\frac{n}{2})}{\vartheta(\frac{y}{\alpha};\frac{1}{2})}\leq\frac{1+\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}}{1-\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}} \tag{4.6}\] since, for any \(x\), \[1-\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}\leq\vartheta(\frac{y}{\alpha};x)\leq 1+\sum_{k=1}^{\infty}e^{-\pi k^{2}\frac{y}{\alpha}}.\] (4.5) and (4.6) yield the result. **Lemma 4.14**.: _If \(\frac{y}{\alpha}\geq\frac{5}{6}\), then_ \[\begin{split}\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})&\geq\vartheta_{X}(\frac{y}{\alpha};0)+2e^{-\pi\alpha y}\vartheta_{X}(\frac{y}{\alpha};\frac{1}{2})(1-\epsilon_{c,2}),\\ \sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{XX}(\frac{y}{\alpha};\frac{n}{2})&\geq\vartheta_{XX}(\frac{y}{\alpha};0)+2e^{-\pi\alpha y}\vartheta_{XX}(\frac{y}{\alpha};\frac{1}{2})(1+\epsilon_{c,4}).\end{split}\] _Here_ \[\begin{split}\epsilon_{c,2}:&=\frac{\sum_{k=1}^{\infty}k^{2}e^{-\pi(k^{2}-1)\frac{y}{\alpha}}}{1-4e^{-3\pi\frac{y}{\alpha}}}\cdot\sum_{n=2}^{\infty}e^{-\pi\alpha y(n^{2}-1)},\\ \epsilon_{c,4}:&=\frac{\sum_{k=1}^{\infty}k^{4}e^{-\pi(k^{2}-1)\frac{y}{\alpha}}}{1-16e^{-3\pi\frac{y}{\alpha}}}\cdot\sum_{n=2}^{\infty}e^{-\pi\alpha y(n^{2}-1)}.\end{split}\] _Numerically,_ \[\epsilon_{c,2}\leq 1.23\cdot 10^{-5},\ \ \epsilon_{c,4}\leq 1.24\cdot 10^{-5}.\] Proof.: The proof of the second one is very similar to that of the first one, so we only provide the proof of the first one here. Deforming the expression, one has \[\begin{split}\sum_{n\in\mathbb{Z}}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})&=\vartheta_{X}(\frac{y}{\alpha};0)+2\sum_{n=1}^{\infty}e^{-\alpha\pi yn^{2}}\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})\\ &=\vartheta_{X}(\frac{y}{\alpha};0)+2e^{-\alpha\pi y}\vartheta_{X}(\frac{y}{\alpha};\frac{1}{2})\cdot\big(1+\sum_{n=2}^{\infty}e^{-\alpha\pi y(n^{2}-1)}\frac{\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})}{\vartheta_{X}(\frac{y}{\alpha};\frac{1}{2})}\big).\end{split} \tag{4.7}\] For \(\frac{\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})}{\vartheta_{X}(\frac{y}{\alpha};\frac{1}{2})}\), one has \[\begin{split}\Big|\frac{\vartheta_{X}(\frac{y}{\alpha};\frac{n}{2})}{\vartheta_{X}(\frac{y}{\alpha};\frac{1}{2})}\Big|&=\frac{|\sum_{k=1}^{\infty}(-1)^{kn}k^{2}e^{-k^{2}\frac{\pi y}{\alpha}}|}{\sum_{k=1}^{\infty}(-1)^{k-1}k^{2}e^{-k^{2}\frac{\pi y}{\alpha}}}\\ &\leq\frac{\sum_{k=1}^{\infty}k^{2}e^{-(k^{2}-1)\frac{\pi y}{\alpha}}}{1+\sum_{k=2}^{\infty}(-1)^{k-1}k^{2}e^{-(k^{2}-1)\frac{\pi y}{\alpha}}}\\ &\leq\frac{\sum_{k=1}^{\infty}k^{2}e^{-(k^{2}-1)\frac{\pi y}{\alpha}}}{1-4e^{-3\frac{\pi y}{\alpha}}}.\end{split} \tag{4.8}\] The result follows by (4.7) and (4.8).
**Lemma 4.15**.: _For \(\frac{y}{\alpha}>\frac{3}{2\pi}\),_ \[\frac{3}{2}y^{\frac{1}{2}}\vartheta_{X}(\frac{y}{\alpha};0)+y^{\frac{3}{2}}\frac{1}{\alpha}\vartheta_{XX}(\frac{y}{\alpha};0)\geq 2\pi y^{\frac{1}{2}}(\frac{\pi y}{\alpha}-\frac{3}{2})e^{-\frac{\pi y}{\alpha}}.\] Proof.: Using the explicit expressions of \(\vartheta_{X}\) and \(\vartheta_{XX}\), one has \[\frac{3}{2}y^{\frac{1}{2}}\vartheta_{X}(\frac{y}{\alpha};0)+y^{\frac{3}{2}}\frac{1}{\alpha}\vartheta_{XX}(\frac{y}{\alpha};0)=2\pi y^{\frac{1}{2}}\cdot\Big((\frac{\pi y}{\alpha}-\frac{3}{2})e^{-\frac{\pi y}{\alpha}}+\sum_{n=2}^{\infty}(\frac{\pi y}{\alpha}n^{4}-\frac{3}{2}n^{2})e^{-\pi n^{2}\frac{y}{\alpha}}\Big).\] The result then follows. Region \(\mathcal{R}_{d}\): estimate of \((\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) In this subsection, we aim to prove that **Lemma 4.17**.: _Assume that \((\alpha,y)\in\mathcal{R}_{d}\). Then \((\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)>0\)._ We postpone the proof of Lemma 4.17 and give the desired estimate we need as follows. **Lemma 4.18**.: _Assume that \((\alpha,y)\in\mathcal{R}_{d}\). Then \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0\)._ Proof.: Notice that \[\partial_{yy}+\frac{2}{y}\partial_{y}=y^{-2}\partial_{y}(y^{2}\partial_{y}).\] Then by Lemma 4.17, one has \[\partial_{y}(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)>0\ \ \text{for}\ \ (\alpha,y)\in\mathcal{R}_{d}. \tag{4.9}\] On the other hand, by Proposition 3.4 of Betermin [10], it holds that \[(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\mid_{y=\frac{\sqrt{3}}{2}}=0\ \ \text{for}\ \ \alpha>0. \tag{4.10}\] (4.9) and (4.10) yield the result. In the rest of this subsection, we aim to prove Lemma 4.17. We first have an identity for \(\theta(\alpha;z)\), see [37, 34]. **Lemma 4.19**.: _It holds that_ \[(\partial_{yy}+\frac{2}{y}\partial_{y})\theta(\alpha;z)=(\pi\alpha)^{2}\sum_{n,m}(n^{2}-\frac{(m+nx)^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+nx)^{2}}{y})}-\frac{2\pi\alpha}{y}\sum_{n,m}n^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+nx)^{2}}{y})}.\] The following lemma is deduced from Lemma 4.19. **Lemma 4.20**.: _We have the differential identity for \(\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\):_ \[(\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)=(\pi\alpha)^{2}\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}+\frac{3}{y}\sum_{n,m}n^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}-\frac{5}{2}\pi\alpha\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}-\frac{2\pi\alpha}{y}\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}.\] Proceeding from Lemma 4.20, we deduce the lower bound of \((\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\).
**Lemma 4.21** (The lower bound of \((\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\)).: _Assume that \((\alpha,y)\in\mathcal{R}_{d}\). Then_ \[(\partial_{yy}+\frac{2}{y}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq\pi\alpha y^{-4}e^{-\frac{\pi\alpha}{y}}\mathcal{L}_{d}(\alpha;y),\] _where_ \[\mathcal{L}_{d}(\alpha;y):=\frac{2\pi\alpha}{y}-5(1+\epsilon_{d,1})+4\pi\alpha(y^{2}-\frac{1}{4})^{2}(y+\frac{1}{4y})e^{-\pi\alpha(y-\frac{3}{4y})}-8(1+\epsilon_{d,2})y^{3}(y+\frac{1}{4y})e^{-\pi\alpha(y-\frac{3}{4y})}.\] Lemma 4.17 is then proved by Lemma 4.21 and the following Lemma 4.22. Lemma 4.21 is proved by Lemma 4.20 and Lemmas 4.23-4.26. **Lemma 4.22** (The positiveness of the lower bound function in Lemma 4.21).: _Assume that \((\alpha,y)\in\mathcal{R}_{d}\). Then_ \[\mathcal{L}_{d}(\alpha;y)>0.\] Proof.: We divide the proof into two cases, case a: \(y\geq 1\), and case b: \(y\in[\frac{\sqrt{3}}{2},1]\). For case a, \(y\geq 1\): one has \(4\pi\alpha(y^{2}-\frac{1}{4})^{2}-8(1+\epsilon_{d,2})y^{3}\geq 0\) since \(\alpha\geq 1.2\), and then \(\mathcal{L}_{d}(\alpha;y)>0\) follows immediately since \(\frac{2\pi\alpha}{y}-5(1+\epsilon_{d,1})>0\). For case b, \(y\in[\frac{\sqrt{3}}{2},1]\): one checks that \(\frac{\partial}{\partial\alpha}\mathcal{L}_{d}(\alpha;y)>0\) for \(\alpha\geq 1.2\). Then \[\begin{split}\mathcal{L}_{d}(\alpha;y)\geq&\frac{2.4\pi}{y}-5(1+\epsilon_{d,1})+4.8\pi(y^{2}-\frac{1}{4})^{2}(y+\frac{1}{4y})e^{-1.2\pi(y-\frac{3}{4y})}\\ &-8(1+\epsilon_{d,2})y^{3}(y+\frac{1}{4y})e^{-1.2\pi(y-\frac{3}{4y})}.\end{split} \tag{4.11}\] The latter explicit function in (4.11) has a positive lower bound (numerically about \(2\)) for \(y\in[\frac{\sqrt{3}}{2},1]\), and the result then follows. In the following Lemmas 4.23-4.26, we shall analyze each part of the identity in Lemma 4.20. **Lemma 4.23** (A lower bound of double sum: first kind).: \[\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\geq\frac{2}{y^{5}}e^{-\frac{\pi\alpha}{y}}+4(1-\frac{1}{4y^{2}})^{2}(y+\frac{1}{4y})e^{-\pi\alpha(y+\frac{1}{4y})}.\] Proof.: The double sum evaluates at \[(m,n)\in\{(1,0),(-1,0)\}\ \text{ contributing }\ \frac{1}{y^{5}}e^{-\pi\frac{\alpha}{y}}\ \text{ each}\] and \[(m,n)\in\{(0,1),(0,-1),(1,-1),(-1,1)\}\ \text{ contributing }\ (1-\frac{1}{4y^{2}})^{2}(y+\frac{1}{4y})e^{-\pi\alpha(y+\frac{1}{4y})}\ \text{ each}.\] The remaining terms of the double sum are all positive, and hence the result follows. **Lemma 4.24** (A lower bound of double sum: second kind).: \[\sum_{n,m}n^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\geq 4e^{-\pi\alpha(y+\frac{1}{4y})}.\] Proof.: The double sum can be evaluated at \[(m,n)\in\{(0,1),(0,-1),(1,-1),(-1,1)\}\ \text{ contributing }\ e^{-\pi\alpha(y+\frac{1}{4y})}\ \text{ each}.\] The remaining terms of the double sum are all positive, and hence the result follows.
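Returning to case b in the proof of Lemma 4.22 above, the positivity of the explicit function in (4.11) can be illustrated numerically. The sketch below is a sanity check, not part of the proof; the \(\epsilon\)'s are set to the numerical caps stated in Lemmas 4.25 and 4.26 below:

```python
import math

E1, E2 = 3.92e-4, 9.27e-4  # numerical caps on eps_{d,1} and eps_{d,2}

def lower_bound(y):
    """The explicit function of y on the right-hand side of (4.11)."""
    decay = math.exp(-1.2 * math.pi * (y - 3 / (4 * y)))
    return (2.4 * math.pi / y - 5 * (1 + E1)
            + 4.8 * math.pi * (y**2 - 0.25)**2 * (y + 1 / (4 * y)) * decay
            - 8 * (1 + E2) * y**3 * (y + 1 / (4 * y)) * decay)

y0 = math.sqrt(3) / 2
ys = [y0 + (1 - y0) * k / 1000 for k in range(1001)]
print(min(lower_bound(y) for y in ys))  # ~2.05 > 0 on [sqrt(3)/2, 1]
```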
**Lemma 4.25** (An upper bound of double sum: third kind).: \[\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\leq(1+\epsilon_{d,1})\frac{2}{y^{4}}e^{-\frac{\pi\alpha}{y}}+4(1-\frac{1}{4y^{2}})^{2}e^{-\pi\alpha(y+\frac{1}{4y})},\] _where_ \[\epsilon_{d,1}\leq 4y^{4}e^{-\pi\alpha(4y-\frac{1}{y})}+16y^{4}e^{-4\pi\alpha y}\leq 3.92\cdot 10^{-4}.\] Proof.: We deform the double sum as \[\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}=\sum_{p\equiv q(\mod 2)}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}. \tag{4.12}\] One then splits the double sum into four parts as \[(p,q)\in(a):p=\pm 1,q=\pm 1;(b):p=0,q=\pm 2;(c):p=\pm 2,q=0\ \text{ and }\ (d):p\geq 2,q\geq 2.\] Continuing with (4.12), one has \[\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}=4(1-\frac{1}{4y^{2}})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}+\frac{2}{y^{4}}e^{-\frac{\pi\alpha}{y}}+8e^{-4\pi\alpha y}+\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}. \tag{4.13}\] The last term in (4.13) is very small and can be controlled by \[\begin{split}&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ \leq&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}(p^{4}+\frac{q^{4}}{16y^{4}})e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ \leq&\sum_{p\geq 2}p^{4}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}e^{-\frac{\pi\alpha}{4y}q^{2}}+\frac{1}{16y^{4}}\sum_{p\geq 2}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}q^{4}e^{-\frac{\pi\alpha}{4y}q^{2}}\\ \leq& 16e^{-\pi\alpha(4y+\frac{1}{y})}\cdot d(\alpha;y).\end{split} \tag{4.14}\] Here \(d(\alpha;y)\) is bounded by some constant and has the following expression \[d(\alpha;y):=\sum_{p\geq 2}(\frac{p}{2})^{4}e^{-\pi\alpha y(p^{2}-4)}\sum_{q\geq 2}e^{-\frac{\pi\alpha}{4y}(q^{2}-4)}+\frac{1}{16y^{4}}\sum_{p\geq 2}e^{-\pi\alpha y(p^{2}-4)}\sum_{q\geq 2}(\frac{q}{2})^{4}e^{-\frac{\pi\alpha}{4y}(q^{2}-4)}. \tag{4.15}\] Roughly, one has \[d(\alpha;y)\leq 2. \tag{4.16}\] Therefore, by (4.13), (4.14) and (4.16), \[\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\leq 4(1-\frac{1}{4y^{2}})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}+\frac{2}{y^{4}}e^{-\frac{\pi\alpha}{y}}+8e^{-4\pi\alpha y}+32e^{-\pi\alpha(4y+\frac{1}{y})}. \tag{4.17}\] The proof is complete. **Lemma 4.26** (An upper bound of double sum: fourth kind).: \[\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\leq 4(1+\epsilon_{d,2})(y+\frac{1}{4y})e^{-\pi\alpha(y+\frac{1}{4y})},\] _where_ \[\epsilon_{d,2}\leq 16e^{-3\pi\alpha y}(1+e^{-\frac{3\pi\alpha}{4y}})\leq 9.27\cdot 10^{-4}.\] Proof.: We first deform the double sum as \[\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}=\sum_{p\equiv q(\mod 2)}p^{2}(yp^{2}+\frac{q^{2}}{4y})e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}. \tag{4.18}\]
One then splits the double sum into four parts as \[(p,q)\in(a):p=\pm 1,q=\pm 1;(b):p=0,q=\pm 2;(c):p=\pm 2,q=0\;\;\text{and}\;\;(d):p\geq 2,q\geq 2.\] Then by (4.18), \[\begin{split}\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}=& 4(y+\frac{1}{4y})e^{-\pi\alpha(y+\frac{1}{4y})}+16ye^{-4\pi\alpha y}\\ &+\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}p^{2}(yp^{2}+\frac{q^{2}}{4y})e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}. \end{split} \tag{4.19}\] The last term in (4.19) is very small and can be controlled by \[\begin{split}&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}p^{2}(yp^{2}+\frac{q^{2}}{4y})e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ \leq& y\sum_{p\geq 2}p^{4}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}e^{-\frac{\pi\alpha}{4y}q^{2}}+\frac{1}{4y}\sum_{p\geq 2}p^{2}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}q^{2}e^{-\frac{\pi\alpha}{4y}q^{2}}.\end{split} \tag{4.20}\] The result then follows by (4.19) and (4.20) after some simple deformations, and we omit the details here. Region \(\mathcal{R}_{a}\): estimate of \((\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\) In this subsection, we aim to establish that **Lemma 4.27**.: _Assume that \((\alpha,y)\in\mathcal{R}_{a}\). Then \((\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)>0\)._ With Lemma 4.27, one has **Lemma 4.28**.: _Assume that \((\alpha,y)\in\mathcal{R}_{a}\). Then \(\partial_{y}\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0\)._ Proof.: Notice that \[\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha}=\partial_{\alpha}(y^{-2}\partial_{y}(y^{2}\partial_{y})). \tag{4.21}\] By Lemma 3.5, one has \[y^{-2}\partial_{y}(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\mid_{\alpha=1}=0\;\;\text{for}\;\;y>0. \tag{4.22}\] Then by Lemma 4.27, (4.21) and (4.22), \[\partial_{y}(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0\;\;\text{for}\;\;(\alpha,y)\in\mathcal{R}_{a}. \tag{4.23}\] On the other hand, by Proposition 3.4 of Betermin [10], it holds that \[(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\mid_{y=\frac{\sqrt{3}}{2}}=0\;\;\text{for}\;\;\alpha>0. \tag{4.24}\] Then by (4.23) and (4.24), \[(y^{2}\partial_{y})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq 0\;\;\text{for}\;\;(\alpha,y)\in\mathcal{R}_{a}. \tag{4.25}\] (4.25) yields the result. It remains to prove Lemma 4.27. We start from Lemma 4.20.
After simple computation, one has **Lemma 4.29** (An identity for \((\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\)).: \[(\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)=\frac{9}{2}\pi^{2}\alpha\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}+\frac{2\pi^{2}\alpha}{y}\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}-\frac{5\pi}{2}\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}-\frac{5\pi}{y}\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}-\pi^{3}\alpha^{2}\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}.\] Based on Lemma 4.29, we then state the following lemma and postpone its proof to the latter part of this subsection. **Lemma 4.30** (A lower bound function of \((\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\)).: _Assume that \((\alpha,y)\in\mathcal{R}_{a}\). Then_ \[(\partial_{yy\alpha}+\frac{2}{y}\partial_{y\alpha})\mathcal{W}_{\frac{1}{2\pi}}(\alpha;\frac{1}{2}+iy)\geq\frac{\pi}{y^{4}}e^{-\frac{\pi\alpha}{y}}\cdot\mathcal{L}_{a}(\alpha;y).\] _Here_ \[\mathcal{L}_{a}(\alpha;y)=\frac{9\pi\alpha}{y}-5-\frac{2\pi^{2}\alpha^{2}}{y^{2}}+H(\alpha;y)e^{-\pi\alpha(y-\frac{3}{4y})},\] _and_ \[\begin{split} H(\alpha;y)=& 18\pi\alpha(y^{2}-\frac{1}{4})^{2}(y+\frac{1}{4y})+8\pi\alpha y^{3}(y+\frac{1}{4y})^{2}\\ &-10(y^{2}-\frac{1}{4})^{2}-20y^{3}(y+\frac{1}{4y})-4\pi^{2}\alpha^{2}(y^{2}-\frac{1}{4})^{2}(y+\frac{1}{4y})^{2}.\end{split}\] **Lemma 4.31** (The positiveness of the lower bound function in Lemma 4.30).: _Assume that \((\alpha,y)\in\mathcal{R}_{a}\). Then_ \[\mathcal{L}_{a}(\alpha;y)\geq\frac{1}{2}>0.\] Proof.: Since \(\mathcal{R}_{a}\) is a small finite region, we split it into 14 subregions to get the result. By Lemmas 4.30 and 4.31, one gets Lemma 4.27. It remains to prove Lemma 4.30. We start from Lemma 4.29. There are five types of double sums in Lemma 4.29; three of them are estimated in Lemmas 4.23 and 4.25-4.26. We shall estimate the remaining two in the latter part of this subsection. Lemma 4.30 then follows from Lemmas 4.23, 4.25-4.26 and 4.32-4.33. **Lemma 4.32** (A lower bound of double sum).: \[\sum_{n,m}n^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\geq 4(y+\frac{1}{4y})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}.\] Proof.: The double sum can be evaluated at \[(m,n)\in\{(0,1),(0,-1),(1,-1),(-1,1)\}\ \text{ contributing }\ (y+\frac{1}{4y})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}\ \text{ each}.\] The remaining terms of the double sum are all positive, and hence the result follows.
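As an aside to the subregion argument in Lemma 4.31 above, \(\mathcal{L}_{a}\) can be evaluated numerically. The sketch below is purely illustrative and assumes, for the purpose of the grid, that \(\mathcal{R}_{a}\) is contained in \([1,1.2]\times[\frac{\sqrt{3}}{2},1]\); the precise region is defined earlier in the paper, so this assumption may not match it exactly:

```python
import math

def L_a(a, y):
    """The lower-bound function of Lemma 4.30, as printed."""
    s = (y**2 - 0.25)**2          # (y^2 - 1/4)^2
    t = y + 1 / (4 * y)           # y + 1/(4y)
    H = (18 * math.pi * a * s * t + 8 * math.pi * a * y**3 * t**2
         - 10 * s - 20 * y**3 * t - 4 * math.pi**2 * a**2 * s * t**2)
    return (9 * math.pi * a / y - 5 - 2 * math.pi**2 * a**2 / y**2
            + H * math.exp(-math.pi * a * (y - 3 / (4 * y))))

y0 = math.sqrt(3) / 2
grid = [(1 + 0.2 * i / 100, y0 + (1 - y0) * j / 100)
        for i in range(101) for j in range(101)]
print(min(L_a(a, y) for a, y in grid))  # ~5.5 on this grid, well above 1/2
```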
**Lemma 4.33** (An upper bound of double sum).: \[\begin{split}&\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\\ \leq&\frac{2}{y^{6}}e^{-\frac{\pi\alpha}{y}}+4(1-\frac{1}{4y^{2}})^{2}(y+\frac{1}{4y})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}+3\cdot 16^{2}e^{-4\pi\alpha y}.\end{split}\] Proof.: We deform the double sum as \[\begin{split}&\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\\ &=\sum_{p\equiv q(\mod 2)}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}(yp^{2}+\frac{q^{2}}{4y})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}.\end{split} \tag{4.26}\] One then splits the double sum into four parts as \[(p,q)\in(a):p=\pm 1,q=\pm 1;(b):p=0,q=\pm 2;(c):p=\pm 2,q=0\ \text{ and }\ (d):p\geq 2,q\geq 2.\] It follows that \[\begin{split}&\sum_{p\equiv q(\mod 2)}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}(yp^{2}+\frac{q^{2}}{4y})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ =& 4(1-\frac{1}{4y^{2}})^{2}(y+\frac{1}{4y})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}+\frac{2}{y^{6}}e^{-\frac{\pi\alpha}{y}}+2\cdot 16^{2}e^{-4\pi\alpha y}\\ &+\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}(yp^{2}+\frac{q^{2}}{4y})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}.\end{split} \tag{4.27}\] The last term in (4.27) is very small and can be controlled by \[\begin{split}&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}(p^{2}-\frac{q^{2}}{4y^{2}})^{2}(yp^{2}+\frac{q^{2}}{4y})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ =&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}y^{2}(p^{4}-\frac{q^{4}}{16y^{4}})^{2}e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ \leq&\sum_{p\equiv q(\mod 2),p\geq 2,q\geq 2}y^{2}(p^{8}+\frac{q^{8}}{16^{2}y^{8}})e^{-\pi\alpha(yp^{2}+\frac{q^{2}}{4y})}\\ \leq& y^{2}\sum_{p\geq 2}p^{8}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}e^{-\frac{\pi\alpha}{4y}q^{2}}+\frac{1}{16^{2}y^{6}}\sum_{p\geq 2}e^{-\pi\alpha yp^{2}}\sum_{q\geq 2}q^{8}e^{-\frac{\pi\alpha}{4y}q^{2}}\\ \leq& 16^{2}y^{2}e^{-\pi\alpha(4y+\frac{1}{y})}\cdot d_{2}(\alpha;y).\end{split} \tag{4.28}\] Here \(d_{2}(\alpha;y)\) is bounded by some constant and has the following expression \[d_{2}(\alpha;y):=\sum_{p\geq 2}(\frac{p}{2})^{8}e^{-\pi\alpha y(p^{2}-4)}\sum_{q\geq 2}e^{-\frac{\pi\alpha}{4y}(q^{2}-4)}+\frac{1}{16^{2}y^{8}}\sum_{p\geq 2}e^{-\pi\alpha y(p^{2}-4)}\sum_{q\geq 2}(\frac{q}{2})^{8}e^{-\frac{\pi\alpha}{4y}(q^{2}-4)}. \tag{4.29}\] Roughly, one has \[d_{2}(\alpha;y)\leq 2. \tag{4.30}\] Therefore, by (4.27), (4.28) and (4.30), \[\begin{split}&\sum_{n,m}(n^{2}-\frac{(m+\frac{n}{2})^{2}}{y^{2}})^{2}(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})^{2}e^{-\pi\alpha(yn^{2}+\frac{(m+\frac{n}{2})^{2}}{y})}\\ \leq& 4(1-\frac{1}{4y^{2}})^{2}(y+\frac{1}{4y})^{2}e^{-\pi\alpha(y+\frac{1}{4y})}+\frac{2}{y^{6}}e^{-\frac{\pi\alpha}{y}}+2\cdot 16^{2}e^{-4\pi\alpha y}\\ &+2\cdot 16^{2}y^{2}e^{-\pi\alpha(4y+\frac{1}{y})}.\end{split} \tag{4.31}\] Then the desired result follows. ## 5. Proof of Theorems 1.1-1.2 **Proof of Theorem 1.1.** Case 1: \(b=\frac{1}{2\pi}\). This follows from Theorems 3.3 and 4.1. Case 2: \(b<\frac{1}{2\pi}\). It is proved by Lemma 3.1 and Case 1. Case 3: \(b>\frac{1}{2\pi}\). It follows by Lemma 3.2. Indeed, by Lemma 3.2, one has \[\mathcal{W}_{b}(\alpha;z)=\alpha^{-\frac{3}{2}}\sqrt{y}\cdot\Big(\frac{1}{2\pi}-b+o(1)\Big)\to-\infty\ \ \text{as}\ \ y\to+\infty\ \ \text{if}\ \ b>\frac{1}{2\pi},\] which proves the nonexistence result. **Proof of Theorem 1.2.** Case 1: \(b=\sqrt{a}\).
By simple observation, one has the connection between the functional \(\theta(\alpha;z)-\sqrt{a}\theta(a\alpha;z)\) and \(\mathcal{W}_{\frac{1}{2\pi}}(t\alpha;z)\). Indeed, applying the fundamental theorem of calculus in the parameter \(t\), we have \[\theta(\alpha;z)-\sqrt{a}\theta(a\alpha;z)=-\int_{1}^{a}\partial_{t}(\sqrt{t}\theta(t\alpha;z))dt=\pi\int_{1}^{a}\mathcal{W}_{\frac{1}{2\pi}}(t\alpha;z)dt. \tag{5.1}\] See \(\mathcal{W}_{\frac{1}{2\pi}}(t\alpha;z)\) in (3.2). The proof then follows by Theorem 1.1 (or Theorem 3.3) and (5.1). Case 2: \(b<\sqrt{a}\). \[\Big(\theta(\alpha;z)-b\theta(a\alpha;z)\Big)=\Big(\theta(\alpha;z)-\sqrt{a}\theta(a\alpha;z)\Big)+(\sqrt{a}-b)\theta(a\alpha;z). \tag{5.2}\] Then the result follows by (5.2), Case 1, and the fact that \[\min_{z\in\mathbb{H}}\theta(\alpha;z)\ \ \text{is achieved at}\ \ e^{i\frac{\pi}{3}} \tag{5.3}\] by [37]. Case 3: \(b>\sqrt{a}\). By Lemma 3.4, for all \(\alpha>0\), \[\theta(\alpha;z)-b\theta(a\alpha;z)=\sqrt{\frac{y}{a\alpha}}\cdot\Big(\sqrt{a}-b+o(1)\Big)\to-\infty\ \ \text{as}\ \ y\to+\infty\ \ \text{if}\ \ b>\sqrt{a},\] which proves the nonexistence result. **Acknowledgements.** The research of S. Luo is partially supported by NSFC (Nos. 12261045, 12001253) and the double thousands plan of Jiangxi (jxsq2019101048). The research of J. Wei is partially supported by NSERC of Canada. **Statements and Declarations: there is no conflict of interest.** **Data availability: the manuscript has no associated data.**
2308.06510
Evaluation of cinematic volume rendering open-source and commercial solutions for the exploration of congenital heart data
Detailed anatomical information is essential to optimize medical decisions for surgical and pre-operative planning in patients with congenital heart disease. The visualization techniques commonly used in clinical routine for the exploration of complex cardiac data are based on multi-planar reformations, maximum intensity projection, and volume rendering, which rely on basic lighting models prone to image distortion. On the other hand, cinematic rendering (CR), a three-dimensional visualization technique based on physically-based rendering methods, can create volumetric images with high fidelity. However, there are a lot of parameters involved in CR that affect the visualization results, thus being dependent on the user's experience and requiring detailed evaluation protocols to compare available solutions. In this study, we have analyzed the impact of the most relevant parameters in a CR pipeline developed in the open-source version of the MeVisLab framework for the visualization of the heart anatomy of three congenital patients and two adults from CT images. The resulting visualizations were compared to a commercial tool used in the clinics with a questionnaire filled in by clinical users, providing similar definitions of structures, depth perception, texture appearance, realism, and diagnostic ability.
Irum Baseer, Israel Valverde, Abdel H. Moustafa, Josep Blat, Oscar Camara
2023-08-12T09:10:07Z
http://arxiv.org/abs/2308.06510v1
Evaluation of cinematic volume rendering open-source and commercial solutions for the exploration of congenital heart data ###### Abstract Detailed anatomical information is essential to optimize medical decisions for surgical and pre-operative planning in patients with congenital heart disease. The visualization techniques commonly used in clinical routine for the exploration of complex cardiac data are based on multi-planar reformations, maximum intensity projection, and volume rendering, which rely on basic lighting models prone to image distortion. On the other hand, cinematic rendering (CR), a three-dimensional visualization technique based on physically-based rendering methods, can create volumetric images with high fidelity. However, there are a lot of parameters involved in CR that affect the visualization results, thus being dependent on the user's experience and requiring detailed evaluation protocols to compare available solutions. In this study, we have analyzed the impact of the most relevant parameters in a CR pipeline developed in the open-source version of the MeVisLab framework for the visualization of the heart anatomy of three congenital patients and two adults from CT images. The resulting visualizations were compared to a commercial tool used in the clinics with a questionnaire filled in by clinical users, providing similar definitions of structures, depth perception, texture appearance, realism, and diagnostic ability. **Index Terms:** Cinematic rendering--open-source--commercial tool--congenital heart data ## 1 Introduction Congenital heart disease (CHD) is one of the most frequently diagnosed defects, afflicting approximately 0.8% to 1.2% of live births worldwide [2]. As the cardiovascular morphology varies greatly between individual patients, it is important for clinicians to have a comprehensive understanding of the spatial relationship between the cardiac structures, in order to make optimal medical decisions. As a result, there is a growing number of computer-aided software tools available to assist radiologists in this process. One of the widely used methods is volume rendering, which was found to represent human structures in an artificial way, also being prone to image distortions [8] due to its reliance on basic lighting models. On the other hand, cinematic rendering (CR) is a novel post-processing tool [10] that renders the volumetric medical data using physically-based advanced lighting models [12]. CR simulates the casting of billions of light rays from all possible directions to create volumetric images with a remarkable level of realism [6]. Benefits of cinematic vs volume rendering are increasingly being reported in medical applications, such as for faster comprehension of anatomy [9] and for pre-operative planning [17]. Additionally, studies have shown that more accurate visualization of medical data benefits imaging tasks including delineating complex congenital heart pathologies [15]. Apart from its clinical utility, CR also has the potential to be useful in patient communication [7] and education [1]. The most widely used commercial implementation of CR [5] is offered by Siemens Healthineers as part of their syngo.via platform1. Several other license-limited solutions, such as Global Illumination Vitrea by Canon Medical Informatics2 and MeVisLab3, are also available.
Vitrea offers global illumination methods for rendering volumetric data in a photo-realistic manner, while MeVisLab offers a path tracer module, which is a significantly enhanced version of the ExposureRender [12] framework by Thomas Kroes. For instance, MeVisLab has recently been used for post-surgical assessment in oncologic head and neck reconstructive surgery, comparing path tracing and volume rendering techniques [4]. While these vendor-provided solutions often have high rendering capabilities and are utilized in advanced healthcare centers, their cost can be prohibitive for smaller institutions or individual researchers. However, there are some open-source alternatives. Voreen4 and Inviwo5 offer better volumetric rendering capabilities by implementing ray casting with global illumination. Yet, neither of these applications utilizes volumetric path tracing or equivalent state-of-the-art volumetric rendering techniques. Another open-source and freely available solution supporting CR in web browsers is VolView [18]. These solutions are typically free to use, customizable to meet specific needs, and can be used by anyone with an internet connection, regardless of their location or financial resources. While these solutions are affordable and flexible, there is a lack of research comparing different solutions in cardiac applications to identify the strengths and weaknesses of open-source software tools as compared to commercial solutions. Footnote 1: [https://www.siemenshealthiness.com/digital-health-solutions/cincincin-rending](https://www.siemenshealthiness.com/digital-health-solutions/cincincin-rending) Footnote 2: [https://www.vitatimages.com/global-illumination/](https://www.vitatimages.com/global-illumination/) Footnote 3: [https://www.mevislab.de/download/](https://www.mevislab.de/download/) Footnote 4: [https://www.uni-muenster.de/Voreen](https://www.uni-muenster.de/Voreen) Footnote 5: [http://www.inviwo.org](http://www.inviwo.org) In this study, we utilized a free version of MeVisLab to design a pipeline for cinematically rendering a CHD dataset. We conducted a detailed evaluation of several critical parameters to enhance the shape and depth perception of the heart anatomy. Furthermore, we assessed the performance of the developed open-source rendering pipeline by comparing it with a commercial solution available in clinics. This evaluation was conducted using a questionnaire filled out by cardiology experts. ## 2 Materials and methods ### Patient cases and reconstructions We included clinical data of 3 congenital heart disease patients who underwent CT imaging for diagnosis or treatment, along with CT data of two normal adult hearts. A brief summary of patient data is provided in Table 1. For creating CR visualizations from CT data, anonymized DICOM reconstructions were transferred to both used environments: MeVisLab and a workstation with prototype commercial software (syngo.via cinematic VRT, Siemens Healthineers). Each case was carefully displayed using the same zooming and rotation features in both software tools; the cutting tool was applied to eliminate any bones and remaining lines that could obscure the view. To ensure comparability, the generated reconstructions were captured using the same angle of view, color, and opacity settings (see Figure 1).
### Cinematic rendering pipeline and sensitivity analysis To build the cinematic rendering pipeline, the MeVisLab software (version 3.5.0) was installed on a personal computer (AMD FX (tm) eight-core processor, 3.50 GHz, 32 GB RAM, 64-bit operating system). The pipeline involved a visual programming approach, combining various modules to load imaging data, perform pre-processing, design transfer functions, set material and lighting properties, and render the volume followed by post-processing. These steps are briefly explained below. **Data loading and pre-processing.** The first step of the rendering involved loading patient data, usually in the form of a series of slices. MeVisLab's DirectDicomImport module was used to directly import the image files. To reduce noise in the acquired data, the GaussSmoothing module in MeVisLab was applied. **Transfer function.** After pre-processing, the next step involved designing a transfer function. A transfer function maps voxel values to visual properties like color and opacity, enabling the distinction of anatomical structures in the image. The SoLUTEditor module allowed interactive editing of RGBA lookup tables to design transfer functions. We generated a range of preset transfer functions, saving them in CSV format for easy loading via a Python script. The module's window level and width option allowed for setting a color range for displaying specific anatomical structures, similar to conventional CT reconstructions. **Shading and lighting.** Once the data was mapped to the transfer function, lighting and shading were applied to the volume using a physically-based rendering (PBR) workflow. PBR has two principal workflows [14]: metal/roughness and specular/glossiness. In PBR, shading is achieved using various Bidirectional Reflectance Distribution Functions (BRDFs), which are mathematical models that describe how light reflects off surfaces based on their physical properties. The SoPathTracerMaterial module in MeVisLab provides a range of different materials, of which Material_Microfacet is based on the specular/glossiness workflow of PBR, whereas Material_Principled is based on the metal/roughness workflow. We used "Material_Principled", a physically-based material whose parameters are based on the Disney BRDF model [3]. We adjusted parameters like base color, metallic, roughness, and specular properties to achieve realistic material appearances. For lighting, the pipeline included the SoPathTracerAreaLight and SoPathTracerBackgroundLight modules to simulate realistic lighting effects in the 3D scene. The intensity, color, and position of both lights were carefully adjusted to create the desired lighting effect. In order to determine which parameters should be adjusted and which values achieved the most realistic appearance, a sensitivity analysis was performed. Specifically, the roughness parameter was tested with constant metallic and specular values of 0.5, producing three images with different roughness levels, as shown in Figure 2. The impact of lighting on the final image was also examined by varying the number and position of lights. For instance, adding two light sources in the same position tends to create blurry reflections (over-exposure) as compared to a single light source, which focuses better on the details of the image, as can be seen in Figure 3. Moreover, adjusting the position of light sources improved the visualization of specific regions, including shadows and depth.
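As an illustration of the transfer-function presets described above, the following minimal Python sketch shows how CSV presets of an RGBA lookup table could be loaded; the column layout, file name, and helper names are assumptions for illustration, not the project's actual code:

```python
import csv

def load_transfer_function(path):
    """Load a transfer-function preset stored as CSV rows of
    (intensity, red, green, blue, alpha); the color and alpha values
    are assumed to be in [0, 1], and intensity an image value."""
    lut = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            intensity, r, g, b, a = map(float, row)
            lut.append((intensity, (r, g, b, a)))
    return sorted(lut)  # keypoints ordered by intensity

# Example: pick the color/opacity for a voxel value by nearest keypoint.
def sample(lut, value):
    return min(lut, key=lambda kp: abs(kp[0] - value))[1]
```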
We also tested the effects of area and background lighting, where area lighting is positioned on the top right and the background light source is positioned behind the objects being lit, providing overall illumination of the scene. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Case** & **M/F** & **Age** & **Condition** & **Manufacturer** & **Voxel spacing (mm)** \\ \hline 1 & F & 3 years & Pulmonary atresia & GE Medical & 0.273 \(\times\) 0.273 \(\times\) 0.625 \\ \hline 2 & M & 4 days & Ventricular septal defect & Canon Medical & 0.163 \(\times\) 0.163 \(\times\) 0.250 \\ \hline 3 & M & 2 years & Occluded arterial duct & Siemens & 0.246 \(\times\) 0.246 \(\times\) 0.400 \\ \hline 4 & F & 46 years & Normal heart & GE Medical & 0.559 \(\times\) 0.559 \(\times\) 0.625 \\ \hline 5 & M & 63 years & Normal heart & Philips & 0.576 \(\times\) 0.576 \(\times\) 0.329 \\ \hline \end{tabular} \end{table} Table 1: Patient cases and reconstructions. M/F: male/female. Figure 1: Comparison of volume rendering (VR) and cinematic rendering (CR) visualizations for congenital heart disease patients (Cases 1-3) and normal adults (Cases 4-5). Top row: VR. Middle row: CR from commercial solution. Bottom row: CR from open-source solution. As illustrated in Figure 4, the use of only area lighting tended to overexpose certain areas and obscure details. On the other hand, the use of background white lighting alone resulted in a more diffused and natural-looking image, while combining area and background lighting produced dynamic effects, highlighting specific features of the volume with the area light and creating a sense of depth and dimensionality using the background light. _Image-based lighting (IBL)_. IBL is a computer graphics technique that employs a high-dynamic-range image (HDRI) as a light source. In PBR, background light refers to the light coming from the environment surrounding the rendered scene. We utilized the SoPathTracerBackgroundLight module to support IBL with cubemaps for rendering the volume (Figure 5), demonstrating how different lighting strategies can influence the overall appearance of the rendered image. For instance, the use of two area light sources in the first image provides a more focused and detailed illumination, while IBL with an HDRI creates a more realistic and immersive illumination by capturing lighting and reflection information from the surroundings. **Rendering**. Once color, material, and lighting are adjusted, we integrate the SoPathTracer module for rendering. This module utilizes a Monte Carlo path tracing method to simulate light transport through the anatomy and create photo-realistic images. Path tracing is a common technique in computer graphics [11], generating paths of scattering events from the camera to light sources, resulting in multi-scattering.
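As an aside, the Monte Carlo integration underlying path tracing can be illustrated with a self-contained toy. The sketch below estimates the reflection integral of the rendering equation given in (1) below for a Lambertian (diffuse) BRDF via uniform hemisphere sampling; it is a deliberately simplified didactic illustration, not MeVisLab's SoPathTracer implementation:

```python
import math, random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around +z;
    the pdf of this sampling is 1 / (2*pi)."""
    u, v = random.random(), random.random()
    z = u                        # cos(theta), uniform in [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def reflected_radiance(albedo, incoming_radiance, n_samples=10_000):
    """Monte Carlo estimate of the reflection integral for a Lambertian
    BRDF f_r = albedo / pi at a surface point whose normal is +z;
    `incoming_radiance` maps a direction to L_i."""
    f_r = albedo / math.pi
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n_samples):
        w_i = sample_hemisphere()
        cos_theta = w_i[2]       # dot(normal, w_i) with normal = +z
        total += f_r * incoming_radiance(w_i) * cos_theta / pdf
    return total / n_samples

# Constant white environment: the exact value is albedo * L = 0.5 here.
print(reflected_radiance(albedo=0.5, incoming_radiance=lambda w: 1.0))
```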
The Monte Carlo integration method solves the following multi-dimensional and non-continuous rendering equation (1), considering the properties of the scene and the physical interactions of light with the materials: \[L_{o}(\mathbf{x},\omega_{o})=L_{e}(\mathbf{x},\omega_{o})+\int_{\Omega}f_{r}(\mathbf{x},\omega_{i},\omega_{o})L_{i}(\mathbf{x},\omega_{i})|\cos\theta|\,\mathrm{d}\omega_{i}, \tag{1}\] where \(L_{o}(\mathbf{x},\omega_{o})\) and \(L_{e}(\mathbf{x},\omega_{o})\) are the outgoing and emitted (from the surface) radiances, respectively, at point \(\mathbf{x}\) in direction \(\omega_{o}\), \(f_{r}(\mathbf{x},\omega_{i},\omega_{o})\) is the bidirectional reflectance distribution function (BRDF) that describes how much light arriving from direction \(\omega_{i}\) is reflected in direction \(\omega_{o}\), \(L_{i}(\mathbf{x},\omega_{i})\) is the incoming radiance at point \(\mathbf{x}\) from direction \(\omega_{i}\), and \(\theta\) is the angle between the surface normal and \(\omega_{i}\). After setting up all the light sources and material properties, the user can interact with the rendering and adjust the camera projection type. Furthermore, post-processing tools such as the SoPostEffectAmbientOcclusion module were applied to improve the shadowing effect and depth perception. Moreover, the SoVolumeCutting and clip plane modules were added to allow image editing, enabling the exposure and display of specific regions of interest, as well as isolating the heart from adjacent structures such as bones and vessels. ## 3 Evaluation of cinematic rendering ### Assessment protocol The overall evaluation consisted of subjective assessments of photo-realistic static snapshots. Three independent domain experts (two cardiac radiologists with 8 years of experience and one pediatric cardiologist with 15 years of experience) conducted the evaluations. The snapshots were acquired in such a way as to enable independent ratings for various anatomical structures, including the atria, ventricles, great arteries, and coronary arteries. A score was required for the snapshots on a Likert scale from 1 to 5 (from very unsatisfied to very satisfied). Specifically, five questions were asked per case, related to the following visual characteristics, adapted from [16, 13]: * _Definition of structure_ describes the sharpness of the edges, e.g., for performing an anatomical measurement. * _Depth perception_ describes the ability to perceive spatial relationships in 3D (e.g., anterior/posterior). * _Texture appearance_ refers to the appearance of the surfaces in terms of their degree of roughness and metalness. * _Fidelity_ is a characteristic analyzing the sensation of resembling real cardiac tissue on screen.
* _Diagnostic ability_ refers to the effectiveness of the rendered images in supporting clinical diagnosis. Figure 2: Effects of different values of roughness (r): (a) r = 0 results in a completely shiny and metallic surface; (b) r = 0.5 introduces some subtle variations in the surface texture, resulting in a more natural appearance; and (c) r = 1 shows a highly rough surface, leading to a matte appearance; from left to right, metallic and specular are kept equal to 0.5. Figure 3: Effects of single and multiple lights at the same position. Volume is illuminated with (a) a single light, and (b) two lights. Figure 4: Effects of area and background lighting. Volume is illuminated with: (a) area lighting positioned on the top right; (b) background white lighting; and (c) a combination of area and background lighting. Figure 5: Cinematic renderings using different lighting strategies: (a) use of two area light sources, (b-c) use of image-based lighting with two different high-dynamic-range images: cloudy sky and a Vasamu-sum Humans6. The questionnaire also included questions about using open-source and commercial solutions for CR. Two questions used a Likert scale to assess the reliability of open-source tools and whether the investigators would recommend them to others. The third question asked for an open-ended opinion about the preference for using open-source or commercial tools for advanced rendering. ### Results Figure 1 presents the conventional volume renderings alongside the cinematic renderings of all five analyzed cases. The top row showcases the volume rendering, while the middle and bottom rows display the cinematic renderings created using the commercial solution and the open-source framework, respectively. In the open-source framework, all cases underwent CR with two area light sources, background white light, and distinct material properties, followed by a post-processing step. Furthermore, noise removal was applied during preprocessing for the congenital cases, as they often had higher noise levels compared to normal adult hearts. Table 2 complements these visual results by providing a comprehensive overview of our evaluation findings. The analysis was performed by averaging the scores of all anatomical structures for each characteristic. All investigators found the renderings from the commercial solution to be superior or equivalent to the open-source reconstructions for all analyzed anatomical structures and visual characteristics. However, both solutions obtained satisfaction scores of a similar scale, with no significant differences overall. Case 3 notably highlights the importance of CR. Here, the occluded arterial duct (indicated by the red arrow) is challenging to identify using only VR; however, both alternative methods offer comparable visual results. The analysis further reveals minor distinctions in each visual characteristic assessment for the great arteries, with differences of approximately 0.33 and 0.34. Interestingly, the texture appearance exhibits remarkable equivalence, emphasizing the effectiveness of both solutions in capturing texture details. Moreover, case 2, involving a ventricular septal defect, exhibited enhanced visual appeal when rendered using the open-source solution; however, the analysis indicates that the commercial CRs performed better across all visual characteristics. Furthermore, in perceiving the depth of anatomical structures, both CRs performed equally. Additionally, the evaluators found that the edges of anatomical structures were better visualized with the commercial tool, with a mean difference of 0.34 in the definition-of-structure characteristic with respect to the open-source renderings. We can also observe that the lowest scores for both CR solutions corresponded to the definition of the coronary arteries, due to their complex anatomy. ## 4 Discussion and conclusions Cinematic rendering for complex cardiac data is a promising 3D visualization tool, which is mainly performed with commercial tools in clinical environments. These tools have high-performance capabilities but are relatively costly, thus not being accessible to everyone. In the present study, we have designed a photo-realistic rendering pipeline using the open-source SDK version of MeVisLab as an alternative.
To achieve the best possible cinematic rendering visualizations, we performed a sensitivity analysis to identify the most relevant parameters (e.g., material properties) and their effect on the final results. A limitation of fixing values for material properties is that the same value will not result in the same texture appearance for each case, since materials behave differently for each anatomy, and the result also depends on the image quality [9]. We also analyzed the effects of lighting and found that the number of light sources and their positions are crucial for creating visually compelling and informative images with higher depth perception. Moreover, area lights offer focused and detailed lighting, while background lighting with an HDRI provides more realistic and immersive illumination by capturing lighting and reflection information from the surrounding environment. An evaluation protocol was jointly designed with cardiologists to compare the CRs provided by the developed open-source pipeline and by a commercial solution available in a hospital environment, assessing several visual characteristics of different cardiac structures. Results obtained by three independent cardiologists were consistent in recognizing an overall superior performance of the commercial solution, with slightly higher scores on the Likert scale, especially for the definition of structures, texture appearance, and diagnostic ability. However, the open-source solution had several visual characteristics with a 4.00 or beyond, being similar to the commercial alternative for fidelity and depth perception. Both types of CRs performed worst for the definition of structures due to the low scores given to the coronary arteries. The cardiologists also expressed their satisfaction with the reliability of open-source rendering tools and their recommendation to others for advanced rendering purposes. Their overall preference was for open-source tools, mainly due to their cost-effectiveness. Computationally speaking, both solutions were highly dependent on suitable hardware. The quality of the renderings was dependent on the number of rays to be traced, which in turn depended on the computational power. For example, on a workstation with an RTX 1050 GPU, the image was completely rendered in 29 sec (300 iterations) with the open-source solution. However, with an NVIDIA RTX A6000 GPU, it only took 3 seconds to completely render the same image. The main strength of the open-source CR pipeline developed in the study is that it allows continuous improvement (GitHub Project), being a "white box" tool enabling users to tune parameters such as transfer functions, materials, and lighting to obtain better visualizations, as per clinicians' requirements for specific patients. Considering the accessibility of the open-source solution and the similarity of the corresponding satisfaction scores to the commercial tool, the developed pipeline is an exciting alternative, mainly for educational purposes and to support medical diagnosis as a pre-operative planning tool. Future work will be devoted to a more extensive evaluation with more analyzed cases, evaluators, and software tools, including web-based solutions democratizing the use of open-source CR in hospitals independently of their resources. ## Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016496 (SimCardioTest).
\begin{table} \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|} \hline \multicolumn{1}{|c||}{**Characteristics \(\longrightarrow\)**} & \multicolumn{2}{c||}{**Definition of structure**} & \multicolumn{2}{c||}{**Depth perception**} & \multicolumn{2}{c||}{**Texture appearance**} & \multicolumn{2}{c||}{**Fidelity**} & \multicolumn{2}{c|}{**Diagnostic ability**} \\ \cline{2-11} \multicolumn{1}{|c||}{**Anatomical structure \(\downarrow\)**} & **Commercial** & **Open** & **Commercial** & **Open** & **Commercial** & **Open** & **Commercial** & **Open** & **Commercial** & **Open** \\ \hline \hline **Atria** & 4.00 & 3.67 & 4.67 & 4.33 & 3.67 & 3.33 & 3.67 & 3.33 & 4.67 & 4.00 \\ \hline **Ventricles** & 4.00 & 3.33 & 4.67 & 4.33 & 4.00 & 3.67 & 4.00 & 3.67 & 4.67 & 4.00 \\ \hline **Great Arteries** & 4.33 & 4.00 & 4.67 & 4.33 & 4.67 & 4.67 & 4.00 & 3.67 & 4.33 & 4.00 \\ \hline **Coronary Arteries** & 2.33 & 2.33 & 4.67 & 4.33 & 4.67 & 4.33 & 3.67 & 3.33 & 4.33 & 4.00 \\ \hline \hline **Average** & **3.67** & **3.33** & **4.67** & **4.33** & **4.25** & **4.00** & **3.83** & **3.50** & **4.50** & **4.00** \\ \hline \end{tabular} \end{table} Table 2: Likert's scale evaluation of cinematic renderings across all cardiac structures and characteristics from the commercial and open-source solutions.
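As a quick arithmetic check (not part of the paper), the Average row of Table 2 can be reproduced from the per-structure scores; the only discrepancy comes from the table's two-decimal rounding:

```python
scores = {  # (commercial, open-source) Likert means from Table 2, per structure
    "Definition of structure": [(4.00, 3.67), (4.00, 3.33), (4.33, 4.00), (2.33, 2.33)],
    "Depth perception":        [(4.67, 4.33), (4.67, 4.33), (4.67, 4.33), (4.67, 4.33)],
    "Texture appearance":      [(3.67, 3.33), (4.00, 3.67), (4.67, 4.67), (4.67, 4.33)],
    "Fidelity":                [(3.67, 3.33), (4.00, 3.67), (4.00, 3.67), (3.67, 3.33)],
    "Diagnostic ability":      [(4.67, 4.00), (4.67, 4.00), (4.33, 4.00), (4.33, 4.00)],
}
for name, rows in scores.items():
    commercial = sum(c for c, _ in rows) / len(rows)
    open_src = sum(o for _, o in rows) / len(rows)
    print(f"{name}: commercial {commercial:.2f}, open {open_src:.2f}")
# Matches the Average row up to rounding (e.g., 3.66 vs the table's 3.67
# for the first entry, since the inputs are themselves rounded thirds).
```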
2305.09807
On Dataset Transferability in Active Learning for Transformers
Active learning (AL) aims to reduce labeling costs by querying the examples most beneficial for model learning. While the effectiveness of AL for fine-tuning transformer-based pre-trained language models (PLMs) has been demonstrated, it is less clear to what extent the AL gains obtained with one model transfer to others. We consider the problem of transferability of actively acquired datasets in text classification and investigate whether AL gains persist when a dataset built using AL coupled with a specific PLM is used to train a different PLM. We link the AL dataset transferability to the similarity of instances queried by the different PLMs and show that AL methods with similar acquisition sequences produce highly transferable datasets regardless of the models used. Additionally, we show that the similarity of acquisition sequences is influenced more by the choice of the AL method than the choice of the model.
Fran Jelenić, Josip Jukić, Nina Drobac, Jan Šnajder
2023-05-16T21:10:54Z
http://arxiv.org/abs/2305.09807v2
# On Dataset Transferability in Active Learning for Transformers ###### Abstract Active learning (AL) aims to reduce labeling costs by querying the examples most beneficial for model learning. While the effectiveness of AL for fine-tuning transformer-based pre-trained language models (PLMs) has been demonstrated, it is less clear to what extent the AL gains obtained with one model transfer to others. We consider the problem of transferability of actively acquired datasets in text classification and investigate whether AL gains persist when a dataset built using AL coupled with a specific PLM is used to train a different PLM. We link the AL dataset transferability to the similarity of instances queried by the different PLMs and show that AL methods with similar acquisition sequences produce highly transferable datasets regardless of the models used. Additionally, we show that the similarity of acquisition sequences is influenced more by the choice of the AL method than the choice of the model. ## 1 Introduction Pre-trained language models (PLMs) - large over-parameterized models based on the transformer architecture (Vaswani et al., 2017) and trained on large corpora - are the leading paradigm in modern NLP, yielding state-of-the-art results on a wide range of NLP tasks. However, large models require large amounts of data. _Active learning_(**AL**; Settles, 2009) addresses the data bottleneck problem by improving data labeling efficiency. It employs human-in-the-loop labeling with the model iteratively selecting data points most informative for labeling. Recent work has demonstrated the effectiveness of AL for fine-tuning PLMs (Dor et al., 2020; Griesshaber et al., 2020; Margatina et al., 2022; Yuan et al., 2020; Shelmanov et al., 2021). While AL may considerably reduce model development costs, it also potentially limits the scope of use of the actively acquired datasets. Since data sampling in AL is guided by the inductive bias of the acquisition model, the dataset will typically not represent the original population's distribution (Attenberg and Provost, 2011). This is troublesome if one wishes to use the actively acquired dataset to train a different model (_consumer model_) from the one used for AL (_acquisition model_). If the two models' inductive biases differ, the AL gains can cancel or even revert: the consumer model may perform worse when trained on the actively acquired dataset than on a randomly sampled one. However, the robustness of the actively acquired dataset to the choice of the consumer model is obviously highly desirable, as the acquisition model may become unavailable or dated. The latter is common in NLP, where new and better models are being developed faster than new datasets. However, most AL studies use the same acquisition and consumer models, and dataset transferability is seldom mentioned in AL literature. A notable exception is the work of Lowell et al. (2018), who showed the unreliability of dataset transfer on standard NLP tasks. In this work, we examine the problem of AL dataset transferability for transformer-based PLMs and conduct a preliminary empirical study on text classification datasets. We first probe whether AL gains persist between different transformer-based PLMs, considering several AL methods and datasets. Observing that on most datasets, the transfer works in some cases but fails in others, we investigate the mechanisms underlying transferability. 
We hypothesize a link between AL dataset transferability and how the acquisition and consumer models sample instances. To probe this, we introduce _acquisition sequence mismatch_ (ASM) to characterize to what extent the two models differ in how they sample instances throughout AL iterations. We investigate how ASM affects dataset transferability and how ASM is affected by other AL variables. We show that, while it is generally reasonable to transfer actively acquired datasets between transformer-based PLMs, AL methods that retain low ASM produce more transferable datasets. We also show that the choice of the AL method affects ASM more than the choice of models. To summarize our contributions: we (1) conduct an empirical study on the transferability of actively acquired datasets between transformer-based PLMs, (2) propose a measure to quantify the mismatch in the acquisition sequences of AL models and link this to dataset transferability, and (3) analyze what design choices affect this mismatch. We provide code for the experiments1 with the hope that our results will encourage NLP practitioners to use AL when fine-tuning PLMs and motivate further research into the AL dataset's transferability. Footnote 1: [https://github.com/fjelenic/al-transfer](https://github.com/fjelenic/al-transfer) ## 2 Related Work Although AL has been extensively studied for shallow and standard neural models (without pretraining), research on combining AL and PLMs lags behind. The initial studies showed promise, with AL methods outperforming random sampling for text classification (Dor et al., 2020; Griesshaber et al., 2020). The field is gradually gaining traction with studies demonstrating AL effectiveness even with simple uncertainty-based methods (Gonsior et al., 2022; Schroder et al., 2022). Moreover, PLMs open up new possibilities, such as complementing AL with model adaptation using unlabeled data (Yuan et al., 2020; Margatina et al., 2022). While there is much research on AL for standard scenarios where the acquisition and consumer models are the same, there is little research on AL dataset transfer. Prabhu et al. (2019) demonstrated that combining uncertainty AL strategies with deep models produces sampled datasets with good sampling properties that have a large overlap with support vectors of SVM trained on the entire dataset. Likewise, Farquhar et al. (2021) showed that deep neural models benefit from the sample bias induced by the acquisition model (the opposite is true for shallow models). However, the jury is still out on the effects of sample bias on the consumer model. The most prominent empirical study on AL transfer with neural models (Lowell et al., 2018) predates PLMs. Tsvigun et al. (2022) focused on alleviating the effects of acquisition-consumer mismatch in PLMs by using lightweight distilled models for acquisition and larger versions of the models as consumer models. Even though the study focuses on improving the transferability of actively acquired datasets, the reasons behind the successful transfer are yet to be explored. An older study of AL dataset transferability for text classification and shallow models by Tomanek and Morik (2011) showed that transfer works in most cases but that neither sample nor model similarity explains transferability. Our study explores these characteristics for acquisition-consumer pairings of different PLMs. ## 3 Experimental Setup Our study used four datasets, three models, and three AL methods (cf. Appendix B for details). 
The datasets we used are Subjectivity (**SUBJ**; Pang and Lee, 2004), CoLA (**COLA**; Warstadt et al., 2018), AG-News (**AGN**; Zhang et al., 2015), and TREC (**TREC**; Li and Roth, 2002). The three transformer models we used are BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020). The AL methods we considered are entropy (**ENT**; Settles, 2009), core-set (**CS**; Sener and Savarese, 2017), and BADGE (**BA**; Ash et al., 2019). This gives 108 AL configurations (72 transfer and 36 no-transfer configurations). Furthermore, we ran each configuration with 20 different warm-start sets to account for stochasticity. The AL acquisition was simulated until the budget of 1500 labeled data points was exhausted (model performance for all datasets reached a plateau), labeling 50 data points per step. We assessed dataset transferability using the difference in the area under the \(F_{1}\) curve of the model trained on the actively acquired dataset and the same model trained on a randomly sampled dataset (\(\Delta\mathrm{AUC}\)). We deem the AL dataset transfer successful if \(\Delta\mathrm{AUC}\) is not significantly less than zero and unsuccessful otherwise. We chose \(\Delta\mathrm{AUC}\) to make the notion of transferability independent of when the AL acquisition terminates. On the other hand, as terminating the AL after acquiring too few labeled data is unrealistic, we also report \(\Delta\mathrm{AUC}_{10}\), which is \(\Delta\mathrm{AUC}\) calculated with an offset of 10 iterations (500 labeled instances) of the AL loop. Comparing \(\Delta\mathrm{AUC}_{10}\) to \(\Delta\mathrm{AUC}\) provides insights into how transferability changes through time. ## 4 Results ### Dataset transferability We grouped the 108 AL configurations into three groups based on the sign of the mean \(\Delta\mathrm{AUC}\) value and the p-value of the difference between AUC scores of transfer and random sampling:2 negative (\(\Delta\mathrm{AUC}<\) 0 and p\(<\).05), neutral (p\(\geq\).05), and positive (\(\Delta\mathrm{AUC}\geq\) 0 and p\(<\).05) transfer. The no-transfer AL configurations (where the acquisition and consumer models are the same) are generally successful (25 positive, 9 neutral, and 2 negative configurations as per \(\Delta\mathrm{AUC}\); 33 positive, 2 neutral, and 1 negative configuration as per \(\Delta\mathrm{AUC}_{10}\)). The grouping of the remaining 72 configurations with AL dataset transfer is given in Table 1. We observe that the dataset, the acquisition-consumer model pairing, and the AL method all affect transfer success. Footnote 2: We used either the paired t-test or Wilcoxon signed-rank test, depending on the results of Lilliefors’ test for normality. Evidently, transferability differs across datasets: the transfer is always positive on subj (which is the simplest task we considered in terms of the number of labels, the balance of classes, and the MDL task complexity measure; cf. Appendix B), while most neutral transfers occur on cola. A more interesting picture emerges from the different acquisition-consumer model pairings and AL methods. Most negative transfers are transfers to ELECTRA, while most neutral transfers are those to RoBERTa (perhaps due to it being optimized for robustness). On the other hand, transfer to BERT is positive in most cases, perhaps because BERT's pre-training regime is most similar to that of the other two models.
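For reference, the \(\Delta\mathrm{AUC}\) measure defined in Section 3 reduces to a short computation. Below is a minimal sketch, assuming the per-step \(F_{1}\) scores of the transfer run and the random-sampling baseline are stored as arrays (the function and variable names are illustrative, not taken from our released code):

```python
import numpy as np

def delta_auc(f1_transfer, f1_random, offset=0):
    """Difference in (normalized) area under the F1-vs-step curve between a
    consumer model trained on the actively acquired dataset and the same
    model trained on a randomly sampled dataset.
    offset=10 skips the first 10 AL iterations, giving Delta-AUC_10."""
    a = np.asarray(f1_transfer, dtype=float)[offset:]
    r = np.asarray(f1_random, dtype=float)[offset:]
    # Normalize by the number of remaining steps so that runs evaluated
    # over different horizons remain comparable.
    return np.trapz(a) / len(a) - np.trapz(r) / len(r)
```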
Among the AL methods, entropy mostly makes the transfer negative, most neutral transfers occur with core-set, and BADGE is the best choice for ensuring positive transferability. However, when looking at the later steps of the AL loop, differences between entropy and BADGE vanish, while core-set lags slightly behind. Thus, \(\Delta\mathrm{AUC}\) tends to increase throughout the AL process, suggesting that increasing the amount of sampled data lowers the risk of unsuccessful transfer (cf. Appendix C for additional \(F_{1}\) score analysis). ### Acquisition sequence mismatch We hypothesize there is a link between dataset transferability and the sequence in which data points are acquired for labeling by AL. In particular, we posit that dataset transferability will be successful when the acquisition sequence of the acquisition model does not differ from what the acquisition sequence of a consumer model would be if that model had access to the original dataset. We introduce the _acquisition sequence mismatch_ (ASM) to measure the differences in acquisition sequences. To compute the ASM between two acquisition sequences, we pair the corresponding batches of the two sequences and average their pairwise differences. To measure the difference between a pair of batches, we take the average of the distances of best-matched examples between the batches. To account for the fact that AL methods may choose numerically different yet semantically similar data points, we measure the similarity of acquired instances in representation space. We use GloVe embeddings (Pennington et al., 2014) as a common representation space independent of the choice of acquisition and consumer models and compute the cosine distance between averaged word embeddings. Lastly, we use the Hungarian algorithm (Kuhn, 1955) to construct a bipartite graph between two batches with distance-weighted edges and find the best-matching examples. Formally, we define ASM as follows: \[\frac{1}{T}\sum_{t=1}^{T}\frac{1}{|B_{t}|}\min_{S(B_{A}^{t}),S(B_{B}^{t})}\left(\sum_{i=1}^{|B_{t}|}d(x_{A}^{i},x_{B}^{i})\right) \tag{1}\] where \(T\) is the length of the sequence (the number of steps of the AL loop), \(S(B^{t})\) is the set of all of the permutations of instances in the selected batch at step \(t\), and \(d(x_{A}^{i},x_{B}^{i})\) is the cosine distance between the instance representations from sequences \(A\) and \(B\) matched at position \(i\) of a given batch permutation.
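Concretely, Eq. (1) can be computed with an optimal assignment per batch. A minimal sketch, assuming each acquired batch is represented as a matrix of averaged GloVe word embeddings, one row per instance (names are illustrative, not our released code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def asm(batches_a, batches_b):
    """Acquisition sequence mismatch between two AL runs.

    batches_a, batches_b: lists of (batch_size, dim) arrays, one per AL
    step, holding averaged GloVe embeddings of the acquired instances."""
    step_costs = []
    for emb_a, emb_b in zip(batches_a, batches_b):
        # Pairwise cosine distances between the two batches.
        dist = cdist(emb_a, emb_b, metric="cosine")
        # Hungarian algorithm: the minimum-cost matching realizes the
        # minimum over batch permutations in Eq. (1).
        rows, cols = linear_sum_assignment(dist)
        step_costs.append(dist[rows, cols].mean())
    # Average the per-step batch mismatches over the T steps of the loop.
    return float(np.mean(step_costs))
```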
\begin{table} \begin{tabular}{l|ccc|ccc|c} \hline \hline & \(\Delta^{-}\) & \(\Delta^{0}\) & \(\Delta^{+}\) & \(\Delta_{10}^{-}\) & \(\Delta_{10}^{0}\) & \(\Delta_{10}^{+}\) & \(\Sigma\) \\ \hline subj & 0 & 0 & 18 & 0 & 0 & 18 & 18 \\ cola & 2 & 8 & 8 & 2 & 7 & 9 & 18 \\ agn & 7 & 4 & 7 & 3 & 2 & 13 & 18 \\ trec & 8 & 3 & 7 & 0 & 2 & 16 & 18 \\ \hline R\(\rightarrow\)B & 2 & 2 & 8 & 0 & 1 & 11 & 12 \\ E\(\rightarrow\)B & 2 & 2 & 8 & 0 & 2 & 10 & 12 \\ B\(\rightarrow\)R & 2 & 4 & 6 & 0 & 1 & 11 & 12 \\ E\(\rightarrow\)R & 2 & 4 & 6 & 1 & 2 & 9 & 12 \\ B\(\rightarrow\)E & 5 & 1 & 6 & 2 & 2 & 8 & 12 \\ R\(\rightarrow\)E & 4 & 2 & 6 & 2 & 3 & 7 & 12 \\ \hline ent & 11 & 3 & 10 & 3 & 2 & 19 & 24 \\ cs & 4 & 10 & 10 & 2 & 6 & 16 & 24 \\ ba & 2 & 2 & 20 & 0 & 3 & 21 & 24 \\ \hline \(\Sigma\) & 17 & 15 & 40 & 5 & 11 & 56 & \\ \hline \hline \end{tabular} \end{table} Table 1: Breakdown of datasets, acquisition\(\rightarrow\)consumer model pairs (denoted by initial letters), and AL methods by transferability: negative (\(-\)), neutral (0), and positive (\(+\)) transfer. \(\Delta\mathrm{AUC}\) is shown as \(\Delta\).

Intuitively, ASM assumes that both batches cater to the same informational need of the model, so it calculates how much the instances that should carry out the same role in the batch differ. Given a dataset, we hypothesize ASM may be affected by both the choice of the models and the choice of the AL method. Figure 1 shows that the distributions of ASM values are more alike when grouped by the AL methods than when grouped by the model pairings. To verify this observation, we conducted two Kruskal-Wallis H-tests for each dataset: in the first, populations were determined by the AL method, and we concluded that there was a significant difference in ASM (p\(<\).05); in the second, the populations were determined by the model pairing, and there was no significant difference in ASM (p\(>\).05). This suggests that the choice of AL method affects ASM more than the choice of acquisition-consumer model pairing. ### Acquisition mismatch analysis We found a statistically significant negative correlation between \(\Delta\mathrm{AUC}\) and ASM for each dataset.3 This supports our hypothesis that the lower the mismatch between acquisition sequences of the two models, the higher the transferability of a dataset from one model to the other. Besides ASM, we use another measure for analyzing dataset transferability: the difference between the dataset acquired with AL using the acquisition model and the dataset acquired with AL using the consumer model. We call this measure the _acquired dataset mismatch_ (ADM). Essentially, ADM computes the mismatch between samples similarly to ASM but between the entire datasets obtained after the last sampling step. Footnote 3: Spearman correlation coefficients are \(-0.11\) for subj, \(-0.19\) for cola, \(-0.27\) for agn, and \(-0.38\) for trec, all significant with p\(<\).05. Above we showed that the choice of the AL method affects the ASM. Figure 2 shows that BADGE gives smaller ASM than the other two methods, whereas core-set gives larger ASM than the other two methods.4 However, an intriguing effect emerges when comparing the difference in batches through time and the differences in the entire acquired datasets through time. In the early steps, BADGE gives the highest similarity of acquired datasets among the considered methods, which leads to it having the lowest ASM.
However, in later steps, entropy dominates the similarity of acquired datasets.5 It seems as if entropy acquired similar datasets for different models by taking those models through different sequences of the population distribution. This effect is seen in Table 1, where entropy is the worst method when using \(\Delta\mathrm{AUC}\) to measure transfer success while managing to match BADGE when using \(\Delta\mathrm{AUC}_{10}\). The difference in transferability between entropy and BADGE completely vanishes when looking at the last step of the AL loop (cf. Appendix, Table 3). It is clear that entropy can produce transferable datasets, but it requires more time to do so. We speculate that the effect of BADGE having the lowest ASM yet entropy achieving the lowest ADM could emerge due to the interaction between the AL method and the model's decision boundary. Namely, uncertainty AL methods sample data points on the decision boundary with high overlap with support vectors of the SVM trained on the whole dataset, as pointed out by Prabhu et al. (2019). Since BADGE combines uncertainty and diversity, i.e., it samples data points the model is uncertain about for diverse reasons, it samples along the entire decision boundary at each step, and since the decision boundaries of the models are roughly the same, so are the sampled data points. Entropy, on the other hand, relies solely on uncertainty. Due to its greedy nature, entropy tends to sample similar points because if one data point has high uncertainty, data points similar to it are also going to have high uncertainty (Zhdanov, 2019). This may manifest as sampling local patches of space on the decision boundary. Therefore, entropy may take more time to define the boundary than BADGE because it is forming the boundary from patches of space with the highest uncertainty at a given AL step rather than holistically sampling along the boundary at each step. Since the shape of the decision boundary is more similar between different models than the local interactions along the boundary, entropy has a higher batch mismatch in the early steps. However, once more data is labeled and the boundary becomes stable, both entropy and BADGE start to have a low batch mismatch, as seen in Figure 2. Since entropy is deterministic and never strays from the decision boundary, it ends up having a lower ADM than BADGE. Lastly, we believe that the core-set method has the highest ASM and ADM because it selects data based on diversity in the model's representation space, which is more model-specific than the decision boundary. Further exploring the described interaction is a compelling direction for future work. It may be that AL methods with different acquisition sequences end up acquiring a similar dataset and have high transferability, as in the case of entropy, an uncertainty-based acquisition function. It is also possible that acquired datasets differ between models but that the transfer remains successful because it taps into some other essential aspect of a transferable dataset, as is the case with core-set, a diversity-based acquisition function. However, the best strategy to ensure dataset transferability appears to be a mixture of uncertainty and diversity, as provided by BADGE. This appears to minimize ASM between models, making datasets transferable regardless of the number of AL steps.

Figure 1: Distributions of ASM values for combinations of AL methods and acquisition\(\rightarrow\)consumer model pairs (denoted by initial letters).
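The Kruskal-Wallis comparison used in the ASM analysis above is likewise simple to reproduce. A minimal sketch with SciPy, assuming the per-configuration ASM values have been collected into one array per group, grouped either by AL method or by model pairing (names are illustrative):

```python
from scipy.stats import kruskal

def asm_groups_differ(groups, alpha=0.05):
    """Kruskal-Wallis H-test: do the ASM distributions of the given
    groups (e.g. one array of ASM values per AL method) differ?"""
    stat, p = kruskal(*groups)
    return stat, p, p < alpha

# Illustrative usage on one dataset:
# _, p, differs = asm_groups_differ([asm_entropy, asm_coreset, asm_badge])
```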
## 5 Conclusion We presented an empirical study on the transferability of actively acquired text classification datasets for transformer-based PLMs. Our results indicate no significant risk in transferring datasets, especially for larger amounts of data. We also showed that transfer is largely successful when preserving the sequence and similarity of acquired instances between the models, which is what methods combining uncertainty and diversity acquisition functions seem to do. Transferability appears to differ considerably across datasets, so future work should examine what dataset characteristics are predictive of transfer success. Figure 2: The mismatch between acquired batches (top) and ADM at each step of the AL loop (bottom) for different AL methods. ## Limitations Our study revealed considerable differences in transferability and other measures we considered across different datasets. Nonetheless, the study focused on the differences in transferability arising from the choice of the models and the AL methods rather than the dataset. To eliminate confounding due to datasets, we grouped the results by datasets and analyzed each group separately. Despite this, the scope of our results is limited by the fact that all datasets used are in English and possibly contain their own biases. Even though we showed that it could still be useful to transfer actively acquired datasets between transformer-based PLMs, it is important to keep in mind that actively acquired datasets are not representative of the original data distribution due to the sampling bias introduced by active learning. ## Acknowledgments This research was supported by the AIDWAS KK.01.2.1.02.0285 grant. We thank the anonymous reviewers for their insightful comments and suggestions.
2307.06395
Accretion Properties and Estimation of Spin of Galactic Black Hole Candidate Swift J1728.9-3613 with NuSTAR during its 2019 outburst
Black hole X-ray binaries (BHXRBs) play a crucial role in understanding the accretion of matter onto a black hole. Here, we focus on exploring the transient BHXRB Swift J1728.9-3613, discovered by Swift/BAT and MAXI/GSC during its January 2019 outburst. We present measurements on its accretion properties, long time-scale variability, and spin. To probe these properties we make use of several NICER observations and an unexplored data set from NuSTAR, as well as long term light curves from MAXI/GSC. In our timing analysis we provide estimates of the cross-correlation functions between light curves in various energy bands. In our spectral analysis we employ numerous phenomenological models to constrain the parameters of the system, including flavours of the relativistic reflection model Relxill to model the Fe K$\alpha$ line and the $>15$ keV reflection hump. Our analysis reveals that: (i) Over the course of the outburst the total energy released was $\sim 5.2 \times 10^{44}$~ergs, corresponding to roughly 90\% of the mass of Mars being devoured. (ii) We find a continuum lag of $8.4 \pm 1.9$ days between light curves in the $2-4$ keV and $10-20$ keV bands which could be related to the viscous inflow time-scale of matter in the standard disc. (iii) Spectral analysis reveals a spin parameter of $\sim 0.6 - 0.7$ with an inclination angle of $\sim 45^{\circ}-70^{\circ}$, and an accretion rate during the NuSTAR observation of $\sim 17\% ~L_{\rm Edd}$.
Skye R. Heiland, Arka Chatterjee, Samar Safi-Harb, Arghajit Jana, Jeremy Heyl
2023-07-12T18:26:39Z
http://arxiv.org/abs/2307.06395v1
Accretion Properties and Estimation of Spin of Galactic Black Hole Candidate Swift J1728.9-3613 with _NuSTAR_ during its 2019 outburst ###### Abstract Black hole X-ray binaries (BHXRBs) play a crucial role in understanding the accretion of matter onto a black hole. Here, we focus on exploring the transient BHXRB Swift J1728.9-3613 discovered by Swift/BAT and MAXI/GSC during its January 2019 outburst. We present measurements on its accretion properties, long time-scale variability, and spin. To probe these properties we make use of several NICER observations and an unexplored data set from NuSTAR, as well as long term light curves from MAXI/GSC. In our timing analysis we provide estimates of the cross-correlation functions between light curves in various energy bands. In our spectral analysis we employ numerous phenomenological models to constrain the parameters of the system, including flavours of the relativistic reflection model Relxill to model the Fe K\(\alpha\) line and the \(>15\) keV reflection hump. Our analysis reveals that: (i) Over the course of the outburst the total energy released was \(\sim 5.2\times 10^{44}\) ergs, corresponding to roughly 90% of the mass of Mars being devoured. (ii) We find a continuum lag of \(8.4\pm 1.9\) days between light curves in the \(2-4\) keV and \(10-20\) keV bands which could be related to the viscous inflow time-scale of matter in the standard disc. (iii) Spectral analysis reveals a spin parameter of \(\sim 0.6-0.7\) with an inclination angle of \(\sim 45^{\circ}-70^{\circ}\), and an accretion rate during the NuSTAR observation of \(\sim 17\%\)\(L_{\rm Edd}\). keywords: accretion, accretion discs - black hole physics - relativistic processes - X-rays: individual: Swift J1728.9-3613 ## 1 Introduction Black hole X-ray binaries (BHXRBs) are a class of astrophysical objects in which a black hole and a main sequence star are gravitationally bound. These systems make ideal laboratories for studying the physics of accretion and matter in the presence of an extreme gravitational well, in addition to testing the predictions of general relativity. BHXRBs are further categorized into transient and persistent classes. The transient class remains dormant for most of its lifetime, except for occasional outbursts where the luminosities (\(L_{\rm X}\)) reach beyond \(10^{35}\) erg s\({}^{-1}\) (e.g., Tetarenko et al., 2016). These outbursts and their spectral evolution are traced through the so-called 'q' or hardness-intensity diagram (e.g., Homan et al., 2001; Remillard and McClintock, 2006). A full outburst starts with a low/hard state followed by an intermediate state reaching the high/soft state while rising to the peak luminosity. During the declining phase, the path reverses, ending with the hard state. 'Failed' or hard-state-only outbursts are those where the high/soft state remains absent (see Alabarta et al., 2021, and references therein). Energy dependent variations in count rates are a universal feature of outbursting BHXRBs, and are usually attributed to their accretion properties. A few major physical drivers of accretion are viscosity (e.g., Shakura and Sunyaev, 1973; Smith et al., 2002; Chatterjee et al., 2020), electron cloud temperature and optical depth (Sunyaev and Titarchuk, 1980, 1985), and the Compton cooling rate of the cloud (e.g., Chakrabarti and Titarchuk, 1995; Garain et al., 2012, 2014).
In general, measurements of such parameters are performed using long term light-curve variations and detailed spectral studies using various phenomenological or physical models. Apart from accretion properties, outbursting BHXRBs also provide a chance to measure the intrinsic properties of the black hole itself, e.g. mass (\(M_{\rm BH}\)) and spin (\(a^{*}\)). The dynamical method, wherein mass is obtained using radial velocity measured from absorption lines in the spectrum of the companion star, provides the most accurate measure of the mass (Casares and Jonker, 2014). One can also use dips and eclipses in the light curve to predict the mass ratio and inclination of the binary system (Horne, 1985). However, only around 20 BHXRBs are measured this way (e.g., Corral-Santana et al., 2016). Apart from the dynamical method, one can measure the mass using spectral fitting (e.g., Kreidberg et al., 2012; Torres et al., 2019; Kubota et al., 1998; Shaposhnikov and Titarchuk, 2007; Jana et al., 2022) or the X-ray reverberation method (Mastroserio et al., 2019). Direct measurement of the spin could be carried out by imaging the shape of the photon sphere or radius of the innermost stable circular orbit (ISCO) (Chan et al., 2013). Using radio interferometry, the EHT collaboration directly captured images of the supermassive black holes M 87* and our own Sgr A* (see Event Horizon Telescope Collaboration et al. (2019, 2022) and references therein). For galactic BHXRBs, spin is usually measured by X-ray spectroscopy (e.g., Draghis et al., 2022), though in principle X-ray interferometry could be used to produce EHT-style observations that directly image the shape of the BH shadow in some Seyfert galaxies (Uttley et al., 2020; den Hartog et al., 2020). Within X-ray spectroscopy there are two primary methods that have proved successful in measuring black hole spin, namely continuum fitting (e.g., Zhang et al., 1997; McClintock et al., 2006; Steiner et al., 2014) and relativistic reflection (e.g., Fabian et al., 1989; Miller et al., 2012; Reynolds, 2021). In continuum fitting, the inner edge of the accretion disc or ISCO (\(r_{\rm in}\)) is measured through the inner disc temperature. The optically thick and geometrically thin accretion disc takes the relativistic form prescribed by Novikov & Thorne (1973), where the spin influences \(r_{\rm in}\). For extreme prograde motion (i.e. \(a^{*}\to 1\)), \(r_{\rm in}\), the marginally stable and bound orbits, and the photon sphere merge into the surface of the event horizon. As the inner radius becomes smaller, the excess binding energy loss, spent in heating the accretion disc, becomes increasingly larger approaching \(r_{\rm in}\). The corresponding temperature \(T_{\rm in}\) can then be measured using relativistic models like Kerrbb (Zhang et al., 1997; Li et al., 2005). This method was employed to measure the spins of LMC X-1 (Gou et al., 2009; Mudambi et al., 2020), H 1743-322 (Steiner et al., 2009), GRS 1915+105 (Sreehari et al., 2020), LMC X-1 (Jana et al., 2021), LMC X-3 (Bhuvana et al., 2021), and MAXI J1820+070 (Zhao et al., 2021). In contrast to continuum fitting, the relativistic reflection method instead models the spectral shape using the blurred iron K\(\alpha\) line at 6.4 keV and associated reflection hump above 15 keV (Fabian et al., 1989; George & Fabian, 1991).
This fluorescent line distorts due to the gravitational redshift experienced by the reflected spectra of a spinning black hole, with maximal distortion occurring for a maximally spinning prograde orbit. The 'reflection hump' arises due to the hard X-ray reflection from the accretion disc. Often the irradiation of the disc shows a profile steeper than \(r^{-5}\), invoking a compact corona along the rotation axis of the black hole. This is the so-called 'lamp-post' model of the coronal geometry (Miniutti & Fabian, 2004), and the height and compactness of this corona change alongside other spectral properties (Fabian et al., 2015). As the Doppler effect significantly modifies the observed spectrum with changing inclination (Luminet, 1979; Viergutz, 1993), these models calculate the parameters of the system and modify the iron line profile accordingly. A central assumption invoked in this method is that the inner radius of the accretion disk extends all the way to the innermost stable circular orbit or ISCO. This need not be the case, especially in the low/hard state (e.g. Zdziarski & De Marco, 2020; Done et al., 2007). The relativistic reflection model Relxill (Garcia et al., 2013; Dauser et al., 2014; Garcia et al., 2014) provides a measure of the iron abundance (\(A_{\rm Fe}\)) within the disc, and various flavours of disc/emissivity profiles can be assumed. Within the past few years, Relxill has successfully measured the spin of many galactic black holes, such as MAXI J1535-571 (Miller et al., 2018), Cygnus X-1 (Tomsick et al., 2018), XTE J1752\(-\)223 (Garcia et al., 2018), GX 339-4 (Garcia et al., 2019), MAXI J1631\(-\)479 (Xu et al., 2020), XTE J1908\(-\)094 (Draghis et al., 2021), and MAXI J1813\(-\)095 (Jana et al., 2021). Most recently, Draghis et al. (2022) reported spins of ten new Galactic black hole candidates using the Relxill model. Apart from these two methods, one can measure spin using timing properties, where low-frequency Quasi Periodic Oscillations (QPOs) are assumed to originate from _Lense-Thirring_ precession (Ingram et al., 2009). Swift J1728.9-3613 or MAXI J1728-36 went into outburst on 17 January 2019 and continued until June 2019. The source was reported by Swift/BAT on 28 January 2019 (Barthelmy et al., 2019). However, it was initially discovered by MAXI/GSC on 26 January 2019 (Negoro et al., 2019). On 31 January 2019, MeerKAT detected the radio counterpart of the source with a flux density of \(11.2\pm 0.6\) mJy at 1.28 GHz (Bright et al., 2019). INTEGRAL observed the source on 19 February 2019 and detected the source flux as \(103\pm 4\) mCrab in the \(20-40\) keV range (Ducci et al., 2019). Following the MAXI and Swift detection, NICER has been monitoring the source since 29 January 2019 (Enoto et al., 2019), producing a total of 103 observations. Out of these, \(\sim 70\) of them lie within the outbursting period. An optical counterpart with a magnitude of 16.7 was observed on 28 January 2019 by the 0.6 m BOOTES-3/YA robotic telescope (Hu et al., 2019). Recently, Saha et al. (2022) analyzed NICER spectra of Swift J1728.9-3613 during the outburst and obtained a lower mass limit for the compact object of 4.6 \(M_{\odot}\), assuming \(a^{*}=0\), from the inner radius of the disk and a distance of \(\sim 10\) kpc. This estimate further strengthens Swift J1728.9-3613's candidacy as a BHXRB. NuSTAR observed Swift J1728.9-3613 on 3 February 2019, but this data set has thus far remained unexplored. This is the observation we utilize to estimate the spin and accretion properties.
In this paper, we concentrate on the MAXI/GSC light curve to explore the long time-scale variability. We explored the first two observations of NICER to understand the beginning of the outburst. Later, we systematically examine the NuSTAR data to extract accretion disc properties and spin using numerous models such as Diskbb, Cutoffpl, Nthcomp, and Relxill. The paper is structured as follows. The data analysis process is briefly described in Section 2. Results obtained from the MAXI/GSC light curve are presented in Section 3.1. Spectral analysis with NICER is presented in Section 3.2. Spectral properties are analyzed and constrained in Section 3.3, while Relxill is employed in Section 3.3.3. Finally, we draw our conclusions and discuss our results in Section 4. All errors associated with model parameters are quoted at the 90% confidence level unless otherwise stated. ## 2 Observations and Data Reduction NuSTAR observed Swift J1728.9-3613 on 3 February 2019 for a total exposure of 55.9 ks (see Table 1). NuSTAR is a hard X-ray focusing telescope, consisting of two identical modules: FPMA and FPMB (Harrison et al., 2013). The raw data were reprocessed with the NuSTAR Data Analysis Software (NuSTARDAS, version 1.4.1). Cleaned event files were generated and calibrated by using the standard filtering criteria in the nupipeline task and the latest calibration data files available in the NuSTAR calibration database (CALDB)1. The source and background products were extracted by considering circular regions with radii 60 arcsec and 90 arcsec, at the source coordinates and away from the source, respectively. The spectra and light curves were extracted using the nuproducts task. We re-binned the spectra with 25 counts per bin by using the grppha task. Additionally, we divided the light curves into two segments using xselect and ran nuproducts using user-defined GTI files. Footnote 1: [http://heasarc.gsfc.nasa.gov/FTP/caldb/data/nustar/fpm/](http://heasarc.gsfc.nasa.gov/FTP/caldb/data/nustar/fpm/) Following the discovery of Swift J1728.9-3613, NICER began to monitor the source. We analysed the first two of the 70 NICER observations. NICER is a soft X-ray telescope whose primary X-ray Timing Instrument (XTI) consists of 56 identical FPMs (50 of which are functional) that record the energies of arriving photons, as well as their time of arrival to within 300 ns of UTC (Gendreau et al., 2012). Raw data were processed with the NICER Data Analysis Software (NICERDAS, version 9)2. Cleaned event files were generated with standard filtering criteria in the nicerl2 task, while background files were generated with the nibackgen3C50 task. Response files were generated with the nicerarf and nicerrmf tasks. Spectra and light curves were extracted with xselect and spectra were rebinned with 25 counts per bin using the grppha task. We have not applied any systematic error to the NuSTAR or NICER data. ## 3 Results and Discussion ### Long Term Delay MAXI/GSC (Matsuoka et al., 2009) has monitored the source daily and provided one-day binned light curves at various energy ranges. From these curves, as presented in the top panel of Fig. 1, we observed that the peak of the outburst occurred within 10-12 days of the beginning. Comparable to other canonical outbursts, rate variations were observed in various energy ranges. During the peak luminosity in the high/soft state, the hard 10-20 keV counts were substantially less than the softer 2-4 keV counts.
In total, we found the outburst lasted for about 150 days, starting from MJD 58500 to 58650. The MAXI/GSC light curves can be used to obtain an estimate for the amount of energy released by the outburst in each energy band as \[E_{i}=4\pi D^{2}\bar{E}_{i}\int_{\rm MJD_{start}}^{\rm MJD_{stop}}L_{i}(t)\,dt\] where \(D\) is the distance to the source, \(\bar{E}_{i}\) is the average photon energy in the \(i^{\rm th}\) band, and \(L_{i}\) is the corresponding MAXI/GSC light curve. Assuming a distance of 10 kpc and an average photon energy of 3.0 keV, 7.0 keV, and 15 keV in each energy band respectively, we calculate an energy output of \(\sim 2.1\times 10^{44}\) ergs in the \(2-4\) keV band, \(\sim 2.2\times 10^{44}\) ergs in the \(4-10\) keV band, and \(\sim 9.0\times 10^{43}\) ergs in the \(10-20\) keV band, all over the course of the entire outburst. Considering all energy bands, the total energy released is \(\sim 5.2\times 10^{44}\) ergs, which translates to \(5.8\times 10^{26}\) g of mass assuming a mass-to-radiation conversion efficiency of 0.1 (Reynolds, 2021). The converted matter is equivalent to \(2.9\times 10^{-7}M_{\odot}\) or 91% of the mass of Mars. However, the bolometric luminosity should be higher than what we have calculated from X-rays. Thus, our estimate of the accreted matter is likely moderately conservative. We inspected the time delays among the energy bands using the \(\zeta\)-DCF algorithm3 (Alexander, 2014), presented in the bottom panels of Fig. 1. A positive delay refers to the softer component arriving later, while the reverse indicates delayed arrival of the harder component. The \(2-4\) keV light curve correlates with both \(4-10\) and \(10-20\) keV light curves, having maximum correlation coefficients (\(\rho\)) of 0.95 and 0.72 respectively. The peak delays (\(\bar{\tau}\)) between the corresponding light curves were found to be \((3.7\pm 0.8)\) and \((8.4\pm 1.9)\) days, with standard deviations 11.1 and 9.9 days respectively. A similar correlated pattern was also observed between the \(4-10\) and \(10-20\) keV light curves with maximum coefficient, delay, and width \(\rho=0.77\), \(\bar{\tau}=(4.7\pm 1.1)\) days, and \(\sigma=10.0\) days respectively. This delay is consistent with what we would expect examining the other two light curves, i.e. 8.4 days \(-\) 3.7 days \(=\) 4.7 days. We should note that \(\sigma\) here refers to the estimated width of the cross-correlation function itself, whereas the errors associated with the true delays are estimated from the width of the corresponding likelihood distribution. These distributions were calculated with the PLIKE algorithm, also presented by Alexander (2014). Our analysis showed that harder (\(10-20\) keV) radiation as a whole arrived before its softer counterpart. The magnitude of the delay increased with increasing differences in X-ray energy, exhibiting a maximum delay between the \(2-4\) and \(10-20\) keV photons. An accreting BHXRB spectrum is typically approximated by the sum of a hard power-law and soft blackbody disk component, and from the light curve analysis we can infer that the harder spectral component dominated the rising phase before giving way to more prominent soft disk emission. This scenario is similar to what was observed earlier in the outburst profiles of GX 339-4 and XTE J1650-500 (Smith et al., 2002; Chatterjee et al., 2020). A possible reason for this delayed peak in soft emission could be the viscous delay with which the standard disc spirals inward before heating enough to glow in the X-ray. Influenced by the viscous delay, the disc gradually modifies the spectral and temporal properties, triggering the outburst profile of the BHXRB (Smith et al., 2002). The left panel of Fig. 2 shows the variation in hardness ratio (HR) vs. intensity (in terms of the accretion rate \(\dot{M}=L/L_{\rm Edd}\)) over the full outburst as observed by MAXI/GSC, and illustrates a typical 'q' diagram for an outbursting BHXRB. Footnote 3: [https://www.weizmann.ac.il/particle/tal/research-activities/software](https://www.weizmann.ac.il/particle/tal/research-activities/software)
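For concreteness, the band-energy estimate above amounts to a simple numerical integration of the one-day binned light curve. Below is a minimal sketch, assuming the MAXI/GSC rates are photon fluxes in ph s\({}^{-1}\) cm\({}^{-2}\) and that a single average photon energy characterizes each band (variable names are illustrative, not part of any released pipeline):

```python
import numpy as np

KEV_TO_ERG = 1.602e-9   # 1 keV in erg
KPC_TO_CM = 3.086e21    # 1 kpc in cm
SEC_PER_DAY = 86400.0

def band_energy(mjd, rate, mean_e_kev, dist_kpc=10.0):
    """Total energy (erg) radiated in one MAXI/GSC band over the outburst:
    E_i = 4 pi D^2 * <E_i> * integral of L_i(t) dt."""
    d_cm = dist_kpc * KPC_TO_CM
    # Trapezoidal integration of the photon flux over time (days -> s).
    fluence = np.trapz(rate, np.asarray(mjd) * SEC_PER_DAY)  # ph / cm^2
    return 4.0 * np.pi * d_cm**2 * fluence * mean_e_kev * KEV_TO_ERG

# e.g. band_energy(mjd, rate_2_4, 3.0) for the 2-4 keV band.
```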
### NICER As mentioned previously, the X-ray spectrum of a BHXRB is typically approximated with the sum of a multi-colour disc blackbody (MCD) and power-law component. Reprocessed or reflected emission may also be observed, such as a Fe K\(\alpha\) line at \(\sim 6.4\) keV, a reflection hump at \(\sim 15-40\) keV (see bottom panel of Fig. 4), and in our case, an apparent Ni emission line at \(\sim 8\) keV (Corliss and Sugar, 1981; Molendi et al., 2003; Medvedev et al., 2018). Spectral analysis was carried out in HEASARC's spectral analysis package XSPEC version 12.12.1 (Arnaud, 1996). We used the Tbabs model to account for interstellar absorption with the WILM abundance (Wilms et al., 2000) and the cross-section of Verner et al. (1996). The MCD component was handled with the Diskbb model, and is included in all spectral models presented in this paper.

\begin{table} \begin{tabular}{l c c c c} \hline Instrument & Date (UT) & Obs ID & Exposure (ks) & Count s\({}^{-1}\) \\ \hline \hline NICER/XTI & 2019-01-29 & 1200550101 & 1.86 & 581.7 \(\pm\) 0.7 \\ \hline NICER/XTI & 2019-01-30 & 1200550102 & 13.7 & 992.0 \(\pm\) 0.4 \\ \hline NuSTAR & 2019-02-03 & 90501303002 & 55.9 & 144.7 \(\pm\) 0.02 \\ \hline \end{tabular} \end{table} Table 1: Log of Observations of Swift J1728.9-3613

The first two NICER observations of Swift J1728.9-3613 were taken on MJD 58512 and MJD 58513 respectively, which correspond to the beginning of the rising phase of the outburst - or the low/hard state. We fit the data for the first two observations in the \(0.3-10.0\) keV range with an absorbed MCD and power law model [Tbabs*(Diskbb + Powerlaw)]. This returned an inner disc temperature of \(T_{\rm in}=1.05^{+0.03}_{-0.02}\) keV for the first observation and \(T_{\rm in}=1.229^{+0.006}_{-0.006}\) keV for the second. We find a reduction in photon index from \(\Gamma=2.02^{+0.06}_{-0.07}\) to \(1.72^{+0.08}_{-0.08}\) with normalization varying from \(\sim 1.7\) to \(\sim 0.9\) ph keV\({}^{-1}\) cm\({}^{-2}\); the hydrogen column density stayed approximately constant (\(n_{\rm H}=4.67^{+0.07}_{-0.07}\times 10^{22}\) cm\({}^{-2}\) to \(4.58^{+0.04}_{-0.04}\times 10^{22}\) cm\({}^{-2}\)). Table 2 contains detailed results for each fit, along with their \(\chi^{2}\) value per degree of freedom (dof) and model flux. Removing the MCD component from the model substantially worsens both fits, giving an F-statistic of \(\sim 200\) and \(\sim 3000\) for the first two observations respectively. This indicates the presence of a rapidly growing standard accretion disc at the beginning of the outburst. ### NuSTAR We performed the spectral analysis of the NuSTAR data in the \(3-78\) keV energy range.
The right panel of Fig. 2 shows the variation in HR vs. count rate throughout the primary NuSTAR observation, with points binned at 100 s. We observe that during the observation HR and intensity were positively correlated, lying along a line with slope \(278.2\pm 10.5\), intercept \(65.2\pm 6.0\) counts/s, correlation coefficient \(r=0.877\), and null hypothesis probability \(p<10^{-5}\). Correlation between these two parameters is typical of BHXRBs during intermediate spectral states where disc-corona coupling becomes stronger (Churazov et al., 2001; Taylor et al., 2003). In Section 3.3.1 we resolve the spectrum between the two regions in the HR diagram, with unfolded spectra from both regions presented in Fig. 3. Section 3.3.2 is dedicated to examining the full time-averaged spectrum of the observation using several non-relativistic models, and in Section 3.3.3 we explore the same spectrum with several flavours of the Relxill model family. Gaussian components are included to model the Fe and Ni emission lines where necessary in all non-relativistic models. Fig. 4 shows the full spectrum of the NuSTAR observation fit with a simple MCD and power-law model, which disregards all reflected emission. The bottom panel of Fig. 4 displays the residuals of that fit and showcases the reflected emission, i.e. the 6.4 keV Fe K\(\alpha\) line and reflection hump above \(\sim 15\) keV.

Figure 1: Top panel: MAXI/GSC light curves of Swift J1728.9-3613 during the outburst in three different energy bands, binned at 1 day. The blue vertical line represents the date of the primary NuSTAR observation analyzed in Section 3.3, while the orange lines show when the two NICER observations analyzed in Section 3.2 were taken. Bottom panels: Estimates of the cross-correlation functions between light curves of the indicated energy bands and likelihood distributions of the true time lags. Estimation was performed with the \(\zeta\)-transformed discrete correlation function (\(\zeta\)-DCF) and its associated likelihood (PLIKE) algorithm from Alexander (2014). \(\bar{\tau}\) and \(\sigma\) are the mean and standard deviation of the Gaussian fits, while \(\rho\) is the maximum DCF value.

#### 3.3.1 Hardness-Resolved Spectra The 'gap' between data points that occurs at \(\sim 45,000\) s shows a sudden softening of Swift J1728.9-3613's spectrum, alongside a reduction in intensity during the observation. We extracted spectra for both regions of the HID (denoting the earlier region i and the later region ii) and fit them with an absorbed MCD and cutoff power law model Cutoffpl, which very roughly approximates the continuum emission of a thermally Comptonized medium. Fig. 3 shows the resulting spectral fits for both regions. Between regions i and ii the inner disc temperature, photon index, and cutoff energy did not appreciably vary, returning values of \(T_{\rm in}=1.195^{+0.010}_{-0.011}\) keV, \(\Gamma=2.27^{+0.07}_{-0.07}\), and \(E_{\rm cut}=94^{+78}_{-34}\) keV in region i; and \(T_{\rm in}=1.216^{+0.005}_{-0.005}\) keV, \(\Gamma=2.13^{+0.03}_{-0.03}\), and \(E_{\rm cut}=79^{+28}_{-18}\) keV in region ii. In region i we also detected signatures of both the Fe K\(\alpha\) line at \(E_{\rm Fe}=6.34^{+0.11}_{-0.14}\) keV and the Ni XXVIII line at \(E_{\rm Ni}=8.06^{+0.10}_{-0.28}\) keV.
In region ii we only detected the Fe K\(\alpha\) line (with greater uncertainty) at \(E_{\rm Fe}=6.2^{+0.4}_{-0.3}\) keV, and the apparent hydrogen column density changed from \(n_{\rm H}=4.19^{+0.25}_{-0.12}\times 10^{22}\) cm\({}^{-2}\) to a more uncertain \(n_{\rm H}=3.8^{+0.7}_{-0.7}\times 10^{22}\) cm\({}^{-2}\). Between regions we found a reduction in the harder power law component, with normalization varying from \(\sim 3.3\) to \(\sim 1.5\) ph keV\({}^{-1}\) cm\({}^{-2}\) (illustrated in Fig. 3). This explains the softening of the spectrum as well as the disappearance of the reprocessed Ni emission line. Variations on this timescale (\(\sim 100\) s) being primarily influenced by the power-law component are consistent with the findings of Churazov et al. (2001) in the soft state of Cyg X-1. Detailed results of both fits are presented in Table 3.

\begin{table} \begin{tabular}{l c c c c c c c} \hline Observation ID & \(n_{\rm H}\) (\(10^{22}\)cm\({}^{-2}\)) & \(T_{\rm in}\) (keV) & norm\({}_{\rm diskbb}\) & \(\Gamma\) & norm\({}_{\rm PL}\) & \(\chi^{2}/{\rm dof}\) & \(F_{2-10}\) (\(10^{-9}\) ergs cm\({}^{-2}\) s\({}^{-1}\)) \\ \hline \hline 1200550101 & \(4.67^{+0.07}_{-0.07}\) & \(1.05^{+0.01}_{-0.02}\) & \(109^{+30}_{-20}\) & \(2.02^{+0.06}_{-0.07}\) & \(1.7^{+0.3}_{-0.3}\) & 891/902 & \(4.498^{+0.010}_{-0.010}\) \\ \hline 1200550102 & \(4.58^{+0.04}_{-0.04}\) & \(1.229^{+0.006}_{-0.006}\) & \(215^{+8}_{-8}\) & \(1.72^{+0.08}_{-0.08}\) & \(0.9^{+0.2}_{-0.2}\) & 1008/944 & \(6.177^{+0.014}_{-0.014}\) \\ \hline \end{tabular} \(F_{2-10}\) is the total model flux in the \(2-10\) keV range. \end{table} Table 2: NICER Spectral Analysis Results: Diskbb+Powerlaw

Figure 2: Left Panel (a): Hardness-accretion rate diagram of Swift J1728.9-3613 over the course of its outburst, produced with MAXI/GSC observations. The MAXI hardness ratio (HR) is defined as the ratio of count rates in the \(4-20\) keV and \(2-4\) keV bands. Colour represents progression of the outburst through time (MJD-58500 days) as indicated at the top. The maximum disc accretion rate was \(\dot{M}_{\rm max}=0.28\)\(L_{\rm Edd}\) during peak luminosity. The primary NuSTAR observation is marked with \(\odot\). Right Panel (b): Hardness-intensity diagram (HID) for the first NuSTAR observation of the Swift J1728.9-3613 outburst. Points are binned at 100 s with the NuSTAR HR defined as the ratio of count rates in the \(6-78\) keV and \(3-6\) keV bands. Colour represents progression through time as indicated at the top, in seconds. The dashed line through the data is the best fit having a slope and intercept of \(278.2\pm 10.5\) and \(65.2\pm 6.0\) counts/s, respectively.

#### 3.3.2 Time averaged spectrum: Phenomenological Models Examining the entire spectrum of the observation, we first fit the data with an absorbed power-law and MCD model. This model returned a reasonable fit with \(\chi^{2}=950\) for 956 degrees of freedom (dof) with estimates of the inner disc temperature \(T_{\rm in}=1.185^{+0.015}_{-0.010}\) keV, a photon index of \(\Gamma=2.46^{+0.02}_{-0.02}\), and a column density of \(n_{\rm H}=4.63^{+0.28}_{-0.28}\times 10^{22}\) cm\({}^{-2}\). We find signatures of the Fe and Ni emission lines at \(E_{\rm Fe}=6.48^{+0.02}_{-0.02}\) keV and \(E_{\rm Ni}=7.99^{+0.04}_{-0.03}\) keV. The reflection hump was clearly visible in the residuals at energies above \(\sim 15\) keV (see Fig. 5). This was repeated with an exponential cutoff power-law component Cutoffpl.
This provided no significant improvement to the fit, returning an inner disc temperature of \(T_{\rm in}=1.16^{+0.03}_{-0.03}\) keV, a slightly harder photon index of \(\Gamma=2.24^{+0.08}_{-0.08}\), cutoff energy \(kT_{e}=94^{+93}_{-39}\) keV, and column density \(n_{\rm H}=4.32^{+0.27}_{-0.23}\times 10^{22}\) cm\({}^{-2}\); we again find signatures of the Fe and Ni emission lines at \(E_{\rm Fe}=6.27^{+0.16}_{-0.13}\) keV and \(E_{\rm Ni}=8.0^{+0.3}_{-0.2}\) keV. The reflection hump was still clearly visible in the residuals. A much better description of continuum emission due to thermal Comptonization is given by the model Nthcomp (Zdziarski et al., 1996), which attempts to simulate the upscattering of photons through the corona from a seed spectrum parameterized by the inner temperature of the accretion disc. The high energy cutoff is parameterized by the electron temperature of the medium. Nthcomp is not a power law, but can still be parameterized with an asymptotic photon index (Zycki et al., 1999). Using the inner disc temperature returned by Diskbb, \(T_{\rm in}=1.149^{+0.006}_{-0.009}\) keV, as the low energy rollover in Nthcomp, we estimate the hot electron temperature to be \(kT_{e}=132^{+476}_{-123}\) keV with photon index \(\Gamma=2.36^{+0.02}_{-0.01}\). This model also returned the lowest hydrogen column density of all of the presented models with \(n_{\rm H}=3.27^{+0.91}_{-0.35}\times 10^{22}\) cm\({}^{-2}\). For completeness, we may also calculate the optical depth (\(\tau\)) of the corona as given by Zdziarski et al. (1996), \[\tau=\sqrt{\frac{9}{4}+\frac{m_{e}c^{2}}{kT_{e}}\frac{3}{(\Gamma-1)(\Gamma+2)}}-\frac{3}{2},\] returning a value of \(\tau=0.55^{+3.54}_{-0.42}\) for the Nthcomp model. An optically thin corona was also proposed for Cyg X-1 (Churazov et al., 2001) during its high/soft state. We also attempted to fit the spectrum with the non-relativistic reflected power-law model Pexrav from Magdziarz & Zdziarski (1995), alongside a regular power-law component to account for direct coronal emission. Since this model accounts for the reflection and reprocessing of X-rays from the corona by the accretion disc, it can be used to estimate the reflection fraction (\(R_{\rm refl}\)), defined as the fraction of X-rays reprocessed by the disc vs. those received via direct emission. It also estimates abundances of elements heavier than He, including iron (\(A_{\rm Fe}\)), as well as the inclination angle \(i\). This model returned a good fit with the reflection hump reduced in the residuals and a \(\chi^{2}=916\) for 949 dof. However, most parameters were unable to be constrained. Hence, we opted for relativistic models to estimate those parameters. One of the few parameters that remained well-behaved was the reflection fraction, \(R_{\rm refl}=0.48^{+0.06}_{-0.06}\), and we should expect Relxill to estimate reflection fractions of the same order. Finally, we should note explicitly that in all the aforementioned models, two Gaussian model components were used to model the Fe K\(\alpha\) and Ni emission lines. We observed two more emission lines at 27.9 keV and 30.8 keV having widths of 0.07 keV and 0.06 keV respectively; the normalization of both lines was around \(10^{-4}\) ph cm\({}^{-2}\) s\({}^{-1}\). The origin of these emission lines could be attributed to the radioactive isotope \({}^{241}\)Am, as reported by Garcia et al. (2018) and Connors et al. (2022). Similar lines were also observed in AstroSat/LAXPC spectra, which are an instrumental feature (Sreehari et al., 2019). To fit those lines, we added two additional Gaussian components to each model and froze the line energy, width, and normalization as obtained from the Powerlaw model. Detailed results of all spectral fits for this section are presented in the first three columns of Table 4, and the residuals for each model are presented in Fig. 5.
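For reference, the optical depth relation above can be evaluated directly from the fitted photon index and electron temperature; a minimal sketch (illustrative code, not part of the analysis pipeline):

```python
import math

M_E_C2_KEV = 511.0  # electron rest-mass energy, m_e c^2, in keV

def coronal_tau(gamma, kte_kev):
    """Optical depth of the corona from the asymptotic photon index and
    electron temperature (Zdziarski et al. 1996)."""
    theta = kte_kev / M_E_C2_KEV  # dimensionless electron temperature
    return math.sqrt(9.0 / 4.0 + 3.0 / (theta * (gamma - 1.0) * (gamma + 2.0))) - 1.5

# Nthcomp best-fit values quoted above:
print(coronal_tau(2.36, 132.0))  # ~0.55, matching the quoted value
```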
#### 3.3.3 Time averaged spectrum: Relxill We used different flavors of the relativistic reflection model Relxill (Garcia et al., 2013, 2014; Dauser et al., 2014, 2016) to probe the reprocessed emission more accurately. In this model, the reflection fraction (\(R_{\rm refl}\)) is given by the ratio between the Comptonized emission directed towards the disc and that escaping to infinity. A broken power-law emissivity profile is assumed, with \(E(r)\sim r^{-q_{\rm out}}\) for \(r>R_{\rm br}\) and \(E(r)\sim r^{-q_{\rm in}}\) for \(r<R_{\rm br}\), where \(E(r)\), \(q_{\rm in}\), \(q_{\rm out}\), and \(R_{\rm br}\) are the emissivity, inner emissivity index, outer emissivity index, and break radius respectively. The other free parameters in this model are the ionization parameter (\(\xi\)), iron abundance (\(A_{\rm Fe}\)), and inclination angle (\(i\)). We started our analysis with the RelxillCp model, with the final model reading in XSPEC as Tbabs*(Diskbb+RelxillCp). The primary emission for this model is given by the Comptonized model Nthcomp (Zdziarski et al., 1996; Zycki et al., 1999). To verify that Relxill does indeed present an advantage over our non-relativistic models, we compared Nthcomp with no additional Gaussian components to model the Fe/Ni emission lines to the RelxillCp model through an F-test, returning an F-statistic of \(\sim 15\) (probability \(3.1\times 10^{-25}\)). This confirms a statistical need to include modeling of reflected emission in the NuSTAR spectrum.

\begin{table} \begin{tabular}{l c c} \hline & i & ii \\ \hline \hline \(n_{\rm H}\) (\(10^{22}\)cm\({}^{-2}\)) & \(4.19^{+0.25}_{-0.12}\) & \(3.8^{+0.7}_{-0.7}\) \\ \(T_{\rm in}\) (keV) & \(1.195^{+0.010}_{-0.011}\) & \(1.216^{+0.005}_{-0.005}\) \\ norm\({}_{\rm diskbb}\) & \(313^{+13}_{-13}\) & \(303^{+15}_{-13}\) \\ \(\Gamma\) & \(2.27^{+0.07}_{-0.07}\) & \(2.13^{+0.03}_{-0.03}\) \\ \(kT_{e}\) (keV) & \(94^{+78}_{-34}\) & \(79^{+28}_{-18}\) \\ norm\({}_{\rm PL}\) (ph keV\({}^{-1}\)cm\({}^{-2}\)) & \(3.3^{+0.6}_{-0.5}\) & \(1.54^{+0.07}_{-0.07}\) \\ \(E_{\rm Fe}\) (keV) & \(6.34^{+0.11}_{-0.14}\) & \(6.2^{+0.4}_{-0.3}\) \\ \(\sigma_{\rm Fe}\) (keV) & \(0.80^{+0.08}_{-0.07}\) & \(0.3^{+0.5}_{-0.3}\) \\ norm\({}_{\rm Fe}\) (\(10^{-3}\)) & \(6.5^{+0.5}_{-1.4}\) & \(1.1^{+1.0}_{-0.9}\) \\ \(E_{\rm Ni}\) (keV) & \(8.06^{+0.10}_{-0.10}\) & - \\ \(\sigma_{\rm Ni}\) (keV) & \(0.37^{+0.19}_{-0.13}\) & - \\ norm\({}_{\rm Ni}\) (\(10^{-3}\)) & \(1.7^{+1.2}_{-0.3}\) & - \\ \hline \(\chi^{2}\)/dof & 869/887 & 604/599 \\ \(F_{2-10}\) (\(10^{-9}\) ergs cm\({}^{-2}\) s\({}^{-1}\)) & \(9.952^{+0.012}_{-0.012}\) & \(8.862^{+0.022}_{-0.022}\) \\ \hline \hline \end{tabular} \end{table} Table 3: NuSTAR Hardness-Resolved Spectral Results

RelxillCp directly estimates the coronal properties in terms of \(\Gamma\) and \(kT_{\rm e}\). During fitting, we linked the seed photon temperature (\(T_{\rm S}\)) with the inner disc temperature (\(T_{\rm in}\)). We fixed the outer radius of the disc at \(R_{\rm out}=1000\)\(r_{g}\). Analysis with RelxillCp returned a good fit with \(\chi^{2}=916\) for 952 dof.
We obtained an inner temperature of \(T_{\rm in}=1.2358^{+0.0007}_{-0.0007}\) keV, photon index \(\Gamma=2.381^{+0.003}_{-0.003}\), iron abundance \(A_{\rm Fe}=0.47^{+0.06}_{-0.06}\)\(A_{\odot}\), and ionization parameter \(\log(\xi)=3.39^{+0.04}_{-0.04}\). The inner disc radius extended to \(R_{\rm in}=2.2^{+0.4}_{-0.6}\)\(R_{\rm ISCO}\), with the break radius extending out to \(R_{\rm br}=14.3^{+0.6}_{-0.6}\)\(r_{g}\). The inner emissivity index appears steep with \(q_{1}=8.6^{+0.4}_{-0.4}\), and we find a much flatter outer emissivity index of \(q_{2}=0.35^{+0.10}_{-0.11}\). We estimate a reflection fraction of \(R_{\rm refl}=0.67^{+0.06}_{-0.06}\) and a BH spin parameter of \(a^{*}=0.65^{+0.04}_{-0.06}\), with inclination angle \(i=69.8^{+0.5}_{-0.5}\) degrees. While the RelxillCp model does not assume any particular coronal geometry, the RelxillLp and RelxillLpCp flavors of the Relxill family assume a lamp-post geometry where the corona is assumed to be a compact source located above the BH (Garcia & Kallman, 2010; Dauser et al., 2016). The incident primary emission is given by either Cutoffpl (RelxillLp) or Nthcomp (RelxillLpCp). The height of the corona (\(h\)) is an input parameter in these models. Analysis with the RelxillLp and RelxillLpCp models both returned good fits with \(\chi^{2}/{\rm dof}=940/954\) and \(\chi^{2}/{\rm dof}=933/954\) respectively. Both returned similar inner temperatures of \(T_{\rm in}=1.220^{+0.004}_{-0.004}\) keV and \(T_{\rm in}=1.211^{+0.005}_{-0.005}\) keV respectively. RelxillLp returned a photon index of \(\Gamma=2.421^{+0.003}_{-0.003}\) while RelxillLpCp returned a slightly harder photon index of \(\Gamma=2.244^{+0.002}_{-0.002}\). RelxillLp also returned a reflection fraction of \(R_{\rm refl}=0.76^{+0.04}_{-0.04}\), almost two times more than the \(R_{\rm refl}=0.470^{+0.014}_{-0.014}\) given by RelxillLpCp. Both models returned similar values of the coronal height, with \(h=3.59^{+0.15}_{-0.11}\)\(r_{g}\) for RelxillLp and \(h=5.484^{+0.009}_{-0.009}\)\(r_{g}\) for RelxillLpCp. Otherwise, RelxillLp returned an iron abundance \(A_{\rm Fe}=1.50^{+0.12}_{-0.12}\)\(A_{\odot}\), ionization parameter \(\log(\xi)=4.48^{+0.05}_{-0.05}\), \(R_{\rm in}=2.5^{+0.5}_{-0.5}\)\(R_{\rm ISCO}\), inclination angle \(i=58^{+2}_{-2}\) degrees, and spin parameter \(a^{*}=0.70^{+0.15}_{-0.31}\). RelxillLpCp returned an iron abundance \(A_{\rm Fe}=0.44^{+0.04}_{-0.04}\)\(A_{\odot}\), ionization parameter \(\log(\xi)=3.01^{+0.02}_{-0.02}\), \(R_{\rm in}=1.7^{+1.4}_{-0.5}\)\(R_{\rm ISCO}\), inclination angle \(i=37^{+3}_{-3}\)\({}^{\circ}\), and spin parameter \(a^{*}=0.63^{+0.07}_{-0.07}\). Detailed results of all spectral fits for this section are presented in the last three columns of Table 4. ### Error Estimation While fitting the NuSTAR spectrum, we found that some of the parameters were degenerate. To remove this degeneracy and find the global minimum, we employed Markov Chain Monte Carlo (MCMC) in XSPEC4, which constrains the uncertainty range of the parameters. We used the RelxillLp model to estimate the errors using MCMC, as the model provides a physical picture of the corona. As a caveat, it should be noted that the coronal geometry influences the emissivity profile of the reflection spectrum. Thus, the parameters could change if the coronal geometry varies. We used 1,000,000 steps with 8 walkers using the Goodman-Weare algorithm. We chose to discard the first 10,000 steps, i.e. the 'transient' or 'burn-in' phase.

Figure 3: Models (both total and their individual components) for both spectra extracted from region i and region ii of the HID presented in the right panel of Fig. 2.
Figure 3: Models (both the total and their individual components) for both spectra extracted from region i and region ii of the HID presented in the right panel of Fig. 2.

The posterior distribution of the RelxillLp model fitted parameters is plotted in Figure 6, with errors quoted at the 1\(\sigma\) level.

## 4 Concluding Remarks

We analyzed the Swift J1728.9-3613 data obtained from MAXI/GSC in the form of one-day binned lightcurves, two NICER/XTI observations taken during the rising phase of the outburst, and one NuSTAR/FPMB observation taken near the peak of the outburst. Spectral analysis was performed on the NICER observations in the \(0.3-10\) keV range in order to understand the parameters of the system at the beginning of the outburst. We then examined the NuSTAR observation in the \(3-78\) keV range to more closely probe the accretion process and to measure parameters such as spin, inner disc radius, and inclination angle. The long term lightcurve analysis revealed that, in total, Swift J1728.9-3613 devoured roughly 90% of the mass of Mars and released \(\sim 5.2\times 10^{44}\) ergs of energy in the \(2-20\) keV energy band. We also find that lightcurves in various energy bands correlate with each other, where softer photons are delayed compared to their harder counterparts. The maximum delay was observed between the \(2-4\) and \(10-20\) keV photons with a mean lag of \(\bar{\tau}=(8.4\pm 1.9)\) days. Production of such a delay is likely related to the viscous delay of matter spiraling through the accretion disc during the outburst. This is supported by the presence of a strong multi-color disc component in the NICER spectra during the rising phase, with an increase in normalization between the first and second observations. Various models were used to understand the key parameters involved in the accretion process. In resolving the spectrum of Swift J1728.9-3613 during the primary NuSTAR observation, we revealed a sudden decrease in hard emission, with the power-law normalization decreasing from \(3.3^{+0.6}_{-0.5}\) to \(1.54^{+0.07}_{-0.07}\) ph keV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\). Such a reduction could have resulted from large amounts of ejecta being carried away from the corona in a relatively short amount of time, so-called _blobby_ jets. These ejections can appear to move at superluminal speeds, and have been definitively reported on in several other outbursting black holes (e.g. Fender et al., 1998; Hjellming & Rupen, 1995).

Figure 4: Top panel: Time-averaged unfolded spectrum from FPMB (green) fit with an absorbed MCD and power law model, with no Gaussian components. Both the individual model components as well as their sum are presented. Bottom panel: Residuals in terms of (data\(-\)model)/error obtained from spectral analysis for the same model.

Turning to the time-averaged spectrum, we obtained a hydrogen column density of \(4.63^{+0.28}_{-0.28}\times 10^{22}\) cm\({}^{-2}\) using the WILM (Wilms et al., 2000) abundance from the TBabs*(diskbb+Powerlaw) model. Using the ANGR (Anders & Grevesse, 1989) abundance, we found a lower (\(2.99^{+0.11}_{-0.11}\times 10^{22}\) cm\({}^{-2}\)) column density for the same model. The disc temperature varied from 1.15 to 1.23 keV across all the models, with Powerlaw yielding an inner temperature of \(T_{\rm in}=1.18^{+0.01}_{-0.01}\) keV. From the Nthcomp model, we estimated a coronal electron temperature of \(kT_{e}=132^{+476}_{-123}\) keV. The coronal temperature remains unconstrained from our time-averaged spectral analysis.
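As an illustration of the lag measurement described above, the following sketch recovers the delay between two toy one-day-binned light curves by cross-correlation; the Gaussian outburst profile, noise level, and 8-day shift are assumptions for demonstration, not our MAXI/GSC data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                                 # days
hard = np.exp(-0.5 * ((t - 80) / 15) ** 2) + 0.05 * rng.standard_normal(200)
soft = np.roll(hard, 8)                            # soft band lags by 8 days

def lag_by_crosscorr(a, b, max_lag=30):
    """Lag (days) of b relative to a that maximizes the cross-correlation."""
    a, b = a - a.mean(), b - b.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.sum(a * np.roll(b, -k)) for k in lags]
    return lags[int(np.argmax(cc))]

print(lag_by_crosscorr(hard, soft))                # ~8: soft delayed vs hard
```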
While the coronal temperature was unconstrained in the time-averaged fits, the hardness-resolved spectra fit with the Cutoffpl model yielded better-constrained cutoff temperatures, with \(E_{\rm cut}=94^{+78}_{-34}\) keV in region i and \(E_{\rm cut}=79^{+28}_{-18}\) keV in region ii.

Figure 5: Residuals in terms of (data\(-\)model)/error obtained from spectral analysis for the Powerlaw, Cutoffpl, Nthcomp, RelxillCp, RelxillLp, and RelxillLpCp models. Reduced \(\chi^{2}\) values (\(\tilde{\chi}^{2}\)) for each model are quoted in each panel. The first three models (i.e., the non-relativistic ones) include two Gaussian components for the Fe K\(\alpha\) and Ni emission lines.

We found signatures of the Fe K\(\alpha\) line around (\(6.48\pm 0.02\)) keV with an equivalent width of 53 eV. Apart from the Fe K\(\alpha\) line, we also observed a Ni emission line around (\(8.0\pm 0.3\)) keV with an equivalent width of 60 eV. This Ni line has previously been observed in several BHXRBs and AGNs (Corliss & Sugar, 1981; Molendi et al., 2003; Fukazawa et al., 2016; Medvedev et al., 2018). It is possible that the \(8.06^{+0.10}_{-0.28}\) keV line is observed due to the blue-shifted wing of the Fe K\(\alpha\). However, if this were the case, we would also expect to observe a corresponding redshifted wing at roughly 6.4 - (8 - 6.4) = 4.8 keV (Cui et al., 2000). Given that we detect strong signatures of the emission line at 6.4 keV in the first place, and that the 8 keV Ni line has been previously identified, we believe that the Ni emission line is the more plausible explanation. Future-generation satellites, such as _Colibrì_ (Heyl et al., 2019; Caiazzo et al., 2019), would be capable of resolving these lines more accurately and locating line-emitting regions. We applied phenomenological models, like Cutoffpl, to fit the NuSTAR data and estimate the luminosity of the accretion disc during that observation. Using \(L_{2-10}=4\pi D^{2}F_{2-10}\), assuming a distance of 10 kpc, we find \(L_{2-10}=1.09\times 10^{38}\) erg s\({}^{-1}\). Assuming a mass-to-radiation conversion efficiency of \(\eta=0.1\) based on our measured spin value (Reynolds, 2021), we find a mass accretion rate of \(\dot{M}=L/\eta c^{2}=1.2\times 10^{18}\) g s\({}^{-1}\). Using the lower mass limit of Swift J1728.9-3613 reported by Saha et al. (2022), \(\sim 5M_{\odot}\) accounting for spin, we estimate \(L/L_{\rm Edd}\sim 0.17\) during the observation. Combining this mass with the MAXI/GSC light curves, we find a maximum accretion rate of \(L/L_{\rm Edd}\sim 0.28\) during the peak of the outburst.

Figure 6: Posterior distribution of the spectral parameters obtained from the MCMC analysis with the RelxillLp model. Plotting was performed using corner (Foreman-Mackey, 2016). Central dashed lines correspond to the peak values whereas \(1\sigma\) confidence levels are represented by dashed lines on either side.

Three models from the Relxill family were selected for analysis with the NuSTAR data in order to estimate the more complex properties of Swift J1728.9-3613, namely RelxillCp, RelxillLp, and RelxillLpCp. Since RelxillCp assumes that the irradiation of the disc behaves like a broken power-law, this model estimates a steep inner emissivity index of \(q_{1}=8.61^{+0.37}_{-0.41}\) that becomes much flatter (\(q_{2}=0.35^{+0.10}_{-0.11}\)) beyond the break radius \(R_{\rm br}=14.3^{+0.6}_{-0.6}\)\(r_{g}\).
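A minimal numerical cross-check of the luminosity and accretion-rate estimates quoted above is sketched below; the 2-10 keV flux used here is an illustrative assumption chosen to reproduce the quoted \(L_{2-10}\).

```python
import numpy as np

KPC, C = 3.086e21, 2.998e10           # cm per kpc; speed of light (cm/s)

D = 10 * KPC                          # assumed source distance
F_2_10 = 9.1e-9                       # illustrative 2-10 keV flux (erg/cm^2/s)
L = 4 * np.pi * D**2 * F_2_10         # ~1.1e38 erg/s
mdot = L / (0.1 * C**2)               # eta = 0.1  ->  ~1.2e18 g/s
L_edd = 1.26e38 * 5.0                 # Eddington luminosity for 5 M_sun
print(f"L = {L:.2e} erg/s, Mdot = {mdot:.2e} g/s, L/L_Edd = {L / L_edd:.2f}")
```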
Beyond the emissivity profile, the three models estimate spin parameters of \(0.65^{+0.04}_{-0.06}\), \(0.70^{+0.15}_{-0.31}\), and \(0.63^{+0.07}_{-0.07}\); inner disc radii of \(2.2^{+0.4}_{-0.6}\)\(R_{\rm ISCO}\), \(2.5^{+0.5}_{-0.5}\)\(R_{\rm ISCO}\), and \(1.7^{+1.4}_{-0.5}\)\(R_{\rm ISCO}\); and inclination angles of \(69.8^{+0.5}_{-0.5}\) degrees, \(58^{+2}_{-2}\) degrees, and \(37^{+3}_{-3}\) degrees, respectively.

## Acknowledgements

We acknowledge the anonymous reviewer for the helpful comments and suggestions which improved the paper. SH, AC, SSH and JH are supported by the Canadian Space Agency (CSA) and the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grants and the Canada Research Chairs programs. AJ acknowledges the support of grants from the Ministry of Science and Technology of Taiwan, grant numbers MOST 110-2811-M-007-500 and MOST 111-2811-M-007-002. This research made use of the _NuSTAR_ Data Analysis Software (NuSTARDAS) jointly developed by the ASI Space Science Data Center (ASSDC, Italy) and the California Institute of Technology (Caltech, USA).

## Data Availability

We used publicly available archival data of the MAXI, NICER and NuSTAR observatories for this work. All the models used in this work are publicly available. Appropriate links are provided in the text.
2309.01156
Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics
Sampling from known probability distributions is a ubiquitous task in computational science, underlying calculations in domains from linguistics to biology and physics. Generative machine-learning (ML) models have emerged as a promising tool in this space, building on the success of this approach in applications such as image, text, and audio generation. Often, however, generative tasks in scientific domains have unique structures and features -- such as complex symmetries and the requirement of exactness guarantees -- that present both challenges and opportunities for ML. This Perspective outlines the advances in ML-based sampling motivated by lattice quantum field theory, in particular for the theory of quantum chromodynamics. Enabling calculations of the structure and interactions of matter from our most fundamental understanding of particle physics, lattice quantum chromodynamics is one of the main consumers of open-science supercomputing worldwide. The design of ML algorithms for this application faces profound challenges, including the necessity of scaling custom ML architectures to the largest supercomputers, but also promises immense benefits, and is spurring a wave of development in ML-based sampling more broadly. In lattice field theory, if this approach can realize its early promise it will be a transformative step towards first-principles physics calculations in particle, nuclear and condensed matter physics that are intractable with traditional approaches.
Kyle Cranmer, Gurtej Kanwar, Sébastien Racanière, Danilo J. Rezende, Phiala E. Shanahan
2023-09-03T12:25:59Z
http://arxiv.org/abs/2309.01156v1
# Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics

###### Abstract

Sampling from known probability distributions is a ubiquitous task in computational science, underlying calculations in domains from linguistics to biology and physics. Generative machine-learning (ML) models have emerged as a promising tool in this space, building on the success of this approach in applications such as image, text, and audio generation. Often, however, generative tasks in scientific domains have unique structures and features--such as complex symmetries and the requirement of exactness guarantees--that present both challenges and opportunities for ML. This Perspective outlines the advances in ML-based sampling motivated by lattice quantum field theory, in particular for the theory of quantum chromodynamics. Enabling calculations of the structure and interactions of matter from our most fundamental understanding of particle physics, lattice quantum chromodynamics is one of the main consumers of open-science supercomputing worldwide. The design of ML algorithms for this application faces profound challenges, including the necessity of scaling custom ML architectures to the largest supercomputers, but also promises immense benefits, and is spurring a wave of development in ML-based sampling more broadly. In lattice field theory, if this approach can realize its early promise it will be a transformative step towards first-principles physics calculations in particle, nuclear and condensed matter physics that are intractable with traditional approaches.

## 1 Introduction

Theoretical nuclear physics has the ironic feature that although the fundamental laws are well understood, the computations required to make quantitative, first-principles predictions are in many cases currently infeasible. The strong nuclear force is fundamentally described by the quantum field theory known as Quantum Chromodynamics (QCD), which details the dynamics of constituent particles--quarks and gluons--that arise as excitations of underlying quantum fields. This theory successfully predicts a wide range of phenomena that occur at different energy scales, ranging from the high-energy collisions at the Large Hadron Collider to the properties and interactions of composite particles such as the proton and neutron, as well as the nuclei they form. At high energies, the interactions between quarks and gluons are weak, and accurate QCD calculations can be made using a perturbative expansion, which is often represented with Feynman diagrams. At the lower energies relevant for much of nuclear physics, the interactions between quarks and gluons are strong and the perturbative approach breaks down. In this regime, quantitative predictions can be achieved through a computational approach known as lattice QCD, in which the quark and gluon fields are represented on a discrete spacetime lattice. Many key aspects of nuclear physics can be computed precisely in this framework. For example, such calculations reveal how the masses of the proton and neutron arise from the fundamental quarks and gluons[1], and they have been used to make predictions of the masses of new composite particles later discovered by experiments at CERN[2; 3; 4]. However, the reach of this approach is limited by its computational cost, and controlled first-principles QCD calculations of nuclear structure and reactions, for example, would require a scale of computational resources that is currently infeasible[5].
Without breakthrough developments, many important studies will remain impossible even with the world's next generation of exascale supercomputers (quintillions (\(10^{18}\)) of operations per second, or the equivalent of 50 million laptops working in concert). If the computational cost of lattice field theory can be greatly reduced, fundamental questions in particle, nuclear and condensed matter physics will be answered. For example, first-principles calculations can probe the fine-tunings in nuclear physics that are deeply important for understanding our existence, by revealing how sensitive the production of carbon in the Universe via the triple-\(\alpha\) process is to the free parameters of the theory, explaining why protons and neutrons cluster inside nuclei, and elucidating how the lightest elements formed in the first minutes of the Universe's existence via Big Bang nucleosynthesis[6]. Calculations in lattice QCD are cast in the form of statistical averages with respect to a distribution of quark and gluon field configurations. A major component of the computational cost of lattice QCD calculations is the estimation of these averages by Monte Carlo sampling techniques. (Sampling is one of several computationally-intensive steps in lattice QCD calculations. Others, such as the inversion of Dirac operators for the calculation of physical observables, may also be accelerated using machine learning (ML) approaches [7, 8, 9]). Sampling representative configurations of a system to quantitatively evaluate its properties is ubiquitous in physics, being used in fields spanning from ab-initio molecular dynamics and statistical physics to astrophysics, and many others. However, sampling from the highly-structured, high-dimensional, and multi-modal distribution of configurations in lattice QCD presents an extraordinarily difficult computational challenge. This problem has historically been the impetus for the development of what have become foundational techniques in computational statistics and high-performance computing, with far-reaching implications within and beyond physics. For example, both the classic Metropolis-Hastings Markov chain Monte Carlo algorithm [10] and Hamiltonian/hybrid Monte Carlo (HMC) [11] were first developed in the context of theoretical nuclear physics, with the latter conceived specifically for lattice QCD. Similarly, the IBM Blue Gene series of supercomputers trace their origins back to the QCDOC (quantum chromodynamics on a chip) computer built specifically for this particular application [12]. The rapid advance of ML over the past few years has spurred the emergence of a new class of algorithms that are revolutionizing computing for both science and industry applications. For example, the extraordinary success of the ML tool AlphaFold [13] in protein folding took the world of biology by surprise, redefining the pace of progress in a field where algorithmic developments had been slow for decades. For lattice QCD, which has historically driven a virtuous cycle of innovations in scientific computing, these advances promise a new chapter. In particular, the rise of generative modelling with ML [14, 15] suggests the particular application of sampling algorithms for lattice QCD. The sampling problem in lattice QCD has several key features that present both challenges and opportunities to ML.
On the one hand, any algorithm must be asymptotically exact, preventing the direct application of certain generative ML approaches such as generative adversarial networks or variational autoencoders (VAEs). A practical challenge is also presented by the extreme scale of lattice QCD samples used in state-of-the-art calculations, each of the order of several terabytes at the current time. On the other hand, the forms of the relevant probability distributions are exactly known, which can inform the design and training of sampling architectures. In particular, these distributions are invariant under complicated and high-dimensional symmetry groups which significantly reduce the dimensionality and complexity of the problem if they can be incorporated exactly. Although it has required considerable effort to develop ML models that incorporate the symmetries of lattice QCD into ML architectures, the investment has paid dividends in the efficacy of the resulting algorithms. This Perspective reviews the unique requirements and features of a class of ML-based sampling strategies that have been recently developed for lattice QCD applications and places these developments in the broader context of ML for sampling in scientific domains. Although this endeavour remains in its early stages, it is already clear that it has considerable potential, not only to emulate the transformative impact that ML has had in applications such as AlphaFold [13], but also to spur the advancement of ML itself. ## 2 Lattice QCD and the sampling problem The lattice method for computing physical observables in quantum field theories such as QCD proceeds by discretizing space and time onto a four-dimensional grid (or 'lattice'), with spacing \(a\) between neighbouring points and a finite volume \(V\). In this framework, the fundamental particle degrees of freedom of the theory-- quarks and gluons in QCD--are represented through 'quantum fields' that consist of complex numbers, vectors or matrices associated with the points and edges (or 'links') of the lattice. Quantities of physical interest are then defined by integrals over these field degrees of freedom, and the continuum, infinite-volume theory is recovered by taking the limit \(a\to 0\), \(V\to\infty\). A general physical observable can be defined in terms of quantum 'operators' \(\mathcal{O}\) and computed as a statistical expectation value [16]: \[\langle\mathcal{O}\rangle=\int\mathscr{D}\Phi\,\mathcal{O}[\Phi]p[\Phi],\; \text{where}\;p[\Phi]=e^{-S[\Phi]}/Z. \tag{1}\] Here the notation \(\int\mathscr{D}\Phi\) schematically indicates integration over all configurations of the discretized quantum fields collectively denoted by \(\Phi\), and \(Z=\int\mathscr{D}\Phi e^{-S[\Phi]}\) is a normalizing constant. The 'action' \(S[\Phi]\) encodes the dynamics of the theory by defining the statistical distribution \(p[\Phi]\); in QCD, it describes the fluctuations and interactions of the quark and gluon fields. The operator \(\mathcal{O}\) can be chosen to study various physical properties of the theory; for example, the mass of the proton can be calculated using an operator that represents the interaction of two up quarks and one down quark. In practice, the integral in equation (1) cannot be computed analytically and is instead evaluated by Monte Carlo integration, that is, using an ensemble of \(N\) field configurations \(\{\Phi_{1},\ldots,\Phi_{N}\}\) sampled from the distribution \(p[\Phi]\). 
Physical quantities are then computed as \(\langle\mathcal{O}\rangle\approx\frac{1}{N}\sum_{i=1}^{N}\mathcal{O}[\Phi_{i}]\) with an uncertainty that is systematically improvable by taking \(N\) large. The first step of any lattice field theory calculation is thus a sampling problem. Although the challenge of generating lattice field configurations is reminiscent of sampling problems in many other fields, the structure of the quantum fields, the complicated symmetries of the distribution \(p[\Phi]\) and the sheer scale of the required calculations set this apart as a particularly difficult computational problem.

### Structure and symmetries of field configurations

In typical lattice quantum field theories, the discretized quantum fields not only extend over the spacetime lattice, but also have 'internal' degrees of freedom represented mathematically by a vector or matrix structure at each point or edge of the lattice. In particular, in QCD the gluon field \(U\) is encoded by \(SU(3)\) variables--\(3\times 3\) complex unitary, unit-determinant matrices--on each edge of the lattice, whereas the quark fields \(\Psi\) are encoded by \(4\times 3\) complex matrices on each site of the lattice, as shown in Fig. 1. For QCD, the calculation of a physical observable via equation (1) can thus be expressed as \[\begin{split}\langle\mathcal{O}\rangle&=\frac{1}{Z}\int\mathcal{D}U\,\mathcal{D}\bar{\Psi}\,\mathcal{D}\Psi\,\mathcal{O}[U,\bar{\Psi},\Psi]e^{-S[U,\bar{\Psi},\Psi]}\\ &=\frac{1}{Z}\int\mathcal{D}U\,\mathcal{O}^{\prime}[U]e^{-S_{\text{eff}}[U]},\\ &\text{where }Z=\int\mathcal{D}U\,\mathcal{D}\bar{\Psi}\,\mathcal{D}\Psi\,e^{-S[U,\bar{\Psi},\Psi]}.\end{split} \tag{2}\] Here the notation \(\int\mathcal{D}U\) indicates integration over all values of the discretised gluon field \(U\), whereas the integrals \(\int\mathcal{D}\bar{\Psi}\mathcal{D}\Psi\) over all values of the discretized quark fields are Gaussian and are evaluated analytically, yielding a modified operator \(\mathcal{O}^{\prime}\) and the modified weight \(p[U]=e^{-S_{\text{eff}}[U]}/Z\) over gluon field configurations. (In particular, the integral \(\int\mathcal{D}\bar{\Psi}\mathcal{D}\Psi\) is a Berezin integral [17] over elements of a Grassmann algebra, which must be analytically treated to produce an integral amenable to numerical evaluation.) In practice, auxiliary degrees of freedom known as 'pseudo-fermions' [18] are also typically introduced as stochastic estimators for determinants appearing in \(p[U]=\exp(-S_{\text{eff}}[U])/Z\). State-of-the-art lattice QCD calculations involve fields of size up to \(256^{3}\times 512\approx 8.6\) billion lattice sites with quantum fields represented by roughly 50 degrees of freedom per lattice site (this counting includes four \(SU(3)\) matrices for each lattice site, yielding \(4\times 8=32\) degrees of freedom, as well as complex \(4\times 3\) matrices with \(2\times 3\times 4=24\) degrees of freedom for each site, arising from the pseudo-fermion fields), meaning that, in practice, calculations involve Monte Carlo integration over as many as \(10^{12}\) variables. Symmetries in a lattice field theory manifest as transformations of field configurations that leave the probability density \(p[U]\) and the integration measure invariant.
The action, and hence \(p\), is typically invariant under both discrete geometric symmetries of the hypercubic Euclidean spacetime, such as discrete translations, rotations and reflections, and under internal symmetry transformations. For example, one contribution to the lattice QCD action is given by \[S_{g}[U]=-\frac{\beta}{6}\sum_{x}\sum_{\mu,\nu=1}^{4}\operatorname{Re}\operatorname{Tr}[U_{\mu}(x)U_{\nu}(x+\hat{\mu})U_{\mu}^{\dagger}(x+\hat{\nu})U_{\nu}^{\dagger}(x)], \tag{3}\] where \(\beta\) is a parameter of the theory that is related to the lattice spacing \(a\), \(x\) is summed over the sites of the discretized lattice, and \(\hat{\mu},\hat{\nu}\) indicate vectors of length \(a\) in the \(\mu\) and \(\nu\) directions, respectively (see Fig. 1). From this expression, it can be seen how 'gauge' symmetry is manifest in QCD, as \(p[U]\) is invariant under the transformation of the gauge field \(U\) according to \[U_{\mu}(x)\rightarrow\Omega(x)U_{\mu}(x)\Omega^{\dagger}(x+\hat{\mu}) \tag{4}\] for all possible choices of \(\Omega(x)\in\text{SU}(3)\) over all lattice sites. Because this symmetry is specified by one SU(3)-valued matrix per lattice site (so eight degrees of freedom per site), the symmetry group may have a dimension as large as \(10^{11}\) in state-of-the-art calculations.

### Approaches and challenges to sampling field configurations

Conventionally, the generation of an ensemble of lattice fields distributed according to \(p[\Phi]\) is performed iteratively using a Markov process, in which a chain of configurations \(\{\Phi_{1},\Phi_{2},\ldots\}\) is generated by a sequence of stochastic updates beginning from an initial configuration \(\Phi_{0}\). In particular, the HMC algorithm was first conceived of in the 1980s specifically for this application in lattice field theory [11] and has since become a mainstay of the computational science community. In this paradigm, the rapid exploration of the state space is achieved by a directed evolution from each configuration to a new proposed configuration, which avoids an inefficient random walk. Exactness of the distribution is guaranteed by applying the Metropolis-Hastings procedure to accept the proposed configuration with an appropriate probability [10, 19] (see also the next section).

Figure 1: **Depiction of a single cube within the spacetime lattice of a lattice QCD calculation.** Shown are some elements \(U_{\mu}(x)\) of the discretized gluon field (red), each associated with an edge \((x,x+\hat{\mu})\) from site \(x\) to the neighboring site in direction \(\mu\in\{1,2,3,4\}\), and an element \(\Psi(y)\) of the discretized quark field (blue), associated with a site \(y\). The value of each \(U_{\mu}(x)\) is a complex unitary \(3\times 3\) matrix with determinant 1, that is, an SU(3) matrix, and each \(\Psi(y)\) is a \(4\times 3\) complex matrix. \(a\) is the lattice spacing between neighbouring points. The fourth dimension of the lattice is suppressed for clarity.

Despite the outstanding success of this approach -- which remains the workhorse of lattice field theory -- generating ensembles of field configurations is one of the notable computational costs of first-principles QCD calculations. In particular, because the approach evolves configurations via a local dynamical process, increasingly many updates are required to decorrelate samples on physical length scales as the continuum limit is approached (\(a\to 0\)).
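To make equations (3) and (4) concrete, the following sketch verifies gauge invariance numerically in the simplest setting that fits in a few lines: a U(1) gauge theory on a small two-dimensional lattice (an assumption for brevity; in QCD the link phases become SU(3) matrices and the lattice is four-dimensional).

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 8, 2.0
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))   # U_mu(x) = exp(i*theta)

def wilson_action(theta):
    """U(1) analogue of equation (3): S = -beta * sum_x Re P(x), where
    P(x) = U_0(x) U_1(x+0hat) U_0(x+1hat)^* U_1(x)^* is the plaquette."""
    u = np.exp(1j * theta)
    plaq = (u[0] * np.roll(u[1], -1, axis=0)
            * np.conj(np.roll(u[0], -1, axis=1)) * np.conj(u[1]))
    return -beta * plaq.real.sum()

# Gauge transformation, equation (4): U_mu(x) -> Omega(x) U_mu(x) Omega*(x+mu),
# which for U(1) reads theta_mu(x) -> theta_mu(x) + omega(x) - omega(x+mu_hat).
omega = rng.uniform(0, 2 * np.pi, size=(L, L))
theta_t = theta.copy()
theta_t[0] += omega - np.roll(omega, -1, axis=0)
theta_t[1] += omega - np.roll(omega, -1, axis=1)

assert np.isclose(wilson_action(theta), wilson_action(theta_t))  # invariant
```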
The need for ever more updates near the continuum limit is a manifestation of the phenomenon known as 'critical slowing-down' in this context [11, 20]. Simultaneously, the distribution of QCD gauge fields spans topologically distinct sectors, and Markov-based sampling algorithms such as HMC can become 'trapped' or 'frozen' in sectors of fixed topology. Any alternative approach to sampling lattice field configurations will need to satisfy several key requirements in order to be practically viable. Most importantly, it must be statistically improvable: that is, the true probability distribution \(p[\Phi]\), including the various symmetries that this distribution respects, must be recovered in the limit of a large number of samples. Furthermore, the approach must be efficiently scalable to state-of-the-art lattice field theory studies, which involve field configurations as large as many terabytes of memory each, with as many as \(10^{12}\) degrees of freedom. Finally, the approach must improve upon the impressive success of the HMC framework and mitigate the challenges of critical slowing-down and topological freezing in some regime of physical interest.

## 3 ML for sampling lattice field configurations

An ML approach to sampling lattice field configurations is an appealing proposition: it offers a new paradigm of algorithms that are optimised specifically for the task at hand. Enabling radically different approaches to sampling, ML may mitigate critical slowing-down and other key challenges faced by traditional Markov-process algorithms such as HMC. Even for cases where HMC works well, ML may still provide advantages, for example by enabling embarrassingly parallel, rather than sequential, sampling, or by learning to approximate a large number of computational steps of traditional algorithms with fewer operations, as is observed in other fields such as ML approximations of partial differential equation solvers [21]. However, the application of ML to lattice field theory is not straightforward, given the previously highlighted features of the lattice field theory problem. In particular, for the theory of QCD, gauge field samples are collections of matrices that are constrained to be SU(3) matrices (see Fig. 1), whereas samples in typical ML domains such as images or natural language models are represented by vectors of unconstrained real numbers. When considering generative models based on diffeomorphisms, as discussed below, it is imperative to ensure that these constraints are satisfied by all transformations. The associated SU(3) gauge symmetry is also very atypical in comparison with usual ML applications, although other symmetries such as 4D translations may be handled by traditional ML methods such as convolutional neural networks. Moreover, generative models for traditional applications such as images and language do not require asymptotic guarantees of exactness in sampling, whereas these are critical in the lattice field theory context. Finally, the sheer scale of state-of-the-art lattice QCD calculations, both in terms of the scale of lattice samples and the computational cost required to manipulate them, presents a challenge to ML approaches. Figure 2 illustrates these stark contrasts between the lattice field generation problem and other sampling tasks that have been revolutionized through ML, such as image generation. Clearly, achieving state-of-the-art sampling performance with new ML algorithms in the context of lattice field theory will require the development of new algorithms and innovation in ML.
### Classes of generative models for sampling in lattice field theory

ML models designed to (approximately) sample from a target density are known as generative or probabilistic models. A generative model typically consists of three components: a space of latent or hidden variables equipped with a density, a set of observed or target variables, and a parametric map that transforms points in the latent space into points in the target space. Optimization is performed on the parametric map so that the density it induces in the target space approximates the target density. A wide variety of ML-based generative architectures have been developed over the past decade, with transformative successes particularly evident in applications to sound/image data [22, 23, 24, 25] and language data [14, 15, 26, 27, 28, 29, 30]. One notable difference between these applications and the challenge of sampling field configurations for lattice QCD is that the true distribution over the space of images, sounds or text is not known, so the model distribution is learnt from data samples. For lattice field configurations, not only is the unnormalized target distribution known, but it must be sampled from with asymptotic guarantees of exactness. This can be achieved with ML models if they feature tractable likelihoods (the model probability density can be computed for any given sample); in this case, they can be embedded inside sampling algorithms with asymptotic guarantees, such as a Markov chain, as discussed further below. A tractable likelihood also allows one to optimize an ML model by minimizing a probability divergence \(D(q_{\theta};p)\) between the model probability density \(q_{\theta}[U]\) parameterized by \(\theta\) and the known target probability density \(p[U]=e^{-S_{\text{eff}}[U]}/Z\). However, in stark contrast to typical ML sampling problems, training models for lattice QCD sampling requires estimating the gradients \(\nabla_{\theta}D(q_{\theta};p)\) using only samples from the model or perhaps only a small number of 'ground truth' data samples. This restricts the family of probability divergences that can be used; \(f\)-divergences such as the Kullback-Leibler divergence [31] are commonly used, as they can be expressed as an expectation value under \(q_{\theta}\), allowing the divergence and its gradient to be estimated from model samples alone. Any ML approach to sampling for lattice QCD must also be 'scalable' as the number of lattice sites, \(M=V/a^{4}\), is increased. Ideally, its computational and memory costs should scale linearly or sub-linearly with the number of lattice sites, which we denote by \(O(M)\) below. This applies to all aspects of the model: drawing samples, evaluating the likelihood \(q_{\theta}\), and evaluating the gradients \(\nabla_{\theta}D(q_{\theta};p)\). This consideration restricts or rules out certain classes of models, as discussed below. The features of various generative modelling frameworks that could be considered for the lattice QCD sampling problem are outlined below.

* **Latent-variable models** such as generative adversarial networks [32] and VAEs [33, 34] typically have efficient \(O(M)\) sampling, but intractable likelihoods (involving marginalization of latent variables).
* **Auto-regressive models** [14, 15, 22, 26, 27, 28, 29, 30, 35] typically have efficient \(O(M)\) likelihood evaluation. Sampling can also be achieved with \(O(M)\) cost in principle, but existing implementations are impractically slow.
* **Continuous time models** include diffusion models [23, 24] defined via stochastic differential equations and continuous-time normalizing flows [36] defined via ordinary differential equations. In these models, likelihood computation requires integrating a scalar ordinary differential equation defined by the divergence of a vector field (the marginal score function). This computation typically has computational cost \(O(M^{3})\) unless additional structure is forced on to the model [37].
* **Discrete time normalizing flow models** [38, 39, 40, 41, 42, 43] remain good candidate models. In a discrete flow, the generative process maps a latent vector \(z\) (a field configuration in the lattice field theory context) sampled from a base density into the target density via the composition of a series of parametric diffeomorphisms \(F_{1}\), \(F_{2}\), ..., \(F_{n}\). If \(z\) is sampled with density \(r(z)\), then a flow sample \(x=F(z)\) has known density \(q(x)=r(z)\left|\det\partial F/\partial z\right|^{-1}\), where \(F=F_{1}\circ\cdots\circ F_{n}\) is the composed diffeomorphism. By restricting to \(F_{i}\) for which \(\partial F_{i}/\partial z\) is a triangular matrix, the cost of \(\left|\det\partial F/\partial z\right|\) is only \(O(M)\) (ref. 38).

Figure 2: **Comparison between the sampling tasks of quantum field generation for lattice quantum chromodynamics and image generation.** In addition to differences in the target and symmetries of the problems, the hierarchy of degrees of freedom (dof) per sample to number of samples is inverted for quantum field generation as compared with image generation. The action \(S\) encodes the dynamics of the theory by defining the statistical distribution \(p\), \(U\) is the gluon field, and \(Z\) is a normalizing constant. The image on the right side is reprinted from Kaggle ([https://www.kaggle.com/datasets/vitaliykinakh/stable-imagenet1k](https://www.kaggle.com/datasets/vitaliykinakh/stable-imagenet1k)) under a Creative Commons license (CC0 1.0).
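To illustrate the discrete-flow construction in the final bullet above, here is a toy affine coupling layer on an unconstrained scalar 'field' of \(M\) sites; the linear conditioner 'networks' are placeholders, and real lattice-QCD flows replace this with gauge-equivariant transforms. The point is the triangular Jacobian, which reduces the log-determinant to an \(O(M)\) sum.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8
Ws = 0.1 * rng.standard_normal((M // 2, M // 2))
Wt = 0.1 * rng.standard_normal((M // 2, M // 2))

def coupling(z):
    """Affine coupling: freeze half the sites, transform the other half
    conditioned on the frozen half. The Jacobian is triangular."""
    a, b = z[: M // 2], z[M // 2 :]
    s, t = np.tanh(Ws @ a), Wt @ a            # conditioner "networks" (toy)
    x = np.concatenate([a, b * np.exp(s) + t])
    return x, np.sum(s)                        # log|det dF/dz| in O(M)

z = rng.standard_normal(M)                     # latent sample from base r(z)
x, logdet = coupling(z)
log_r = -0.5 * z @ z - 0.5 * M * np.log(2 * np.pi)
log_q = log_r - logdet                         # q(x) = r(z) |det dF/dz|^{-1}
```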
* **Continuous time models** include diffusion models [23, 24] defined via stochastic differential equations and continuous-time normalizing flows [36] defined via ordinary differential equations. In these models, likelihood computation requires integrating a scalar ordinary differential equation defined by the divergence of a vector field (the marginal score function). This computation typically has computational cost \(O(M^{3})\) unless additional structure is forced on to the model [37]. * **Discrete time normalizing flow models**[38, 39, 40, 41, 42, 43] remain as good candidate models. In a discrete flow, the generative process maps a latent vector \(z\) (a field configuration in the lattice field theory context) sampled from a base density into the target density via the composition of a series of parametric diffeomorphisms \(F_{1}\), \(F_{2}\),..., \(F_{n}\). If \(z\) is sampled with density \(r(z)\), then a flow sample \(x=F(z)\) has known density \(q(x)=r(z)\left|\det\partial F/\partial z\right|^{-1}\), where \(F=F_{1}\circ\cdots\circ F_{n}\) is the composed diffeomorphism. By restricting to \(F_{i}\) for which \(\partial F_{i}/\partial z\) is a triangular matrix, the cost of \(\left|\det\partial F/\partial z\right|\) is only \(O(M)\) (ref. 38). Figure 2: **Comparison between the sampling tasks of quantum field generation for lattice quantum chromodynamics and image generation.** In addition to differences in the target and symmetries of the problems, the hierarchy of degrees of freedom (dof) per sample to number of samples is inverted for quantum field generation as compared with image generation. The action \(S\) encodes the dynamics of the theory by defining the statistical distribution \(p\), \(U\) the gluon field, and \(Z\) is a normalizing constant. The image on the right side is reprinted from Kaggle ([https://www.kaggle.com/datasets/vitaliykinakh/stable-imagenet1k](https://www.kaggle.com/datasets/vitaliykinakh/stable-imagenet1k)) under a Creative Commons license (CC0 1.0). These models, however, also have intrinsic limitations that must be worked around, such as topology preservation of the diffeomorphism [44] and difficulty in modelling tail-behaviour of a target density if the tails are not already in the base density [45]. One can also consider sampling in an augmented space, where the data space is augmented with an additional set of auxiliary or latent variables. In this setting, it may be viable to reconsider VAEs [33, 34] or VAE-flow hybrids [46], but currently there are no results demonstrating these methods perform well compared to models working directly on the data space for lattice QCD. ### Methods to guarantee asymptotic exactness Several mechanisms have been proposed to combine generative models with Markov chain Monte Carlo and importance-sampling algorithms in order to inherit their asymptotic convergence guarantees. One of the simplest mechanisms is neural importance sampling, in which a model density \(q_{\theta}(x)\approx p(x)\) is used to evaluate expectations under the target \(p\) via \(\mathbb{E}_{p}[\mathcal{O}(x)]=\mathbb{E}_{q_{\theta}}[\frac{p(x)}{q_{\theta} (x)}\mathcal{O}(x)]\) (ref. [47]). An appealing alternative is to incorporate generative models into an asymptotically exact Markov process, which allows existing analysis techniques to be used or existing Markov chain updates to be combined with the ML sampling approach. 
Generally, the Metropolis-Hastings algorithm uses an ergodic transition kernel \(K(x^{\prime}|x)\) to propose Markov chain updates \(x\to x^{\prime}\) which are accepted with probability \[p_{\rm acc}(x^{\prime}|x)=\min\left(1,\frac{K(x|x^{\prime})p(x^{\prime})}{K(x^ {\prime}|x)p(x)}\right) \tag{5}\] to ensure that the asymptotic equilibrium distribution is the desired target distribution [10, 19]. Although this method is guaranteed to converge [48], the speed of convergence depends on the target density and the choice of transition kernel \(K(x^{\prime}|x)\). ML models can be combined with the Metropolis-Hastings approach by using generative models to construct the kernel \(K(x^{\prime}|x)\). A direct approach is to use the trained model \(q_{\theta}(x)\approx p(x)\) to produce independent and identically distributed (iid) proposal samples, \(K(x^{\prime}|x)=q_{\theta}(x^{\prime})\), with the convergence rate determined by the quality of the model approximation to the target distribution. More advanced techniques include neural transport Monte Carlo [49, 50], where Markov chain Monte Carlo is performed in the latent space; learned Monte Carlo proposals [51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64], where the goal is to directly learn the kernel \(K(x^{\prime}|x)\); learned sequential Monte Carlo sampling [65, 66, 67, 46], which combines deterministic flows with sequential Monte Carlo annealing techniques; Monte Carlo variance minimization [69, 67, 47]; and stochastic normalizing flows [46], which interleave deterministic transforms with latent-variable VAE-like components and Markov chain Monte Carlo transforms. HMC is a particularly successful family of kernels and can also be combined with learnt components. In the presence of pseudo-fermions, simulating the Hamiltonian dynamics for HMC requires an expensive computation of 'force terms', which is another area where acceleration may be possible using ML [62, 70, 71, 72, 73]. ### Incorporating manifold constraints and gauge symmetry in ML models When the target density for sampling features an exact or approximate symmetry, breaking that symmetry in a sampling algorithm will result in computational inefficiencies. In particular, continuous exact symmetries naturally reduce the effective dimensionality of the target distribution, such that incorporating them directly reduces the difficulty of modelling that target. In addition, training a model with the symmetry explicitly encoded modifies the structure of the loss landscape in a way that may make training feasible in cases where attempting to approximately 'learn' the symmetry may not be. Moreover, if guarantees of exactness are required, any remaining symmetry-breaking after training will result in additional costs to correct that breaking via the approaches discussed in the previous subsection. It is thus often advantageous, and in some cases critical, to incorporate symmetries into ML architectures and/or training approaches. Modern ML offers many techniques that seek to take advantage of known symmetries. The simplest such method is data augmentation [74], where the available training set is 'augmented' with randomly transformed input/output pairs. Another technique consists of explicitly adding a term to the optimization target that encourages equivariance or invariance with respect to a group of transformations [75, 76]. Both approaches only serve to assist the training, and symmetries still have to be learnt by the model architecture. 
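To make the construction in equation (5) concrete, the sketch below implements flow-based independence Metropolis, with \(K(x^{\prime}|x)=q_{\theta}(x^{\prime})\), for a toy one-dimensional double-well action; both the action and the fixed Gaussian standing in for a trained model \(q_{\theta}\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_p(x):            # unnormalized target: double-well action S(x)
    return -0.5 * (x**2 - 1.0) ** 2

def log_q(x):            # unnormalized "trained model" density (a Gaussian)
    return -0.5 * (x / 1.3) ** 2

x, samples = 0.0, []
for _ in range(20000):
    xp = 1.3 * rng.standard_normal()          # iid proposal drawn from q
    # Equation (5) with K(x'|x) = q(x'); normalizations of p and q cancel.
    log_acc = (log_p(xp) - log_q(xp)) - (log_p(x) - log_q(x))
    if np.log(rng.uniform()) < log_acc:
        x = xp
    samples.append(x)
# The chain is asymptotically exact for p even though q is only approximate.
```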
Alternatively to the augmentation- and penalty-based training approaches above, it is also possible to construct architectures such that they respect known symmetries by construction; standard convolutional neural networks, for example, are equivariant maps with respect to translations. The most commonly studied symmetries are finite groups [77] and SE(3), the group of isometries of 3D Euclidean space [78, 79]. Naturally, the larger the symmetry group, the harder it is to learn the symmetries, either via data augmentation or via additional optimization targets. In the case of lattice QCD, the gauge symmetry has a prohibitively large dimension scaling with the number of lattice sites, hence it is likely essential to build gauge-equivariant and invariant neural networks for this application. For example, with current architectures and training approaches it has been demonstrated that it is essential to exactly incorporate gauge symmetry [80], but not translations and hypercubic transformations [81], for successful training of flow-based sampling algorithms for lattice field theory. Building generative models that exactly incorporate the symmetry constraints of lattice QCD is a non-trivial task that has required the introduction of several new ML models to treat both gauge and pseudo-fermionic degrees of freedom [80, 81, 82, 83, 84, 85, 86, 87, 88, 89]. This approach relies on the observation that starting from a base distribution that is gauge-invariant (such as the Haar measure on \(\mathrm{SU}(N)\)) and applying a gauge-equivariant diffeomorphism to this base density yields a new density that is also gauge-invariant [38, 76, 83, 90]. One can reduce the problem of building gauge-equivariant diffeomorphisms on the gauge degrees of freedom situated on the edges of a lattice to the problem of building matrix-conjugation-equivariant diffeomorphisms [80, 81] on SU(3), which is a simpler problem. As described in equation (4), a gauge symmetry transformation \(T_{\Omega}\) is parameterized by a field of SU(3) variables \(\Omega(x)\) and acts on an edge, or 'link', variable as \(T_{\Omega}U_{\mu}(x)=\Omega(x)U_{\mu}(x)\Omega^{\dagger}(x+\hat{\mu})\). As a consequence, a product of link variables along a closed loop \(\Lambda(x)\) starting and ending at a point \(x\) transforms as \(\Lambda(x)\rightarrow\Omega(x)\Lambda(x)\Omega^{\dagger}(x)\). Mathematically this is referred to as a matrix-conjugation, or adjoint transformation. If \(U_{\mu}(x)\) is a link, \(\Gamma(x,x+\hat{\mu})\) is a product of links along an open path from \(x\) to \(x+\hat{\mu}\) that does not contain \(U_{\mu}(x)\), and \(g:\text{SU(3)}\rightarrow\text{SU(3)}\) is a conjugation-equivariant diffeomorphism, then a gauge-equivariant diffeomorphism \(f\) can be constructed as \[f(U_{\mu}(x))=g(U_{\mu}(x)\Gamma^{\dagger}(x,x+\hat{\mu}))\Gamma(x,x+\hat{\mu}), \tag{6}\] as shown in Fig. 3. The diffeomorphism \(f\) then manifestly satisfies gauge equivariance, \(f(T_{\Omega}U)=T_{\Omega}f(U)\). Building gauge-equivariant diffeomorphisms for pseudo-fermion degrees of freedom is also possible using extensions of this approach based on parallel transport [91].

### A roadmap for ML-based sampling in lattice QCD

The great potential of ML-based sampling for lattice field theories has inspired rapid developments that have already demonstrated profound successes.
Following early applications of flow-based sampling to field theories other than QCD [80, 88], the approach was developed for theories in two spacetime dimensions, specifically for SU(3) gauge fields without dynamical quark (fermionic) degrees of freedom [81] and for U(1) gauge fields with dynamical fermions [86]. Combining these advances enabled the first application of flow models to sampling QCD in 4D [92], albeit with small space-time volumes. ML-accelerated updating schemes have been developed, again for small volumes and with the SU(2) gauge group instead of SU(3) (ref. [62]), and continuous-time models inspired by previous work in the lattice field theory community [93, 94] have been applied to simple lattice field theories [95] and both U(1) gauge theory [82] and SU(3) gauge theory [96] in 2D. These approaches have had astounding success; Fig. 4 illustrates the advantages of flow-based sampling in one particular toy theory, but the conclusion that ML-accelerated sampling schemes can overcome the critical slowing-down and topological freezing challenges faced by HMC has been clear and universal. It is important to emphasize that this success includes theories with fermionic degrees of freedom, where ML-accelerated sampling schemes have been developed to integrate with the usual approach of pseudo-fermions [91, 92, 97]. However, the crucial aspect missing in all applications so far is a demonstration of the effectiveness of ML techniques at the scale of state-of-the-art lattice QCD calculations in nuclear and particle physics. We expect that not only will we soon see applications of ML-accelerated sampling to lattice field theory at scale, but that running at scale is a key ingredient necessary to realize the full potential of ML in this context. In particular, we anticipate that the first impact of ML for this application will be that, once the potentially high cost of training is paid, ML-based sampling will be orders of magnitude faster than traditional HMC, mitigating critical slowing-down, overcoming topological freezing, and opening the door to a sampling regime where this training cost can be efficiently amortized, as depicted in Fig. 5. At precisely which scale this advantage will be reached is not yet clear; the computational cost of training ML models in this context may vary by orders of magnitude between different architectures and training approaches [98]. As the optimal approach to model parameterization and training can depend sensitively on the number of samples which are ultimately required, the balance of training and sampling costs is highly problem-dependent, and the regime in which flow-based sampling outperforms HMC for lattice QCD applications will depend on precisely how the flow models are used (and reused). It is already evident, however, that achieving this paradigm of efficient ML-accelerated sampling will require considerable investment; it is clear that in the field of generative ML as a whole, the substantial progress in text [14, 15, 26, 27, 28, 29, 30] and image modelling [22, 23, 24, 25] has required pushing the boundaries of model size. So far, the generative ML experiments for lattice field theory are of comparatively small scale, despite the target scale of the problem itself being comparable to, or even larger than, applications in these domains.
Success will thus likely require model scales, and corresponding investments in upfront training, that will constitute a change in paradigm of computational resource use for the theoretical physics community.

Figure 3: **Illustration of a gauge-equivariant transformation layer [80, 81].** Here, a parametric gauge-equivariant diffeomorphism is constructed from a diffeomorphism \(g\) satisfying \(g(\Omega U\Omega^{\dagger})=\Omega g(U)\Omega^{\dagger}\) (matrix-conjugation equivariance). To update the matrix-valued link \(U_{\mu}(x)\) in red, this transformation first updates the plaquette \(U_{\mu}(x)\Gamma^{\dagger}(x,x+\hat{\mu})\) containing that link, before 'pushing' that update onto the link by assigning \(U^{\prime}_{\mu}(x)=g(U_{\mu}(x)\Gamma^{\dagger}(x,x+\hat{\mu}))\Gamma(x,x+\hat{\mu})\). The output link \(U^{\prime}_{\mu}(x)\) transforms appropriately under gauge transformations when \(U_{\mu}(x)\) and \(\Gamma(x,x+\hat{\mu})\) are transformed. See the main text for further details.

Nevertheless, as a fundamentally structured problem, we anticipate that scaling the custom ML solutions developed for lattice field theory to large models will pay dividends and that lattice QCD will join the list of scientific problems which have seen significant impacts from ML at state-of-the-art scale, such as low-dimension Bayesian parameter inference for astrophysics [99], quantum Monte Carlo [100] and protein-folding [13]. Beyond the anticipated impacts of mitigating sampling challenges such as critical slowing-down and topological freezing, ML models have the potential to catalyse other paradigm shifts in lattice field theory. For example, they naturally offer new opportunities for community resource sharing. Ensembles of lattice field configurations are large enough in size (petabytes for state-of-the-art ensembles) that they cannot be easily shared, and massive investments in tape resources are made to store them. In contrast, even the largest ML models only contain a few terabytes of parameters. These can easily be shared, allowing research groups around the globe to efficiently generate their own configurations or reproduce ensembles from a known seed, in both cases capitalizing on community-owned pre-trained models. Another important opportunity is that ML-based samplers can be conditioned on various parameters of the theory, from the lattice spacing and volume to physical parameters such as the strength of coupling of the fundamental particles of the theory. The potential to generate 'correlated' sets of samples at different parameters, interpolated, or even extrapolated [101, 102], from the parameters used during training, is qualitatively distinct from what is possible using traditional sampling algorithms such as HMC (in principle this is also possible for pre-selected parameter sets, using approaches such as parallel tempering), and offers new parameter extrapolation methods. Ultimately, one could even imagine more general ways of conditioning these models (for example, on a symbolic description of the target action), enabling new approaches such as direct measurements of the effect of modifying the action on physical observables. As such, ML techniques hold the promise of redefining conventional wisdom in this field, with implications that have yet to be fully explored.
## 4 Outlook

As ML continues to evolve in scope and complexity, its applications in science are being driven into two broad categories: those that can adapt existing ML technologies (usually developed to model images, sound and text) and those that demand ground-up development and inspire innovation. Lattice field theory is becoming established as a prime example of the latter, being simultaneously an important science application in which algorithmic acceleration will have wide-reaching implications for fundamental physics and a massive computational challenge of a scope and scale that has driven advances in computation and algorithms for decades.

Figure 4: **Demonstration of the advantages of flow-based sampling in a U(1) lattice gauge theory in two spacetime dimensions [80].** The inset shows the rapid mixing of topological charge \(Q\) when sampling with normalizing flows, compared with Hamiltonian/hybrid Monte Carlo (HMC) and Heat Bath (HB) algorithms for the action defined by \(\beta=7\) (see equation (3) for the definition of the analogous parameter in quantum chromodynamics). The main graph shows the asymptotically improved scaling of \(\tau_{Q}^{\text{int}}\) towards the continuum limit \(\beta\to\infty\), where \(\tau_{Q}^{\text{int}}\) is the 'integrated autocorrelation time' of the topological charge, which is a measure of cost in Markov-process sampling and can be interpreted here as a metric for critical slowing-down. Reproduced with permission from ref. [80], APS.

In particular, the challenge of sampling lattice field configurations for nuclear and particle physics calculations is a definitive proving-ground for generative ML models in science. With strict requirements of asymptotic exactness, and an ultimate scale at which each sample is several terabytes in size, it is clear that symmetries, structure and domain knowledge must be incorporated into ML architectures designed for this task. In that regard, the application of ML to sampling in lattice field theory is a key exemplar informing the debate around the value of engineering and incorporating domain or expert knowledge that is currently underway in the ML community: whereas ardent supporters of modern deep learning often argue against the long-term value of incorporating such knowledge (the 'bitter lesson' theory [103], advocated by Richard Sutton), in the physical sciences, and in particular in theoretical physics calculations, there are often precise mathematical formulations of domain knowledge that not only make little sense to ignore, but are intractable to learn from data. Adding to the complexity of this engineering and design challenge, existing algorithmic benchmarks for sampling in lattice field theory are extremely high; new ML samplers must compete against well-established algorithms that have been optimized in co-design with high-performance computing systems for more than four decades. As such, ML for lattice field theory is not only a benchmark for the application of ML in science, but it is also an endeavour with paradigm-shifting potential for physics. If the success already achieved in sampling field theories at toy scales can be mimicked at state-of-the-art scales, it will transform the computational landscape of a field that is one of the largest consumers of open-science supercomputing (computing available to public scientific applications) worldwide, with impacts across particle, nuclear and condensed matter physics and beyond.
On the ML side, it will be a flagship example of the power of sophisticated domain-specific customization and engineering to achieve transformative impact in computational science. ## Acknowledgements We thank W. Detmold and R. D. Young for comments on the manuscript. P.E.S. was supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under grant Contract Number DE-SC0011090 and by Early Career Award DE-SC0021006, by a NEC research award, and by the Carl G. and Shirley Sontheimer Research Fund. G.K. was supported by funding from the Schweizerischer Nationalfonds (grant agreement no. 200020_200424). ## Author contributions The authors contributed equally to all aspects of the article.
2310.18564
A General Framework for Robust G-Invariance in G-Equivariant Networks
We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ($G$-CNNs), which we call the $G$-triple-correlation ($G$-TC) layer. The approach leverages the theory of the triple-correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps--such as the max--are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the $G$-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max $G$-Pooling in $G$-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for $G$-CNNs defined on both commutative and non-commutative groups--$SO(2)$, $O(2)$, $SO(3)$, and $O(3)$ (discretized as the cyclic $C8$, dihedral $D16$, chiral octahedral $O$ and full octahedral $O_h$ groups)--acting on $\mathbb{R}^2$ and $\mathbb{R}^3$ on both $G$-MNIST and $G$-ModelNet10 datasets.
Sophia Sanborn, Nina Miolane
2023-10-28T02:27:34Z
http://arxiv.org/abs/2310.18564v2
# A General Framework for Robust \(G\)-Invariance in \(G\)-Equivariant Networks

###### Abstract

We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (\(G\)-CNNs), which we call the \(G\)-triple-correlation (\(G\)-TC) layer. The approach leverages the theory of the triple-correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also _complete_. Many commonly used invariant maps--such as the max--are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the \(G\)-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max \(G\)-Pooling in \(G\)-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for \(G\)-CNNs defined on both commutative and non-commutative groups--\(SO(2)\), \(O(2)\), \(SO(3)\), and \(O(3)\) (discretized as the cyclic \(C8\), dihedral \(D16\), chiral octahedral \(O\) and full octahedral \(O_{h}\) groups)--acting on \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) on both \(G\)-MNIST and \(G\)-ModelNet10 datasets.

## 1 Introduction

The _pooling_ operation is central to the convolutional neural network (CNN). It was originally introduced in the first CNN architecture--Fukushima's 1980 _Neocognitron_ [17]--and has remained a fixture of the model ever since. The Neocognitron was directly inspired by the canonical model of the visual cortex as a process of hierarchical feature extraction and local pooling [25; 1]. In both the neuroscience and CNN models, pooling is intended to serve two purposes. First, it facilitates the local-to-global _coarse-graining_ of structure in the input. Second, it facilitates _invariance_ to local changes--resulting in network activations that remain similar under small perturbations of the input. In this way, CNNs construct hierarchical, multi-scale features that have increasingly large extent and increasing invariance.

The pooling operation in traditional CNNs, typically a local max or average, has remained largely unchanged over the last forty years. The variations that have been proposed in the literature [40; 56] mostly tackle its _coarse-graining_ purpose, improve computational efficiency, or reduce overfitting, but do not seek to enhance its properties with respect to _invariance_. Both max and avg operations are reasonable choices for the goal of coarse-graining within CNNs and \(G\)-CNNs. However, they are excessively imprecise and lossy with respect to the goal of constructing robust representations of objects that are invariant only to irrelevant visual changes. Indeed, the max and avg operations are invariant to many natural image transformations, such as translations and rotations, but also to unnatural transformations, including pixel permutations, that may destroy the image structure. This excessive invariance has been implicated in failure modes such as vulnerability to adversarial perturbations [20; 26] and a bias towards textures rather than objects [4].
To overcome these challenges and enable robust and selective invariant representation learning, there is a need for novel computational primitives that selectively parameterize invariant maps for natural transformations. Many of the transformations that occur in visual scenes are due to the actions of _groups_. The appreciation of this fact has led to the rise of group-equivariant convolutional networks (\(G\)-CNNs) [8] and the larger program of Geometric Deep Learning [6]. While this field has leveraged the mathematics of group theory to attain precise generalized group-equivariance in convolutional network layers, the pooling operation has yet to meet its group-theoretic grounding. Standardly, invariance to a group \(G\) is achieved with a simple generalization of max pooling: Max \(G\)-Pooling [8]--see Fig. 1 (top-right). However, this approach inevitably suffers from the lossiness of the max operation.

Here, we unburden the pooling operation of the dual duty of invariance and coarse-graining by uncoupling these operations into two steps that can be performed with precision. We retain the standard max and avg pooling for coarse-graining, but introduce a new method for robust \(G\)-invariance via the group-invariant triple correlation--see Fig. 1 (bottom-right). The group-invariant triple correlation is the lowest-order complete operator that can achieve exact invariance [32]. As such, we propose a general framework for robust \(G\)-Invariance in \(G\)-Equivariant Networks. We show the advantage of this approach over standard max \(G\)-pooling in several \(G\)-CNN architectures. Our extensive experiments demonstrate improved classification accuracy on traditional benchmark datasets as well as improved adversarial robustness.

Figure 1: **Achieving Robust \(G\)-Invariance in \(G\)-CNNs with the \(G\)-Triple-Correlation**. The output of a \(G\)-Convolutional layer is equivariant to the actions of \(G\) on the domain of the signal. To identify signals that are equivalent up to group action, the layer can be followed by a \(G\)-Invariant map that eliminates this equivariance. In \(G\)-CNNs, Max \(G\)-Pooling is commonly used for this purpose. Taking the maximum of the \(G\)-Convolutional equivariant output is indeed invariant to the actions of the group. However, it is also lossy: many non-equivalent output vectors have the same maximum. Our method, the \(G\)_-Triple-Correlation_, is the lowest-order polynomial invariant map that is _complete_ [46]. As a complete invariant, it preserves all information about the signal structure, removing only the action of the group. Our approach thus provides a new foundation for achieving robust \(G\)-Invariance in \(G\)-CNNs.

## 2 Background

We first cover the fundamentals of group-equivariant neural networks--also known as \(G\)-CNNs, or \(G\)-Equivariant Networks--before introducing the framework for \(G\)-Invariant Pooling.

### Mathematical Prerequisites

The construction of \(G\)-CNNs requires mathematical prerequisites from group theory, which we recall here. The interested reader can find details in [23].

**Groups.** A _group_ \((G,\cdot)\) is a set \(G\) with a binary operation \(\cdot\), which we can generically call the _product_. The notation \(g_{1}\cdot g_{2}\) denotes the product of two elements in the set; however, it is standard to omit the operator and write simply \(g_{1}g_{2}\)--a convention we adopt here. Concretely, a group \(G\) may define a class of transformations. For example, we can consider the group of two-dimensional rotations in the plane--the special orthogonal group \(SO(2)\)--or the group of two-dimensional rotations and translations in the plane--the special euclidean group \(SE(2)\). Each element of the group \(g\in G\) defines a _particular_ transformation, such as one _rotation by \(30^{\circ}\)_ or one _rotation by \(90^{\circ}\)_. The binary operation \(\cdot\) provides a means for combining two particular transformations--for example, first rotating by \(30^{\circ}\) and then rotating by \(90^{\circ}\). In mathematics, for a set of transformations \(G\) to be a group under the operation \(\cdot\), the four axioms of closure, associativity, identity and inverse must hold. These axioms are recalled in Appendix A.
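For a finite discretized group, this entire structure is captured by the product (Cayley) table alone, which is also all that the implementation in Section 5 requires. A minimal sketch, using the cyclic group \(C4\) as an assumed example, checks the four axioms directly from the table:

```python
import numpy as np

# A discretized group represented purely by its Cayley (product) table,
# here the cyclic group C4: element i is a rotation by i * 90 degrees,
# so the product of elements i and j is the rotation by (i + j) * 90 degrees.
n = 4
cayley = np.array([[(i + j) % n for j in range(n)] for i in range(n)])
identity = 0

# Closure: every product is again an element of {0, ..., n-1}.
assert cayley.min() >= 0 and cayley.max() < n
# Identity: e * g == g * e == g for all g.
assert all(cayley[identity, g] == g == cayley[g, identity] for g in range(n))
# Inverses: each element has some h with g * h == e.
inverse = [int(np.where(cayley[g] == identity)[0][0]) for g in range(n)]
# Associativity: (g1 * g2) * g3 == g1 * (g2 * g3).
assert all(
    cayley[cayley[g1, g2], g3] == cayley[g1, cayley[g2, g3]]
    for g1 in range(n) for g2 in range(n) for g3 in range(n)
)
print(cayley)   # the full product structure of C4
print(inverse)  # [0, 3, 2, 1]
```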
**Group Actions on Spaces.** We detail how a transformation \(g\) can transform elements of a space, for example how a rotation of \(30^{\circ}\) indeed rotates a vector in the plane by \(30^{\circ}\). We say that the transformations \(g\) act on (the elements of) a given space. Specifically, consider a space \(X\), such as the plane. A _group action_ is a function \(L:G\times X\to X\) that maps \((g,x)\) pairs to elements of \(X\). We say a group \(G\) _acts_ on a space \(X\) if the following properties of the action \(L\) hold:

1. _Identity_: The identity \(e\) of the group \(G\) "does nothing", i.e., it maps any element \(x\in X\) to itself. This can be written as \(L(e,x)=x\).
2. _Compatibility_: Two elements \(g_{1},g_{2}\in G\) can be combined before or after the map \(L\) to yield the same result, i.e., \(L(g_{1},L(g_{2},x))=L(g_{1}g_{2},x)\). For example, rotating a 2D vector by \(30^{\circ}\) and then \(40^{\circ}\) yields the same result as rotating that vector by \(70^{\circ}\) at once.

For simplicity, we will use the shortened notation \(L_{g}(x)\) to denote \(L(g,x)\), the action of the transformation \(g\) on the element \(x\). Some group actions \(L\) have additional properties and turn the spaces \(X\) on which they operate into _homogeneous spaces_. Homogeneous spaces play an important role in the definition of the \(G\)-convolution in \(G\)-CNNs, so we recall their definition here. We say that \(X\) is a _homogeneous space_ for a group \(G\) if \(G\) acts transitively on \(X\)--that is, if for every pair \(x_{1},x_{2}\in X\) there exists an element \(g\in G\) such that \(L_{g}(x_{1})=x_{2}\). The concept can be clearly illustrated by considering the surface of a sphere, the space \(S^{2}\). The sphere \(S^{2}\) is a homogeneous space for \(SO(3)\), the group of orthogonal \(3\times 3\) matrices with determinant one that define 3-dimensional rotations. Indeed, for every pair of points on the sphere, one can define a 3D rotation matrix that takes one to the other.
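These properties are easy to verify numerically. The snippet below, an illustrative aside with arbitrarily chosen angles and points, checks identity and compatibility for the rotation action of \(SO(2)\) on the plane, and transitivity on the unit circle (a homogeneous space of \(SO(2)\)):

```python
import numpy as np

def rotation(theta_deg):
    """Action of an SO(2) element: a 2x2 rotation matrix acting on the plane."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x = np.array([1.0, 0.0])  # an arbitrary point in the plane

# Identity: the 0-degree rotation maps x to itself.
assert np.allclose(rotation(0) @ x, x)

# Compatibility: rotating by 40 then 30 degrees equals rotating by 70 degrees at once.
assert np.allclose(rotation(30) @ (rotation(40) @ x), rotation(70) @ x)

# Transitivity on the unit circle: some rotation maps x to any other unit vector y.
y = np.array([np.cos(1.2), np.sin(1.2)])
g = np.arctan2(y[1], y[0]) - np.arctan2(x[1], x[0])
assert np.allclose(rotation(np.rad2deg(g)) @ x, y)
```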
**Group Actions on Signal Spaces.** We have introduced essential concepts from group theory, where a group \(G\) can act on any abstract space \(X\). Moving towards building \(G\)-CNNs, we introduce how groups can act on spaces of signals, such as images. Formally, a _signal_ is a map \(f:\Omega\to\mathbb{R}^{c}\), where \(\Omega\) is called the domain of the signal and \(c\) denotes the number of channels. The _space of signals_ itself is denoted \(L_{2}(\Omega,\mathbb{R}^{c})\). For example, \(\Omega=\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\) for 2D and 3D images. Gray-scale images have one channel (\(c=1\)) and color images have the 3 red-green-blue channels (\(c=3\)).

Any action of a group of transformations \(G\) on a domain \(\Omega\) yields an action of that same group on the space of signals defined on that domain, i.e., on \(L_{2}(\Omega,\mathbb{R}^{c})\). For example, knowing that the group of 2D rotations \(SO(2)\) acts on the plane \(\Omega=\mathbb{R}^{2}\) allows us to define how \(SO(2)\) rotates 2D gray-scale images in \(L_{2}(\mathbb{R}^{2},\mathbb{R}^{c})\). Concretely, the action \(L\) of a group \(G\) on the domain \(\Omega\) yields the following action of \(G\) on \(L_{2}(\Omega,\mathbb{R}^{c})\):

\[L_{g}[f](u)=f(L_{g^{-1}}(u)),\qquad\text{for all }u\in\Omega\text{ and for all }g\in G. \tag{1}\]

We use the same notation \(L_{g}\) to refer to the action of the transformation \(g\) on either an element \(u\) of the domain or on a signal \(f\) defined on that domain, distinguishing them using \([\cdot]\) for the signal case. We note that the domain of a signal can be the group itself: \(\Omega=G\). In what follows, we will also consider actions on real signals defined on a group, i.e., on signals such as \(\Theta:G\to\mathbb{R}\).

**Invariance and Equivariance.** The concepts of group-invariance and equivariance are at the core of what makes \(G\)-CNNs desirable for computer vision applications. We recall their definitions here. A function \(\psi:X\to Y\) is _\(G\)-invariant_ if \(\psi(x)=\psi(L_{g}(x))\) for all \(g\in G\) and \(x\in X\). This means that group actions on the input space have no effect on the output. Applied to the group of rotations acting on the space of 2D images \(X=L_{2}(\Omega,\mathbb{R}^{c})\) with \(\Omega=\mathbb{R}^{2}\), this means that a \(G\)-invariant function \(\psi\) produces an output that stays the same for any rotated version of a given signal. For example, whether the image contains the color red is invariant with respect to any rotation of that image.

A function \(\psi:X\to Y\) is _\(G\)-equivariant_ if \(\psi(L_{g}(x))=L_{g}^{\prime}(\psi(x))\) for all \(g\in G\) and \(x\in X\), where \(L\) and \(L^{\prime}\) are two different actions of the group \(G\), on the spaces \(X\) and \(Y\) respectively. This means that a group action on the input space results in a corresponding group action of the same group element \(g\) on the output space. For example, consider \(\psi\) that represents a neural network performing a foreground-background segmentation of an image. It is desirable for \(\psi\) to be equivariant to the group of 2D rotations. This equivariance ensures that, if the input image \(f\) is rotated by \(30^{\circ}\), then the output segmentation \(\psi(f)\) rotates by \(30^{\circ}\) as well.

### \(G\)-Equivariant Networks

\(G\)-CNNs are built from the following fundamental building blocks: \(G\)-convolution, spatial pooling, and \(G\)-pooling. The \(G\)-convolution is equivariant to the action of the group \(G\), while the \(G\)-pooling achieves \(G\)-invariance. Spatial pooling achieves coarse-graining. We review the group-specific operations here. The interested reader can find additional details in [8; 10], which include the definitions of these operations using the group-theoretic framework of principal bundles and associated vector bundles.

#### 2.2.1 \(G\)-Convolution

In plain language, a standard translation-equivariant convolutional neural network layer sweeps filters across a signal (typically, an image), translating the filter and then taking an inner product with the signal to determine the similarity between a local region and the filter.
\(G\)-CNNs [8] generalize this idea, replacing translation with the action of other groups that define symmetries in a machine learning task--for example, rotating a filter to determine the presence of a feature in various orientations. Consider a signal \(f\) defined on a domain \(\Omega\) on which a group \(G\) acts. A neural network filter is a map \(\phi:\Omega\to\mathbb{R}^{c}\) defined with the same domain \(\Omega\) and codomain \(\mathbb{R}^{c}\) as the signal. A \(G\)-convolutional layer is defined by a set of filters \(\{\phi_{1},...,\phi_{K}\}\). For a given filter \(k\), the layer performs a \(G\)-_convolution_ with the input signal \(f\):

\[\Theta_{k}(g)=(\phi_{k}*f)(g)=\int_{u\in\Omega}\phi_{k}(L_{g^{-1}}(u))f(u)du,\quad\forall g\in G, \tag{2}\]

by taking the dot product in \(\mathbb{R}^{c}\) of the signal with a transformed version of the filter. In practice, the domain \(\Omega\) of the signal is discretized, such that the \(G\)-convolutional layer becomes:

\[\Theta_{k}(g)=\sum_{u\in\Omega}\phi_{k}(L_{g^{-1}}(u))f(u),\quad\forall g\in G. \tag{3}\]

The output of one filter \(k\) is therefore a map \(\Theta_{k}:G\to\mathbb{R}\), while the output of the whole layer with \(K\) filters is \(\Theta:G\to\mathbb{R}^{K}\) defined as \(\Theta(g)=[\Theta_{1}(g),\dots,\Theta_{K}(g)]\) for all \(g\in G\). The \(G\)-convolution therefore outputs a signal \(\Theta\) whose domain has necessarily become the group \(\Omega=G\) and whose number of channels is the number of convolutional filters \(K\).

The \(G\)-convolution is _equivariant_ to the action of the group on the domain of the signal \(f\) [8]. That is, the action of \(g\) on the domain of \(f\) results in a corresponding action on the output of the layer. Specifically, considering a filter \(\phi_{k}\), we have:

\[\phi_{k}*L_{g}[f]=L_{g}^{\prime}[\phi_{k}*f],\qquad\forall g\in G, \tag{4}\]

where \(L_{g}\) and \(L_{g}^{\prime}\) represent the actions of the same group element \(g\) on the functions \(f\) and \(\phi_{k}*f\) respectively. This property applies to the \(G\)-convolutions of the first layer and of the subsequent layers [8].

#### 2.2.2 \(G\)-Pooling

_Invariance_ to the action of the group is achieved by pooling over the group (\(G\)-Pooling) [8]. The pooling operation is typically performed after the \(G\)-convolution, so we restrict its definition to signals \(\Theta\) defined over a group \(G\). In \(G\)-pooling, a max is typically taken over the group elements:

\[\mu_{k}=\max_{g\in G}\Theta_{k}(g). \tag{5}\]

\(G\)-pooling extracts a single real scalar value \(\mu_{k}\) from the full feature vector \(\Theta_{k}\), which has \(|G|\) values, with \(|G|\) the size of the (discretized) group \(G\), as shown in Fig. 1. When the group \(G\) is a grid discretizing \(\mathbb{R}^{n}\), max \(G\)-Pooling is equivalent to the standard spatial max pooling used in translation-equivariant CNNs, and it can be used to achieve coarse-graining. More generally, \(G\)-Pooling is \(G\)-invariant, as shown in [8]. However, we argue here that it is excessively \(G\)-invariant. Although it achieves the objective of invariance to the group action, it also loses substantial information. As illustrated in Fig. 1, many different signals \(\Theta\) may yield the same result \(\mu\) through the \(G\)-pooling operation, even if these signals do not share semantic information. This excessive invariance creates an opportunity for adversarial susceptibility. Indeed, inputs \(f\) can be designed with the explicit purpose of generating a \(\mu_{k}\) that will fool a neural network and yield an unreasonable classification result. For this reason, we introduce our general framework for robust, selective \(G\)-invariance.
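To make these definitions concrete, the following minimal numpy sketch, with a random image and filter as placeholders, implements the discretized \(G\)-convolution of equation (3) for the rotation group \(C4\), checks the equivariance of equation (4), and confirms the invariance of max \(G\)-pooling (equation (5)):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(9, 9))    # input signal on a discretized plane (placeholder image)
phi = rng.normal(size=(9, 9))  # one filter, same size as the input (pure G-conv, no translation)
n = 4                          # C4: rotations by multiples of 90 degrees

def rotate(img, g):
    """Action L_g of group element g (rotation by g * 90 degrees) on an image."""
    return np.rot90(img, k=g)

def g_conv(f, phi):
    """Equation (3): Theta(g) = sum_u phi(L_{g^{-1}}(u)) f(u) = <L_g[phi], f>."""
    return np.array([np.sum(rotate(phi, g) * f) for g in range(n)])

theta = g_conv(f, phi)

# Equivariance (equation (4)): rotating the input permutes the output over G.
h = 1
theta_rot = g_conv(rotate(f, h), phi)
assert np.allclose(theta_rot, np.roll(theta, h))  # left translation on C4

# Max G-pooling (equation (5)) is therefore invariant, though lossy:
assert np.isclose(theta.max(), theta_rot.max())
```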
## 3 The \(G\)-Triple-Correlation Layer for Robust \(G\)-Invariance

We propose a \(G\)-Invariant layer designed for \(G\)-CNNs that is _complete_--that is, it preserves all information about the input signal except for the group action. Our approach leverages the theory of the triple correlation on groups [32] and applies it to the design of robust neural network architectures. Its theoretical foundations in signal processing and invariant theory allow us to define, in full generality, the unique \(G\)-invariant maps of lowest polynomial order that are complete, hence providing a general framework for selective, robust \(G\)-invariance in \(G\)-CNNs [46].

### The \(G\)-Triple-Correlation Layer

The \(G\)_-Triple-Correlation_ (\(G\)-TC) of a real signal \(\Theta:G\rightarrow\mathbb{R}\) is the integral of the signal multiplied by two independently transformed copies of it [32]:

\[\tau_{\Theta}(g_{1},g_{2})=\int_{g\in G}\Theta(g)\Theta\left(gg_{1}\right)\Theta\left(gg_{2}\right)dg. \tag{6}\]

This definition holds for any locally compact group \(G\) on which we can define the Haar measure \(dg\) used for integration purposes [28]. The definition above is applicable to \(G\)-CNNs where \(\Theta\) is a collection of scalar signals over the group. We show in Appendix B that we can extend the definition to steerable \(G\)-CNNs where \(\Theta\) can be an arbitrary field [9]. In the equation above, the \(G\)-TC is computed for a pair of group elements \(g_{1},g_{2}\). In practice, we sweep over all pairs in the group. Appendix C illustrates the triple correlation on three concrete groups. Importantly, the \(G\)-triple-correlation is invariant to the action of the group \(G\) on the signal \(\Theta\) [28], as shown below.

**Proposition 1**.: _Consider a signal \(\Theta:G\to\mathbb{R}^{c}\). The \(G\)-Triple-Correlation \(\tau\) is \(G\)-invariant:_

\[\tau_{L_{g}[\Theta]}=\tau_{\Theta},\quad\text{for all }g\in G, \tag{7}\]

_where \(L_{g}\) denotes an action of a transformation \(g\) on the signal \(\Theta\)._

The proof is recalled in Appendix D. We propose to achieve \(G\)-invariance in a \(G\)-CNN by applying the \(G\)-Triple-Correlation (\(G\)-TC) to the output \(\Theta\) of a \(G\)-convolutional layer. Specifically, we apply the \(G\)-TC to each real scalar-valued signal \(\Theta_{k}\) that comes from the \(G\)-convolution of filter \(\phi_{k}\), for \(k\in\{1,...,K\}\). We omit the subscript \(k\) for notational clarity. In practice, we use the triple correlation on discretized groups, where the integral is replaced with a summation:

\[T_{\Theta}(g_{1},g_{2})=\sum_{g\in G}\Theta(g)\Theta(gg_{1})\Theta(gg_{2}), \tag{8}\]

for \(\Theta\) a scalar-valued function defined over \(G\). While it seems that the layer computes \(T_{\Theta}(g_{1},g_{2})\) for all pairs of group elements \((g_{1},g_{2})\), we note that the real scalars \(\Theta(gg_{1})\) and \(\Theta(gg_{2})\) commute, so that only half of the pairs are required. We will see that we can reduce the number of computations further when the group \(G\) possesses additional properties such as commutativity.
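A minimal numpy sketch of equation (8), driven only by a Cayley table (again the assumed example \(C4\)) and exploiting the symmetry \(T_{\Theta}(g_{1},g_{2})=T_{\Theta}(g_{2},g_{1})\), with a numerical check of Proposition 1:

```python
import numpy as np

n = 4  # C4, with Cayley table cayley[i, j] = (i + j) % n
cayley = np.array([[(i + j) % n for j in range(n)] for i in range(n)])

def g_triple_correlation(theta, cayley):
    """Equation (8): T(g1, g2) = sum_g theta(g) * theta(g*g1) * theta(g*g2)."""
    n = len(theta)
    T = np.empty((n, n))
    for g1 in range(n):
        for g2 in range(g1, n):  # T(g1, g2) = T(g2, g1): only half the pairs needed
            T[g1, g2] = T[g2, g1] = sum(
                theta[g] * theta[cayley[g, g1]] * theta[cayley[g, g2]]
                for g in range(n)
            )
    return T

rng = np.random.default_rng(0)
theta = rng.normal(size=n)  # output of one G-conv filter: a signal on G

# Proposition 1: the G-TC is invariant to the group acting on theta.
# On C4 this action is the left translation theta(g) -> theta(h^{-1} g),
# i.e. a cyclic shift of the array.
for h in range(n):
    assert np.allclose(g_triple_correlation(theta, cayley),
                       g_triple_correlation(np.roll(theta, h), cayley))
print(g_triple_correlation(theta, cayley))
```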
We note that the triple correlation is the spatial dual of the _bispectrum_, which has demonstrated robustness properties in the context of deep learning with bispectral neural networks [42]. The goal of bispectral neural networks is to learn an unknown group \(G\) from data. The bispectral layer proposed in [42] assumes an MLP architecture. Our work is the first to generalize the use of bispectral invariants to convolutional networks. Here, we assume that the group \(G\) is known in advance, and exploit the theoretical properties of the triple correlation to achieve robust invariance. One path for future extension may be to combine our approach with the learning approach of [42], to parameterize and learn the group \(G\) that defines a \(G\)-Equivariant and \(G\)-Invariant layer.

### Selective Invariance through Completeness

We show here that the proposed \(G\)-triple-correlation is guaranteed to preserve all information aside from any equivariant component due to the group action on the input domain. This crucial property distinguishes our proposed layer from standard \(G\)-Pooling methods, which collapse signals and lose crucial information about the input (Figure 1). In contrast with standard, excessively invariant \(G\)-pooling methods, we show here that our \(G\)-TC layer is instead _selectively_ \(G\)-invariant thanks to its _completeness_ property [54; 29; 31], defined here:

**Proposition 2**.: _Every integrable function with compact support on \(G\) is completely identified--up to group action--by its \(G\)-triple-correlation. We say that the \(G\)-triple-correlation is complete._

Mathematically, an operator \(\mathcal{T}\) is complete for a group action \(L\) if the following holds: for every pair of signals \(\Theta_{1}\) and \(\Theta_{2}\), if \(\mathcal{T}(\Theta_{1})=\mathcal{T}(\Theta_{2})\), then the signals are equal up to the group action, that is: there exists a group element \(h\) such that \(\Theta_{2}=L_{h}[\Theta_{1}]\).

The proof of the completeness of the \(G\)-triple-correlation is only valid under a precise set of assumptions [32] (Theorem 2). As we seek to integrate the \(G\)-triple-correlation to enhance robustness in neural networks, we investigate here the scope of these assumptions. First, the assumptions are not restrictive on the type of groups \(G\) that can be used. Indeed, the proof only requires the groups to be Tatsuuma duality groups, and the groups of interest in this paper meet this condition. This includes all locally compact commutative groups and all compact groups, including the groups of rotations, the special orthogonal groups \(SO(n)\), and the groups of translations and rotations, the special euclidean groups \(SE(n)\). Second, the assumptions are not restrictive on the types of signals. Indeed, the signal only needs to be such that all of its Fourier transform coefficients are invertible. For example, when the Fourier transform coefficients are scalar values, this means that we require these scalars to be non-zero. In practical applications on real image data with noise, there is probability 0 that the Fourier transform coefficients of the input signal will be exactly 0 (scalar case) or non-invertible (matrix case). This is because the group of invertible matrices is dense in the space of matrices. Therefore, this condition is also verified in the applications of interest, and more generally we expect the property of completeness of our \(G\)-TC layer to hold in practical neural network applications.
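The practical difference between a lossy and a complete invariant shows up already in a toy contrast, with values contrived for illustration: two signals on \(C4\) that share the same maximum, and are not cyclic shifts of one another, are indistinguishable to Max \(G\)-Pooling but separated by the \(G\)-TC:

```python
import numpy as np

n = 4
cayley = np.array([[(i + j) % n for j in range(n)] for i in range(n)])

def gtc(theta):
    # Equation (8) on C4, written densely.
    return np.array([[sum(theta[g] * theta[cayley[g, g1]] * theta[cayley[g, g2]]
                          for g in range(n))
                      for g2 in range(n)] for g1 in range(n)])

theta_a = np.array([3.0, 1.0, 2.0, 0.0])
theta_b = np.array([3.0, 2.0, 1.0, 0.0])  # same multiset of values, not a cyclic shift of theta_a

# Max G-pooling cannot tell them apart...
assert theta_a.max() == theta_b.max()
# ...but the complete G-TC separates them,
assert not np.allclose(gtc(theta_a), gtc(theta_b))
# while agreeing on genuinely equivalent (group-shifted) signals.
assert np.allclose(gtc(theta_a), gtc(np.roll(theta_a, 1)))
```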
### Uniqueness

The above two subsections prove that our \(G\)-Triple Correlation layer is selectively \(G\)-invariant. Here, we note that our proposed layer is the lowest-degree polynomial layer that can achieve this goal. In invariant theory, it is observed that the \(G\)-Triple Correlation is the _only_ third-order polynomial invariant (up to change of basis) [46]. Moreover, it is the lowest-degree polynomial invariant that is also complete. It thus provides a unique and minimal-complexity solution to the problem of robust invariance within this function class.

### Computational Complexity

The \(G\)-Triple Correlation enjoys some symmetries that we can leverage to avoid computing it for each pair of group elements (which would represent \(|G|^{2}\) computations), hence making the feedforward pass more efficient. We summarize these symmetries here.

**Proposition 3**.: _Consider two transformations \(g_{1},g_{2}\in G\). The \(G\)-Triple Correlation of a real signal \(\Theta\) has the following symmetry:_

\[T_{\Theta}(g_{1},g_{2})=T_{\Theta}(g_{2},g_{1}).\]

_If \(G\) is commutative, the \(G\)-Triple Correlation of a real signal has the following additional symmetries:_

\[T_{\Theta}(g_{1},g_{2})=T_{\Theta}(g_{1}^{-1},g_{2}g_{1}^{-1})=T_{\Theta}(g_{2}g_{1}^{-1},g_{1}^{-1})=T_{\Theta}(g_{2}^{-1},g_{1}g_{2}^{-1})=T_{\Theta}(g_{1}g_{2}^{-1},g_{2}^{-1}).\]

The proofs are given in [39] for the group of translations. We extend them to any locally compact group \(G\) in Appendix E. In practice, these symmetries mean that even if there are theoretically \(|G|^{2}\) computations, this number immediately reduces to \(\frac{|G|(|G|+1)}{2}\), and reduces further if the group \(G\) of interest is commutative. In addition, more subtle symmetries can be exploited to reduce the computational cost to a linear \(|G|+1\) for the case of one-dimensional cyclic groups [34], by considering the spectral dual of the \(G\)-TC: the bispectrum. We provide a computational approach to extend this reduction to more general, non-commutative groups in Appendix F. The theory supporting our approach has yet to be extended to this general case. Thus, there is an opportunity for new theoretical work that further increases the computational efficiency of the \(G\)-Triple-Correlation.

## 4 Related Work

**The Triple Correlation.** The triple correlation has a long history in signal processing [48; 5; 39]. It originally emerged from the study of the higher-order statistics of non-Gaussian random processes, but its invariance properties with respect to translation have been leveraged in texture statistics [53] and data analysis in neuroscience [13], as well as early multi-layer perceptron architectures in the 1990s [12; 33]. The triple correlation was extended to groups beyond translations in [32], and its completeness with respect to general compact groups was established in [30]. To the best of our knowledge, the triple correlation has not previously been introduced as a method for achieving invariance in convolutional networks for either translation or more general groups.

**Pooling in CNNs.** Pooling in CNNs typically has the dual objective of coarse-graining and achieving local invariance. While invariance is one desideratum for the pooling mechanism, the machinery of group theory is rarely employed in the computation of the invariant map itself. As noted in the introduction, max and average pooling are by far the most common methods employed in CNNs and \(G\)-CNNs.
However, some approaches beyond strict max and average pooling have been explored. Soft-pooling addresses the lack of smoothness of the max function and uses instead a smooth approximation of it, with methods including polynomial pooling [49] and learned-norm pooling [22], among many others [15; 14; 43; 44; 3; 45; 11; 35]. Stochastic pooling [57] reduces overfitting in CNNs by introducing randomness in the pooling, yielding mixed-pooling [55] and max pooling dropout [51], among others [47; 58; 21].

**Geometrically-Aware Pooling.** Some approaches have been adopted to encode spatial or structural information about the feature maps, including spatial pyramid pooling [24], part-based pooling [59], geometric \(L_{p}\) pooling [16], or pooling regions defined as concentric circles [41]. In all of these cases, the pooling computation is still defined by a max. These geometric pooling approaches are reminiscent of the Max \(G\)-Pooling for \(G\)-CNNs introduced by [8] and defined in Section 2.2.2, without the explicit use of group theory.

**Higher-Order Pooling.** Average pooling computes first-order statistics (the mean) by pooling from each channel separately and does not account for the interaction between different feature maps coming from different channels. Thus, second-order pooling mechanisms have been proposed to consider correlations between features across channels [38; 19], but higher orders are not investigated. Our approach computes a third-order polynomial invariant; however, it looks for higher-order correlations within the group rather than across channels and thus treats channels separately. In principle, these approaches could be combined.

## 5 Results

**Implementation.** We implement the \(G\)-TC Layer for arbitrary discretized groups with an efficient implementation built on top of the ESCNN library [7; 50], which provides a general implementation of \(E(n)\)-Equivariant Steerable Convolutional Layers. The method is flexibly defined, requiring the user only to provide a (Cayley) table that defines the group's product structure. We provide a link to the codebase in the supplementary materials. Here, we demonstrate the approach on the groups \(SO(2)\), \(O(2)\), \(SO(3)\), and \(O(3)\), discretized as the groups \(C_{n}\) (cyclic), \(D_{n}\) (dihedral), \(O\) (chiral octahedral), and \(O_{h}\) (full octahedral), respectively. ESCNN provides implementations for \(G\)-Conv layers on all of these \(E(n)\) subgroups.
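To sketch how such a layer slots into a network, the module below, our own illustrative PyTorch code rather than the released codebase, consumes the output of a \(G\)-conv block as a tensor of shape (batch, \(K\), \(|G|\)) and needs only the Cayley table. For brevity it returns the full \(|G|\times|G|\) matrix per channel, although by the symmetry of Proposition 3 only the upper triangle, \(\frac{|G|(|G|+1)}{2}\) entries, is needed:

```python
import torch

class GTCLayer(torch.nn.Module):
    """Sketch of a G-triple-correlation layer for a discretized group.

    Takes the G-conv block output of shape (batch, K, |G|) and returns
    (batch, K * |G| * |G|) invariant features. Names and the surrounding
    architecture are illustrative, not the authors' implementation.
    """

    def __init__(self, cayley_table):
        super().__init__()
        # cayley_table[g, h] = index of the product g*h; shape (|G|, |G|).
        self.register_buffer("cayley", torch.as_tensor(cayley_table, dtype=torch.long))

    def forward(self, theta):
        # shifted[b, k, g, g1] = theta(g * g1), via advanced indexing on the last dim.
        shifted = theta[:, :, self.cayley]
        # Equation (8): T(g1, g2) = sum_g theta(g) * theta(g*g1) * theta(g*g2).
        T = torch.einsum("bkg,bkgi,bkgj->bkij", theta, shifted, shifted)
        return T.flatten(start_dim=1)

# Toy usage on C4 with K = 2 filter channels:
n = 4
cayley = [[(i + j) % n for j in range(n)] for i in range(n)]
layer = GTCLayer(cayley)
theta = torch.randn(8, 2, n)
features = layer(theta)                                          # shape (8, 32)
assert torch.allclose(features, layer(theta.roll(1, dims=-1)))   # G-invariance on C4
```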
**Experimental Design.** We examine the performance of the \(G\)-TC over Max \(G\)-Pooling in \(G\)-Equivariant Networks defined on these groups and trained on \(G\)-Invariant classification tasks. For the groups \(SO(2)\) and \(O(2)\) acting on \(\mathbb{R}^{2}\), we use the MNIST dataset of handwritten digits [37], and for the groups \(SO(3)\) and \(O(3)\) acting on \(\mathbb{R}^{3}\), we use the voxelized ModelNet10 database of 3D objects [52]. We generate \(G\)-MNIST and \(G\)-ModelNet10 datasets by transforming the domain of each signal in the dataset by a randomly sampled group element \(g\in G\). In these experiments, we train pairs of models in parameter-matched architectures, in which only the \(G\)-Pooling method differs. Note that the purpose of these experiments is to compare _differences in performance_ between models using Max \(G\)-Pooling vs. the \(G\)-TC--not to achieve SOTA accuracy. Thus, we do not optimize the models for overall performance. Rather, we fix a simple architecture and set of hyperparameters and examine the change in performance that arises from replacing Max \(G\)-Pooling with the \(G\)-TC Layer.

To isolate the effects of the \(G\)-Pooling method, all models are comprised of a single \(G\)-Conv block followed by \(G\)-Pooling (Max or TC) and an MLP classifier. Notably, while many \(G\)-Conv models in the literature use the semi-direct product of \(G\) with \(\mathbb{R}^{n}\)--i.e., incorporating the actions of the group \(G\) into a standard translational convolutional model--here, we perform only _pure_ \(G\)-Conv, without translation. Thus, we use filters the same size as the input in all models. The \(G\)-Conv block is comprised of a \(G\)-Conv layer, a batch norm layer, and an optional nonlinearity. For the Max \(G\)-Pool model, ReLU is used as the nonlinearity. Given the third-order nonlinearity of the TC, we omit the nonlinearity in the \(G\)-Conv block in the TC model. The \(G\)-TC layer increases the dimensionality of the output of the \(G\)-Conv block; consequently, the input dimension of the first layer of the MLP is larger, and the weight matrix contains more parameters than for the Max \(G\)-Pool model. To compensate for this, we increase the dimension of the output of the first MLP layer in the Max model, to match the overall number of parameters.

**Evaluation Methods.** We evaluate the models in two ways. First, we examine differences in the raw _classification accuracy_ obtained by replacing Max \(G\)-Pooling with the \(G\)-TC Layer. Second, we assess the _completeness_ of the model by optimizing "metameric" stimuli for the trained models--inputs that yield the same pre-classifier representation as a target input, but are perceptually distinct. The completeness evaluation is inspired by a recent paper that incorporates the bispectrum--the spectral dual of the triple correlation--into a neural network architecture trained to yield \(G\)-invariant representations for \(G\)-transformed data [42]. In this work, two inputs are considered "perceptually distinct" if they are not in the same group orbit. They find that all inputs optimized to yield the same representation in the bispectral model are identical up to the group action. By contrast, many metameric stimuli can be found for \(E(2)\)-CNN [50], a \(G\)-Equivariant CNN that uses Max \(G\)-Pooling. Given the duality of the bispectrum and the triple correlation, we expect to observe similar "completeness" for \(G\)-CNNs using the \(G\)-TC Layer.

### Classification Performance

We train \(G\)-TC and Max \(G\)-Pooling models on the \(SO(2)\)- and \(O(2)\)-MNIST and chiral (\(O\)) and full (\(O_{h}\)) octahedral voxelized ModelNet10 training datasets and examine their classification performance on the test set. Full training details, including hyperparameters, are provided in Appendix G. Table 1 shows the test classification accuracy obtained by the Max-\(G\) and \(G\)-TC architectures on each dataset. Accuracy is averaged over four random seeds, with confidence intervals showing standard deviation. We find that the model equipped with the \(G\)-TC obtains a significant improvement in overall classification performance--an increase of 1.3, 0.89, 1.84, and 3.49 percentage points on \(SO(2)\)-MNIST, \(O(2)\)-MNIST, \(O\)-ModelNet10, and \(O_{h}\)-ModelNet10, respectively.
Table 1: **Classification Accuracy & Parameter Counts for Models Trained on \(G\)-MNIST and \(G\)-ModelNet10**. Confidence intervals reflect standard deviation over four random seeds per model. The model equipped with the \(G\)-TC rather than Max \(G\)-Pooling obtains significantly improved classification performance on all datasets.

| Method | \(C8\)-CNN on \(SO(2)\)-MNIST: Accuracy | Parameters | \(D16\)-CNN on \(O(2)\)-MNIST: Accuracy | Parameters |
| --- | --- | --- | --- | --- |
| Max \(G\)-Pool | 95.23% \(\pm\) 0.15 | 32,915 | 92.17% \(\pm\) 0.23 | 224,470 |
| \(G\)-TC | **96.53% \(\pm\) 0.16** | 35,218 | **93.06% \(\pm\) 0.09** | 221,074 |

| Method | \(O\)-CNN on \(O\)-ModelNet10: Accuracy | Parameters | \(O_{h}\)-CNN on \(O_{h}\)-ModelNet10: Accuracy | Parameters |
| --- | --- | --- | --- | --- |
| Max \(G\)-Pool | 72.17% \(\pm\) 0.95 | 500,198 | 71.73% \(\pm\) 0.23 | 1,826,978 |
| \(G\)-TC | **74.01% \(\pm\) 0.48** | 472,066 | **75.22% \(\pm\) 0.62** | 1,817,602 |

### Completeness

Following the analysis of [42], we next evaluate the completeness of the models trained on the \(G\)-MNIST dataset. Figure 2 shows inputs optimized to yield the same pre-classifier representation as a set of target images. In line with similar findings from [42], we find that all inputs yielding identical representations and classifications in the \(G\)-TC model are within the same group orbit. Notably, the optimized images are identical to the targets, _up to the group action_. This reflects exactly the completeness of the \(G\)-TC: the \(G\)-TC preserves all signal structure up to the group action. Thus, any rotated version of a target will yield the same \(G\)-TC Layer output. By contrast, many "metameric" misclassified stimuli can be found for the Max \(G\)-Pool model, a consequence of the lossiness of this pooling operation.

## 6 Discussion

In this work, we introduced a new method for achieving robust group-invariance in group-equivariant convolutional neural networks. Our approach, the \(G\)_-TC Layer_, is built on the _triple correlation_ on groups, the lowest-degree polynomial that is a complete group-invariant map [32, 46]. Our method inherits its completeness, which provides measurable gains in robustness and classification performance as compared to the ubiquitous Max \(G\)-Pooling. This improved robustness comes at a cost: the \(G\)-TC Layer increases the dimension of the output of a \(G\)-Convolutional layer from \(|G|\) to \(\frac{|G|(|G|+1)}{2}\). While the dimension of the discretized groups used in \(G\)-CNNs is typically small, this increase in computational cost may nonetheless deter practitioners from its use. However, there is a path to further reduction in computational complexity, provided that we consider its spectral dual: the bispectrum. In [34], an algorithm is provided that exploits more subtle symmetries of the bispectrum to demonstrate that only \(|G|+1\) terms are needed to provide a complete signature of signal structure, for the one-dimensional cyclic group. In Appendix F, we extend the computational approach from [34] to more general groups and provide a path for substantial reduction in the complexity of the \(G\)-TC Layer, thus expanding its practical utility. Novel mathematical work that grounds our proposed computations in group theory is required to quantify the exact complexity reduction that we provide.
As geometric deep learning is applied to increasingly complex data from the natural sciences [18, 2, 27], we expect robustness to play a critical role in its success. Our work is the first to introduce the general group-invariant triple correlation as a new computational primitive for geometric deep learning. We expect the mathematical foundations and experimental successes that we present here to provide a basis for rethinking the problems of invariance and robustness in deep learning architectures.

Figure 2: **Optimized Model Metamers.** For each model, 100 targets from the MNIST dataset were randomly selected. 100 inputs were randomly initialized and optimized to yield identical pre-classifier model representations. All inputs optimized for the \(G\)-TC model converge to the orbit of the target. By contrast, metamers that bear no semantic relationship to the targets are found for every target in the Max \(G\)-Pooling model.

## Acknowledgments

The authors thank Christopher Hillar, Bruno Olshausen, and Christian Shewmake for many conversations on the bispectrum and triple correlation, which have helped shape the ideas in this work. Thanks also to the members of the UCSB Geometric Intelligence Lab and to four anonymous reviewers for feedback on earlier versions. Lastly, the authors acknowledge financial support from the UC Noyce Initiative: UC Partnerships in Computational Transformation, NIH R01 1R01GM144965-01, and NSF Grant 2134241.
2305.13687
Flexible Bayesian Quantile Analysis of Residential Rental Rates
This article develops a random effects quantile regression model for panel data that allows for increased distributional flexibility, multivariate heterogeneity, and time-invariant covariates in situations where mean regression may be unsuitable. Our approach is Bayesian and builds upon the generalized asymmetric Laplace distribution to decouple the modeling of skewness from the quantile parameter. We derive an efficient simulation-based estimation algorithm, demonstrate its properties and performance in targeted simulation studies, and employ it in the computation of marginal likelihoods to enable formal Bayesian model comparisons. The methodology is applied in a study of U.S. residential rental rates following the Global Financial Crisis. Our empirical results provide interesting insights on the interaction between rents and economic, demographic and policy variables, weigh in on key modeling features, and overwhelmingly support the additional flexibility at nearly all quantiles and across several sub-samples. The practical differences that arise as a result of allowing for flexible modeling can be nontrivial, especially for quantiles away from the median.
Ivan Jeliazkov, Shubham Karnawat, Mohammad Arshad Rahman, Angela Vossmeyer
2023-05-23T04:49:12Z
http://arxiv.org/abs/2305.13687v2
# Flexible Bayesian Quantile Analysis of Residential Rental Rates

###### Abstract

This article develops a random effects quantile regression model for panel data that allows for increased distributional flexibility, multivariate heterogeneity, and time-invariant covariates in situations where mean regression may be unsuitable. Our approach is Bayesian and builds upon the generalized asymmetric Laplace distribution to decouple the modeling of skewness from the quantile parameter. We derive an efficient simulation-based estimation algorithm, demonstrate its properties and performance in targeted simulation studies, and employ it in the computation of marginal likelihoods to enable formal Bayesian model comparisons. The methodology is applied in a study of U.S. residential rental rates following the Global Financial Crisis. Our empirical results provide interesting insights on the interaction between rents and economic, demographic and policy variables, weigh in on key modeling features, and overwhelmingly support the additional flexibility at nearly all quantiles and across several sub-samples. The practical differences that arise as a result of allowing for flexible modeling can be nontrivial, especially for quantiles away from the median.

Keywords: Bayesian inference, generalized asymmetric Laplace distribution, Markov chain Monte Carlo, panel data, rental markets.

## 1 Introduction

This paper aims to provide complementary methodological and empirical contributions to the quantile regression literature. On the methodological side, we develop a flexible Bayesian approach to random effects quantile regression based on a generalization of the asymmetric Laplace distribution, specify an efficient Markov chain Monte Carlo (MCMC) estimation algorithm, and present methods for formal model comparison of various nested and non-nested models that also enable us to assess the importance of flexible modeling. Our methods are readily motivated by the econometric challenges of studying U.S. residential rental rates and their dependence on unemployment and mortgage policies following the Global Financial Crisis. Our investigation deals with key features of the data, including considerable zip-code-level heterogeneity and skewness in the distribution of rents. The separation of modeling features that are practically relevant from those that are not warranted in this context is handled by model comparison.

Koenker and Bassett (1978) introduced quantile regression as a minimization problem involving an asymmetrically weighted linear loss function, but subsequent work has noted the duality between that approach and modeling through a likelihood function built on the asymmetric Laplace (AL) distribution (Koenker and Machado, 1999; Yu and Moyeed, 2001). The latter approach becomes very potent when the AL distribution is expressed as a mixture of normal and exponential distributions (Kozumi and Kobayashi, 2011). The mixture formulation permits estimation by simple, yet powerful, MCMC algorithms, and has enabled extensions of the quantile methodology to a variety of other settings including censored data (Kozumi and Kobayashi, 2011; Benoit and Poel, 2010; Ojha and Rahman, 2021), ordinal outcomes (Rahman, 2016; Alhamzawi, 2016; Maheshwari and Rahman, 2023), linear mixed models (Luo et al., 2012), and panels of binary (Rahman and Vossmeyer, 2019; Bresson et al., 2021), ordinal (Alhamzawi and Ali, 2018), or dynamic censored data (Kobayashi and Kozumi, 2012). Recent work by Goncalves et al. (2022) considered extensions to the case of dynamic quantile linear models.
While the application of the AL distribution has unlocked a plethora of new research opportunities, the AL distribution itself is not without its limitations. For instance, the skewness parameter is completely determined once a quantile is chosen, and the mode of the distribution is always fixed at the value of the location parameter. These limitations can be circumvented by introducing a shape parameter into the mean of the normal kernel in the AL mixture representation, leading to the generalized asymmetric Laplace (GAL) distribution (Yan and Kottas, 2017; Rahman and Karnawat, 2019). We extend GAL modeling to the random effects panel setting by proposing an efficient MCMC sampler which offers a variety of algorithmic improvements through suitable transformations of the mixture variables, block sampling of scale and shape parameters, and block sampling of the individual-specific and common effect parameters (cf. Nascimento and Goncalves, 2021). These changes eliminate the problem of high autocorrelation in the MCMC draws, but are also applicable to MCMC estimation of models based on the simpler AL distribution (cf. Luo et al., 2012), while also allowing for correlated random effects. For both the GAL and AL panel data models, we adapt the methods of Chib (1995) and Chib and Jeliazkov (2001) to enable model comparison through marginal likelihoods, which, with few exceptions (e.g., Kobayashi and Kozumi, 2012; Maheshwari and Rahman, 2023), is broadly lacking in the quantile literature. Several simulation studies carefully examine the properties and practical appeal of the proposed techniques.

The empirical contribution of the paper involves the study of U.S. residential rental rates during the recovery period following the Global Financial Crisis. We construct a novel data set that includes median rental rates in \(14,533\) zip codes from \(2010\) to \(2016\), as well as zip-code level economic, demographic, mortgage, and tax policy controls. Our methodology is particularly appealing in this setting because housing prices and rents are heavily skewed and heterogeneous across regions. For instance, from \(2010\) to \(2016\), the Cleveland MSA region's change in "All Transaction House Price Index" was about \(8.42\), whereas the San Francisco MSA region's change was about \(156\).1 Skewness, along with heterogeneity in economic outcomes, has been identified as an important driver of public policy and political economy considerations (Benhabib and Bisin, 2018).

Footnote 1: Based on data from the FRED database at the Federal Reserve Bank of St. Louis.

The data reveal a positive impact of unemployment on residential rental rates, as uncertain job prospects reduce the willingness and ability of households to commit to homeownership and instead shift demand towards rental units. We also find that home mortgage deductions decrease rental prices by making homeownership more attractive. This finding is particularly relevant in the context of the Tax Cuts and Jobs Act, which was passed in \(2017\). The law lowered mortgage deductions, suggesting that one consequence of the policy change is an expected rise in rents. Lastly, model comparisons across many quantiles and samples reveal that the data overwhelmingly support the more flexible GAL modeling framework.

The remainder of the paper is organized as follows. In Section 2, we present the proposed modeling, estimation and model comparison framework.
This section also presents improved algorithms for estimating AL-based models. Section 3 illustrates the proposed algorithms in multiple simulation studies. Section 4 describes the data, presents our rental rates application and discusses the results, while Section 5 concludes.

## 2 Methodology

This section introduces our proposed model, discusses the challenges associated with its estimation, and presents the MCMC estimation algorithm and model comparison framework. The section also offers an improved algorithm for estimating AL-based models.

### The Flexible Random Effects Quantile (FREQ) Model

We focus on a panel data model which takes the form

\[y_{it}=x^{\prime}_{it}\beta_{p_{0}}+z^{\prime}_{it}\alpha_{i}+\varepsilon_{it},\qquad i=1,\ldots,n,\quad t=1,\ldots,T_{i}, \tag{1}\]

where \(y_{it}\) denotes the \(t\)-th response on the \(i\)-th unit, \(x_{it}\) is a \(k\) vector of covariates, \(\beta_{p_{0}}\) is a \(k\) vector of common parameters at the \(p_{0}\)-th quantile (henceforth, simply \(\beta\)), \(z_{it}\) is an \(l\) vector of variables with \(z_{it}\subseteq x_{it}\), and \(\alpha_{i}\) is an \(l\) vector of subject-specific random effects that induces dependence between observations on the same individual.2

Footnote 2: An unfortunate rift in terminology has persisted between statistics and econometrics in the panel (longitudinal) context. In statistics, \(\beta\) and \(\{\alpha_{i}\}\) are called fixed and random effects, respectively, because the former do not vary with \(i\), whereas the latter are subject-specific. In econometrics, these terms are used to distinguish between alternative ways of dealing with \(\{\alpha_{i}\}\): fixed effects estimators remove the heterogeneity (if possible) by data transformations such as mean- or first-differencing, whereas random effects estimators model the \(\{\alpha_{i}\}\) explicitly through a distribution.

Although not immediately obvious from the notation, the setup is rather general and can capture dynamics, unknown covariate functions, and correlated random effects, depending on what is included in \(x_{it}\) (and potentially also in \(z_{it}\subseteq x_{it}\)). In particular, dynamic modeling can be pursued by including lags of \(y_{it}\) in \(x_{it}\) and ensuring that the lag coefficients satisfy stationarity. Flexible functional modeling for some covariate \(s\) can be implemented through a set of basis functions \(\mathcal{B}=\{b_{1},\ldots,b_{m}\}\), e.g., B-splines, natural splines, truncated power series, wavelets, etc. (Ruppert et al., 2003), so that \(f(s)=\sum_{j=1}^{m}b_{j}(s)\delta_{j}\), in which case \(x_{it}\) includes \((b_{1}(s_{i}),\ldots,b_{m}(s_{i}))^{\prime}\), while \((\delta_{1},\ldots,\delta_{m})^{\prime}\) becomes part of the regression parameter vector \(\beta\). In addition, correlated random effects models, where the heterogeneity can be correlated with certain observed covariates, are handled by interacting those covariates with \(z_{it}\) and including the result in \(x_{it}\) (see, e.g., Chamberlain, 1984; Mundlak, 1978; Chib and Jeliazkov, 2006). Random effects models are also indispensable in settings with multivariate heterogeneity or time-invariant covariates because the data transformations (i.e., mean- or first-differencing) underlying "fixed effects" estimators in econometrics (i) do not remove slope heterogeneity and (ii) wipe out any time-invariant covariates.
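As a hedged illustration of the basis expansion \(f(s)=\sum_{j=1}^{m}b_{j}(s)\delta_{j}\), the snippet below builds a truncated power series basis, one of the options listed above; the cubic degree, knot locations, and coefficients are our own arbitrary choices:

```python
import numpy as np

def truncated_power_basis(s, knots, degree=3):
    """Columns [1, s, ..., s^degree, (s-k1)_+^degree, ..., (s-kq)_+^degree].

    One concrete choice of basis B = {b_1, ..., b_m} for the expansion
    f(s) = sum_j b_j(s) delta_j; knots and degree are illustrative.
    """
    s = np.asarray(s, dtype=float)
    poly = np.column_stack([s**d for d in range(degree + 1)])
    trunc = np.column_stack([np.maximum(s - k, 0.0) ** degree for k in knots])
    return np.hstack([poly, trunc])

s = np.linspace(0.0, 1.0, 200)
B = truncated_power_basis(s, knots=[0.25, 0.5, 0.75])          # shape (200, 7)
# These columns would be appended to x_it, and the coefficients delta
# become part of the regression parameter vector beta.
delta = np.array([0.5, -1.0, 2.0, -1.5, 4.0, -6.0, 3.0])       # arbitrary coefficients
f_s = B @ delta
```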
We parameterize the model in Equation (1) by letting \(\varepsilon_{it}\stackrel{{iid}}{{\sim}}\text{GAL}(0,\sigma,p_{0},\gamma)\), using the quantile-fixed GAL distribution (Yan and Kottas, 2017; Rahman and Karnawat, 2019)--a generalization stemming from the mixture representation of the AL distribution--to decouple the modeling of skewness from the quantile parameter. A variable \(s\) is said to follow a quantile-fixed GAL distribution, i.e., \(s\sim\text{GAL}(\mu,\sigma,p_{0},\gamma)\), where \(\mu\), \(\sigma\), \(p_{0}\), and \(\gamma\) represent the location, scale, quantile, and skewness parameters, respectively, if it has density

\[\begin{split} f_{GAL}(s|\mu,\sigma,p_{0},\gamma)&=\frac{2p(1-p)}{\sigma}\Bigg(\bigg[\Phi\Big(-s^{*}\,\frac{p_{\gamma_{+}}}{|\gamma|}+\frac{p_{\gamma_{-}}}{p_{\gamma_{+}}}|\gamma|\Big)-\Phi\Big(\frac{p_{\gamma_{-}}}{p_{\gamma_{+}}}|\gamma|\Big)\bigg]\\ &\quad\times\exp\bigg\{-s^{*}\,p_{\gamma_{-}}+\frac{\gamma^{2}}{2}\Big(\frac{p_{\gamma_{-}}}{p_{\gamma_{+}}}\Big)^{2}\bigg\}I\Big(\frac{s^{*}}{\gamma}>0\Big)\\ &\quad+\Phi\Big(-|\gamma|+s^{*}\,\frac{p_{\gamma_{+}}}{|\gamma|}I\Big(\frac{s^{*}}{\gamma}>0\Big)\Big)\exp\bigg\{-s^{*}\,p_{\gamma_{+}}+\frac{\gamma^{2}}{2}\bigg\}\Bigg),\end{split} \tag{2}\]

where \(s^{*}=\frac{s-\mu}{\sigma}\), \(p\equiv p(\gamma,p_{0})=I(\gamma<0)+[p_{0}-I(\gamma<0)]/g(\gamma)\), \(p_{\gamma_{+}}=p-I(\gamma>0)\) and \(p_{\gamma_{-}}=p-I(\gamma<0)\), \(g(\gamma)=2\Phi(-|\gamma|)\exp(\gamma^{2}/2)\), and \(\gamma\in(L,U)\), where \(L\) is the negative square root of \(g(\gamma)=1-p_{0}\) and \(U\) is the positive square root of \(g(\gamma)=p_{0}\) (see Section 2 in Rahman and Karnawat (2019) for more details). The term "quantile-fixed" refers to the fact that the density in Equation (2) satisfies \(\int_{-\infty}^{\mu}f_{GAL}(\varepsilon_{it}|\mu,\sigma,p_{0},\gamma)d\varepsilon_{it}=p_{0}\). The special case of the AL density results when \(\gamma=0\); this is more clearly seen from the mixture representation presented in Equation (4).

Figure 1 offers a visualization of the differences between the quantile-fixed GAL and AL densities. The figure shows three different quantiles when \(\sigma=1\), the standard case. We observe that the GAL distribution, unlike the AL distribution, allows the mode to vary rather than being fixed at \(\mu=0\) at all quantiles. Additionally, at the median \(p_{0}=0.50\), the GAL distribution can be positively (\(\gamma<0\)) or negatively (\(\gamma>0\)) skewed and can have tails that are heavier or narrower than the AL distribution. These characteristics make the GAL significantly more flexible than the AL, but the value of the extra flexibility is an application-specific empirical question. Because of the additional flexibility that the GAL distribution offers over the AL distribution, we refer to the model based on the former as the Flexible Random Effects Quantile (FREQ) model, and the model based on the latter as the Random Effects Quantile (REQ) model.

The distributional assumption on the error term, \(\varepsilon_{it}\stackrel{{iid}}{{\sim}}\text{GAL}(0,\sigma,p_{0},\gamma)\), implies that \(y_{it}|\alpha_{i}\stackrel{{ind}}{{\sim}}\text{GAL}(x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i},\sigma,p_{0},\gamma)\) for \(i=1,\ldots,n\) and \(t=1,\ldots,T_{i}\).
Assuming a density \(f(\alpha|\varphi^{2})\) for the random effects, the complete data likelihood can be expressed as

\[f(y,\alpha|\beta,\sigma,\gamma,\varphi^{2})=\prod_{i=1}^{n}\bigg[\bigg\{\prod_{t=1}^{T_{i}}f_{GAL}\left(y_{it}|x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i},\sigma,p_{0},\gamma\right)\bigg\}f(\alpha_{i}|\varphi^{2})\bigg], \tag{3}\]

where \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) and \(y=(y_{1},\ldots,y_{n})\), with each \(y_{i}=(y_{i1},\ldots,y_{iT_{i}})^{\prime}\) for \(i=1,\ldots,n\). The density \(f(\alpha_{i}|\varphi^{2})\) can be any suitable distribution, but is typically assumed normal (Luo et al., 2012); here we let \(\alpha_{i}|\varphi^{2}\stackrel{{iid}}{{\sim}}N(0_{l},\varphi^{2}I_{l})\) for \(i=1,\ldots,n\). The complete data likelihood in Equation (3) can be combined with priors on the parameters to obtain the joint posterior distribution, but this posterior does not yield known conditional posteriors suitable for a tractable MCMC algorithm. Hence, we utilize the mixture representation of the GAL distribution, obtained by introducing a shape parameter into the mean of the normal kernel in the normal-exponential mixture of the AL distribution and mixing with respect to a half-normal distribution (Yan and Kottas, 2017; Rahman and Karnawat, 2019). Unlike the AL distribution, the GAL distribution allows the skewness and mode to vary for a given quantile, offering some desirable flexibility. For \(\varepsilon_{it}\sim\text{GAL}(0,\sigma,p_{0},\gamma)\), the mixture representation can be expressed as

\[\varepsilon_{it}=\sigma A\omega_{it}+\sigma C|\gamma|s_{it}+\sigma(B\omega_{it})^{\frac{1}{2}}u_{it}, \tag{4}\]

where \(s_{it}\sim N^{+}(0,1)\), \(\omega_{it}\sim\mathcal{E}(1)\), \(u_{it}\sim N(0,1)\), \(A\equiv A(p)=\frac{1-2p}{p(1-p)}\), \(B\equiv B(p)=\frac{2}{p(1-p)}\), \(C=[I(\gamma>0)-p]^{-1}\), and \(p\) is as defined earlier. Here, \(N^{+},\mathcal{E},N\) denote the half-normal, exponential, and normal distributions, respectively. Note that the GAL mixture distribution reduces to an AL mixture distribution when \(\gamma\) is set to \(0\), as mentioned earlier.

Figure 1: Probability density plots of the AL (\(\gamma=0\)) and GAL (\(\gamma\neq 0\)) distributions.

Substituting the mixture representation given by Equation (4) into Equation (1), the model can be written as \(y_{it}=x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i}+\sigma A\omega_{it}+\sigma C|\gamma|s_{it}+\sigma(B\omega_{it})^{\frac{1}{2}}u_{it}\). In this formulation, the scale parameter appears in the conditional mean, which is not suitable for estimation (Kozumi and Kobayashi, 2011). Therefore, we make the transformations \(h_{it}=\sigma s_{it}\), \(\nu_{it}=\sigma\omega_{it}\) and rewrite the model as

\[y_{it}=x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i}+A\nu_{it}+C|\gamma|h_{it}+(\sigma B\nu_{it})^{\frac{1}{2}}u_{it}. \tag{5}\]
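Before proceeding to the stacked form, the mixture representation in Equation (4) also gives a direct way to simulate GAL errors, and hence toy panels following Equation (1). The sketch below computes \(p(\gamma,p_{0})\) from the definitions under Equation (2); the parameter values and panel dimensions are arbitrary illustrations:

```python
import numpy as np
from scipy.stats import norm

def gal_errors(size, sigma, p0, gamma, rng):
    """Draw GAL(0, sigma, p0, gamma) errors via the mixture in Equation (4):
    eps = sigma*A*omega + sigma*C*|gamma|*s + sigma*sqrt(B*omega)*u,
    with s ~ N+(0,1), omega ~ Exp(1), u ~ N(0,1)."""
    g = 2.0 * norm.cdf(-abs(gamma)) * np.exp(gamma**2 / 2.0)   # g(gamma)
    p = (gamma < 0) + (p0 - (gamma < 0)) / g                   # p(gamma, p0)
    A = (1.0 - 2.0 * p) / (p * (1.0 - p))
    B = 2.0 / (p * (1.0 - p))
    C = 1.0 / ((gamma > 0) - p)
    s = np.abs(rng.standard_normal(size))        # half-normal draws
    omega = rng.exponential(1.0, size)           # exponential draws
    u = rng.standard_normal(size)                # standard normal draws
    return sigma * A * omega + sigma * C * abs(gamma) * s + sigma * np.sqrt(B * omega) * u

rng = np.random.default_rng(1)
n, T, p0 = 500, 6, 0.25
beta = np.array([1.0, -0.5])                     # common effects (illustrative)
x = rng.normal(size=(n, T, 2))
alpha = rng.normal(scale=0.7, size=n)            # random intercepts, l = 1
eps = gal_errors((n, T), sigma=1.0, p0=p0, gamma=-0.3, rng=rng)
y = x @ beta + alpha[:, None] + eps              # Equation (1) with z_it = 1

# Sanity check: for the quantile-fixed GAL, about p0 of the errors
# fall below the location parameter 0.
print((eps < 0).mean())                           # ~ 0.25
```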
Stacking the model given by Equation (5) for each individual \(i\), we get
\[y_{i}=X_{i}\beta+Z_{i}\alpha_{i}+A\nu_{i}+C|\gamma|h_{i}+\Lambda_{i}^{1/2}u_{i}, \tag{6}\]
where \(y_{i}=(y_{i1},\cdots,y_{iT_{i}})^{\prime}\), \(X_{i}=(x^{\prime}_{i1},\cdots,x^{\prime}_{iT_{i}})^{\prime}\) is the design matrix of size \(T_{i}\times k\) for each individual \(i\), \(Z_{i}=(z^{\prime}_{i1},\cdots,z^{\prime}_{iT_{i}})^{\prime}\) is the \(T_{i}\times l\) matrix of covariates associated with the random effects, \(\nu_{i}=(\nu_{i1},\cdots,\nu_{iT_{i}})^{\prime}\), \(h_{i}=(h_{i1},\cdots,h_{iT_{i}})^{\prime}\), \(u_{i}=(u_{i1},\cdots,u_{iT_{i}})^{\prime}\), and the diagonal matrix
\[\Lambda_{i}=\begin{bmatrix}\sigma B\nu_{i1}&0&\cdots&0\\ 0&\sigma B\nu_{i2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\sigma B\nu_{iT_{i}}\end{bmatrix}.\]
The model in Equation (6) implies that \(y_{i}|\beta,\alpha_{i},\nu_{i},h_{i},\sigma,\gamma\sim N_{T_{i}}\left(X_{i}\beta+Z_{i}\alpha_{i}+A\nu_{i}+C|\gamma|h_{i},\ \Lambda_{i}\right)\), which is combined with the priors
\[\beta\sim N(\beta_{0},B_{0}),\ \ \ \ \ \sigma\sim IG\left(\frac{n_{0}}{2},\frac{d_{0}}{2}\right),\ \ \ \ \ \gamma\sim Unif(L,U),\ \ \ \ \ \varphi^{2}\sim IG\left(\frac{c_{1}}{2},\frac{d_{1}}{2}\right), \tag{7}\]
to obtain the joint posterior distribution. Let \(\Theta=(\beta,\alpha,\nu,h,\sigma,\gamma,\varphi^{2})\); then the posterior distribution can be expressed as follows,
\[\begin{split}\pi(\Theta|y)&\propto f(y|\Theta)\times\pi(\alpha|\varphi^{2})\times\pi(\nu)\times\pi(h)\times\pi(\beta)\times\pi(\sigma)\times\pi(\varphi^{2})\times\pi(\gamma)\\ &\propto\prod_{i=1}^{n}\bigg{[}|\Lambda_{i}|^{-\frac{1}{2}}\exp\bigg{\{}-\frac{1}{2}(y_{i}-X_{i}\beta-Z_{i}\alpha_{i}-A\nu_{i}-C|\gamma|h_{i})^{\prime}\Lambda_{i}^{-1}(y_{i}-X_{i}\beta-Z_{i}\alpha_{i}-A\nu_{i}\\ &\qquad-C|\gamma|h_{i})\bigg{\}}\times\left(\varphi^{2}\right)^{-\frac{l}{2}}\exp\bigg{\{}-\frac{1}{2}\frac{\alpha^{\prime}_{i}\alpha_{i}}{\varphi^{2}}\bigg{\}}\times\left(\sigma^{-T_{i}}\exp\Big{\{}-\sum_{t=1}^{T_{i}}\frac{\nu_{it}}{\sigma}\Big{\}}\right)\\ &\qquad\times\left(\sigma^{-T_{i}}\exp\left\{-\frac{h^{\prime}_{i}h_{i}}{2\sigma^{2}}\right\}\right)\bigg{]}\times\exp\bigg{\{}-\frac{1}{2}(\beta-\beta_{0})^{\prime}B_{0}^{-1}(\beta-\beta_{0})\bigg{\}}\\ &\qquad\times\left(\sigma^{-\frac{n_{0}}{2}-1}\exp\left\{-\frac{d_{0}}{2\sigma}\right\}\right)\times\left(\varphi^{2}\right)^{-\frac{c_{1}}{2}-1}\exp\left\{-\frac{d_{1}}{2\varphi^{2}}\right\},\end{split} \tag{8}\]
where \(f(y|\Theta)\) denotes the density, conditional on \(\alpha\), resulting from the stacked FREQ model given by Equation (6). The FREQ model has several appealing properties: (i) it can accommodate both common and random effect parameters; (ii) random effects can be associated with multiple variables, in addition to the constant, allowing for both slope and intercept heterogeneity; (iii) quantile regression gives us the ability to explore the entire distribution of the outcome variable \(y\); and (iv) flexibility in the skewness parameter allows a better fit across various settings. Economic data that are skewed, exhibit power laws, or present odd asymmetries would benefit from the model offered in this paper. Such data include distributions of income, bank assets, social networks, and house or rental prices, the last of which is explored in this paper.

### Estimation

The FREQ model can be estimated by sampling the objects of interest, \((\beta,\alpha,\nu,h,\sigma,\gamma,\varphi^{2})\), from their respective conditional posterior densities as in Algorithm 1.
We first sample the parameters \((\beta,\alpha)\) in a block, conditional on the remaining parameters, where \(\beta\) is sampled marginally of \(\alpha\) and then \(\alpha\) is sampled conditional on \(\beta\). Both conditional posteriors follow normal distributions with hyperparameters updated as shown in Algorithm 1. We utilize block sampling for two reasons: (i) to account for possible correlation between the two parameters, and (ii) to reduce the inefficiency factors in the MCMC draws (Chib and Carlin, 1999; Greenberg, 2012). The random effects variance parameter, \(\varphi^{2}\), is sampled from an inverse-Gamma distribution with updated hyperparameters. The scale and shape parameters \((\sigma,\gamma)\) are jointly sampled, marginally of \((\nu,h)\), using a random-walk Metropolis-Hastings (MH) algorithm (Chib and Greenberg, 1995). Here, the target density is the product of the GAL likelihood and the prior distributions, given by Equation (3) and Equation (7), respectively, while the proposal values are drawn from a bivariate truncated normal distribution. We note that joint sampling of \((\sigma,\gamma)\) is critical to reducing the autocorrelation of the MCMC draws and hence to the efficiency of the algorithm. The mixture variable \(\nu\) is sampled from a generalized inverse Gaussian (GIG) distribution, draws from which are generated using the technique in Devroye (2014). Lastly, the mixture variable \(h\), conditional on \((\sigma,\gamma)\) and the remaining parameters, is sampled from a half-normal distribution with updated hyperparameters. The derivations of the conditional posterior distributions and further details of Algorithm 1 are presented in Appendix A.

To demonstrate the practical utility of the FREQ model, we also estimate the more established REQ model using the sampler presented in Algorithm 2. This sampling algorithm contains two important improvements over the sampler proposed in Luo et al. (2012). First, \((\beta,\alpha)\) are sampled in a single block, which significantly lowers the autocorrelation in the MCMC draws and improves the mixing of the Markov chain. Because we can achieve lower inefficiency factors in Algorithm 2, the number of MCMC draws can typically be reduced, thereby decreasing computational burdens and run times. Second, we correct the updating for \(\sigma\) by including the terms involving the exponential variable in the updated hyperparameters. The resulting MCMC algorithm is fast, efficient, and maintains the tractability of the sampling distributions.

### Bayesian Model Comparison and Marginal Likelihood Estimation

To properly address model uncertainty, Bayesian model comparison proceeds by representing the posterior model probability of model \(\mathcal{M}_{s}\) given the data \(y\) as
\[\Pr(\mathcal{M}_{s}|y)\propto\Pr(\mathcal{M}_{s})m(y|\mathcal{M}_{s}),\]
where \(\Pr(\mathcal{M}_{s})\) is the prior model probability and \(m(y|\mathcal{M}_{s})\) is the marginal likelihood.
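Once the log marginal likelihoods of competing models have been estimated, converting them into posterior model probabilities is immediate. The helper below is a purely illustrative sketch (names are ours) that assumes equal prior model probabilities and uses a log-sum-exp stabilization.

```python
import numpy as np

def posterior_model_probs(log_ml, log_prior=None):
    # Pr(M_s | y) proportional to Pr(M_s) * m(y | M_s), on the log scale
    log_ml = np.asarray(log_ml, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_ml)  # equal prior model probabilities
    w = log_ml + log_prior
    w -= w.max()                           # log-sum-exp stabilization
    p = np.exp(w)
    return p / p.sum()

# two hypothetical log marginal likelihoods for two competing models
print(posterior_model_probs([-305.8, -301.4]))
```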
Given the sampling density \(f(y|\mathcal{M}_{s},\Theta_{s})\) and prior distribution \(\pi(\Theta_{s}|\mathcal{M}_{s})\) under model \(\mathcal{M}_{s}\), the marginal likelihood is defined as the integral
\[m(y|\mathcal{M}_{s})=\int f(y|\mathcal{M}_{s},\Theta_{s})\pi(\Theta_{s}|\mathcal{M}_{s})\,d\Theta_{s},\]
which can also be expressed, using Bayes' theorem, as
\[m(y|\mathcal{M}_{s})=\frac{f(y|\mathcal{M}_{s},\Theta_{s})\,\pi(\Theta_{s}|\mathcal{M}_{s})}{\pi(\Theta_{s}|y,\mathcal{M}_{s})}, \tag{9}\]
where the numerator is the product of the likelihood function and the prior density, and the denominator is the joint posterior density (Chib, 1995; Chib and Jeliazkov, 2001). Equation (9) is known as the _basic marginal likelihood identity_ since it holds for all values in the parameter space. However, the marginal likelihood estimate is typically computed at a high-density point (such as the mean or mode), denoted \(\Theta_{s}^{*}\), to minimize estimation variability. The numerator quantities in Equation (9) are generally directly available, and therefore the problem of marginal likelihood estimation reduces to finding an estimate of the posterior ordinate in the denominator of Equation (9).

Well-known properties of Bayesian model comparisons based on marginal likelihoods and their ratios, or Bayes factors, are that they lead to finite-sample model probabilities, do not require competing models to be nested, and have appealing asymptotic properties that give rise to information criteria (Greenberg, 2012). Another important, yet underappreciated, point is that marginal likelihoods provide a measure of sequential out-of-sample predictive fit, which can be seen by writing
\[m(y|\mathcal{M}_{s}) = \prod_{i=1}^{n}m(y_{i}|\{y_{j}\}_{j<i},\mathcal{M}_{s}) = \prod_{i=1}^{n}\int f(y_{i}|\{y_{j}\}_{j<i},\Theta_{s},\mathcal{M}_{s})\pi(\Theta_{s}|\{y_{j}\}_{j<i},\mathcal{M}_{s})\,d\Theta_{s}.\]
Therefore, the adequacy of the model as captured by the marginal likelihood corresponds to the cumulative out-of-sample predictive record, where the fit of \(y_{i}\) is measured with respect to the posterior density using the data \(\{y_{j}\}_{j<i}\) up to the \(i\)th data point. This is in sharp contrast to in-sample measures of fit that condition on the entire data set \(y\). Also, the marginal likelihood is invariant to permutations of the indices of the data, so the same \(m(y|\mathcal{M}_{s})\) is obtained if the data are rearranged. We next consider the computation of the marginal likelihood for the FREQ and REQ models.

#### 2.3.1 Marginal Likelihood for the FREQ Model

The marginal likelihood for the FREQ model is derived following Chib and Jeliazkov (2001), since the conditional posterior for \((\sigma,\gamma)\) does not have a tractable form and is sampled using an MH algorithm (see Algorithm 1). Let \(\Theta=(\beta,\varphi^{2},\Theta_{1})\), where \(\Theta_{1}=(\sigma,\gamma)\); then the joint posterior density for the FREQ model (marginally of \(\alpha\), \(\nu\), and \(h\)) can be expressed as
\[\pi(\beta^{*},\varphi^{2*},\Theta_{1}^{*}|y)=\pi(\Theta_{1}^{*}|y)\,\pi(\beta^{*}|y,\Theta_{1}^{*})\,\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*}), \tag{10}\]
where \((\beta^{*},\varphi^{2*},\Theta_{1}^{*})\) denotes a high-density point of \((\beta,\varphi^{2},\Theta_{1})\). The latent variables \((\alpha,\nu,h)\) are marginalized to reduce the computational burden, since computing high-dimensional ordinates is costly and leads to inefficient estimates.
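Before detailing how each ordinate is estimated, the following schematic sketch (placeholder numbers only) shows how the pieces assemble on the log scale: Equation (9) gives \(\log m(y)=\log f(y|\Theta^{*})+\log\pi(\Theta^{*})-\log\pi(\Theta^{*}|y)\), with the log posterior ordinate obtained by summing the logs of the three terms in Equation (10).

```python
import numpy as np

def log_marginal_likelihood(log_lik_star, log_prior_star, log_ordinates):
    # log m(y) = log f(y|Theta*) + log pi(Theta*) - log pi(Theta*|y),
    # where log pi(Theta*|y) is the sum of the three log ordinates in (10)
    return log_lik_star + log_prior_star - np.sum(log_ordinates)

# hypothetical values: log likelihood and log prior at Theta*, plus the
# three estimated log ordinates for Theta1*, beta*, and phi2*
print(log_marginal_likelihood(-320.0, -9.5, [1.2, 3.4, 0.7]))
```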
Moreover, in the decomposition presented in Equation (10), we have intentionally placed the intractable posterior ordinate \(\pi(\Theta_{1}^{*}|y)\) first so as to avoid the MH step in the _reduced MCMC run_, the process of running an MCMC sampler with one or more parameters fixed at some value (Greenberg, 2012). We first estimate \(\pi(\Theta_{1}^{*}|y)\), followed by \(\pi(\beta^{*}|y,\Theta_{1}^{*})\), and lastly, \(\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*})\).

To get an estimate of \(\pi(\Theta_{1}^{*}|y)\), we first need to express the ordinate in a computationally convenient formulation. We know \(\Theta_{1}\) is sampled using an MH step, which requires a proposal density and a transition kernel. Define the transition kernel from \(\Theta_{1}\) to \(\Theta_{1}^{*}\) as
\[P(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)=\alpha_{MH}(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\;q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha), \tag{11}\]
where \(q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\) denotes the proposal density for the transition from \(\Theta_{1}\) to \(\Theta_{1}^{*}\), and
\[\alpha_{MH}(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)=\min\bigg{\{}1,\frac{f_{GAL}(y,\alpha|\beta,\Theta_{1}^{*})\;\pi(\beta,\Theta_{1}^{*})}{f_{GAL}(y,\alpha|\beta,\Theta_{1})\;\pi(\beta,\Theta_{1})}\;\frac{q(\Theta_{1}^{*},\Theta_{1}|y,\beta,\varphi^{2},\alpha)}{q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)}\bigg{\}} \tag{12}\]
denotes the probability of making the move from \(\Theta_{1}\) to \(\Theta_{1}^{*}\). Note that the conditioning of the proposal density on \(y\) and the remaining parameters is only for the sake of generality; a particular proposal density may be independent of both \(y\) and \((\beta,\varphi^{2},\alpha)\). Since the transition kernel in Equation (11) satisfies the reversibility condition, we exploit this property and, through suitable modifications following Chib and Jeliazkov (2001), arrive at the following expression,
\[\pi(\Theta_{1}^{*}|y)=\frac{E_{1}\{\alpha_{MH}(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\,q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\}}{E_{2}\{\alpha_{MH}(\Theta_{1}^{*},\Theta_{1}|y,\beta,\varphi^{2},\alpha)\}}, \tag{13}\]
where \(E_{1}\) represents an expectation with respect to the posterior distribution \(\pi(\Theta_{1},\beta,\varphi^{2},\alpha|y)\) and \(E_{2}\) represents an expectation with respect to the distribution \(\pi(\beta,\varphi^{2},\alpha|y,\Theta_{1}^{*})\times q(\Theta_{1}^{*},\Theta_{1}|y)\). In this formulation, the numerator in Equation (13) can be estimated by using draws from the _complete MCMC run_ and taking an average of \(\alpha_{MH}(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\,q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\), where \(\alpha_{MH}(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\) is given by Equation (12) and \(q(\Theta_{1},\Theta_{1}^{*}|y,\beta,\varphi^{2},\alpha)\) is the bivariate truncated normal distribution described in Algorithm 1. To compute the denominator in Equation (13), we note that the distribution \(\pi(\beta,\varphi^{2},\alpha|y,\Theta_{1}^{*})\) is conditioned on \(\Theta_{1}^{*}\). Therefore, we conduct a _reduced run_ of Algorithm 1, i.e., we sample \(\beta\), \(\alpha\), \(\varphi^{2}\), \(\nu\), and \(h\) with \(\Theta_{1}=(\sigma,\gamma)\) fixed at \(\Theta_{1}^{*}=(\sigma^{*},\gamma^{*})\).
Additionally, at each iteration of the reduced run, we generate \(\Theta_{1}^{(m)}\sim q(\Theta_{1}^{*},\Theta_{1}|y,\beta^{(m)},\varphi^{2(m)},\alpha^{(m)})\). The draws \(\{\beta^{(m)},\varphi^{2(m)},\alpha^{(m)},\Theta_{1}^{(m)}\}\) obtained from such a procedure are draws from \(\pi(\beta,\varphi^{2},\alpha|y,\Theta_{1}^{*})\times q(\Theta_{1}^{*},\Theta_{1}|y)\), which can be utilized to compute the denominator. Therefore, an estimate of the posterior ordinate \(\pi(\Theta_{1}^{*}|y)\) can be obtained as
\[\hat{\pi}(\Theta_{1}^{*}|y)=\frac{M^{-1}\sum_{m=1}^{M}\{\alpha_{MH}(\Theta_{1}^{(m)},\Theta_{1}^{*}|y,\beta^{(m)},\varphi^{2(m)},\alpha^{(m)})\;q(\Theta_{1}^{(m)},\Theta_{1}^{*}|y,\beta^{(m)},\varphi^{2(m)},\alpha^{(m)})\}}{M_{1}^{-1}\sum_{m=1}^{M_{1}}\alpha_{MH}(\Theta_{1}^{*},\Theta_{1}^{(m)}|y,\beta^{(m)},\varphi^{2(m)},\alpha^{(m)})}, \tag{14}\]
where \(M\) and \(M_{1}\) denote the number of MCMC draws from the _complete_ and (first) _reduced_ MCMC runs, respectively.

Next, we need to estimate \(\pi(\beta^{*}|y,\Theta_{1}^{*})\) and \(\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*})\). We already have the sequence of draws \(\{\beta^{(m)},\varphi^{2(m)},\alpha^{(m)},\nu^{(m)},h^{(m)}\}_{m=1}^{M_{1}}\) from the _reduced_ MCMC run conditioned on \(\Theta_{1}^{*}\). These draws can be utilized to estimate \(\pi(\beta^{*}|y,\Theta_{1}^{*})=\int\pi(\beta^{*}|y,\Theta_{1}^{*},\alpha,\varphi^{2},\nu,h)\,d\alpha\,d\varphi^{2}\,d\nu\,dh\) as follows,
\[\hat{\pi}(\beta^{*}|y,\Theta_{1}^{*})=\frac{\sum_{m=1}^{M_{1}}\pi(\beta^{*}|y,\Theta_{1}^{*},\varphi^{2(m)},\alpha^{(m)},\nu^{(m)},h^{(m)})}{M_{1}}. \tag{15}\]
To estimate \(\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*})\), we conduct a second _reduced MCMC run_, i.e., we run Algorithm 1 for \(M_{2}\) iterations with \((\beta,\Theta_{1})\) fixed at \((\beta^{*},\Theta_{1}^{*})\). The resulting draws \(\{\alpha^{(m)},\nu^{(m)},h^{(m)}\}_{m=1}^{M_{2}}\) can be utilized to estimate \(\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*})=\int\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*},\alpha,\nu,h)\,d\alpha\,d\nu\,dh\), as given by
\[\hat{\pi}(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*})=\frac{\sum_{m=1}^{M_{2}}\pi(\varphi^{2*}|y,\beta^{*},\Theta_{1}^{*},\alpha^{(m)},\nu^{(m)},h^{(m)})}{M_{2}}. \tag{16}\]
Substituting the expressions from Equations (14)-(16) in Equation (10), we have an estimate of the joint posterior ordinate \(\pi(\beta^{*},\varphi^{2*},\Theta_{1}^{*}|y)\). The other quantities in the marginal likelihood (see Equation 9) are the prior ordinates and the likelihood of the FREQ model. Both quantities require straightforward evaluations. All the prior distributions are completely known (see Equation 7), so the prior ordinates can be easily evaluated at a chosen high-density point \(\Theta^{*}=(\beta^{*},\varphi^{2*},\sigma^{*},\gamma^{*})\). The likelihood also requires evaluation at \(\Theta^{*}\), but we first need to express it marginally of \((\alpha,\nu,h)\) since we marginalized them while computing the joint posterior ordinate. The required FREQ model likelihood can be written as
\[f(y|\beta,\varphi^{2},\sigma,\gamma) = \int f(y,\alpha|\beta,\varphi^{2},\sigma,\gamma)\,d\alpha = \int\prod_{i=1}^{n}\bigg{[}\bigg{\{}\prod_{t=1}^{T_{i}}f_{GAL}\left(y_{it}|x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i},\sigma,p_{0},\gamma\right)\bigg{\}}f(\alpha_{i}|\varphi^{2})\,d\alpha_{i}\bigg{]},\]
where \(f_{GAL}\) denotes the density of the GAL distribution.
The likelihood can be computed at \(\Theta^{*}=(\beta^{*},\varphi^{2*},\sigma^{*},\gamma^{*})\) using Monte Carlo integration as follows,
\[f(y|\beta^{*},\varphi^{2*},\sigma^{*},\gamma^{*})\simeq\sum_{j=1}^{J}\frac{f(y|\beta^{*},\sigma^{*},\gamma^{*},\alpha^{(j)})}{J},\]
where \(\{\alpha_{i}^{(j)}\}\) are draws from \(f(\alpha_{i}|\varphi^{2*})\) for \(i=1,\cdots,n\), and \(J\) is some large number. Additionally, \((\nu,h)\) are automatically marginalized since \(f_{GAL}(\cdot)\) is the GAL density, which does not involve any mixture variables (see Equation (2) in Rahman and Karnawat, 2019, for the form of the density).

#### 2.3.2 Marginal Likelihood for the REQ Model

The derivation of the marginal likelihood for the REQ model follows Chib (1995), since all the conditional posteriors have a known form (see Algorithm 2). Let \(\Theta=(\beta,\sigma,\varphi^{2})\); then the joint posterior (marginally of \(\alpha\) and \(\nu\)) can be expressed as
\[\pi(\Theta^{*}|y)=\pi(\beta^{*}|y)\,\pi(\varphi^{2*}|y,\beta^{*})\,\pi(\sigma^{*}|y,\beta^{*},\varphi^{2*}), \tag{17}\]
where the \(*\) on the parameters denotes a high-density point. Each expression on the right-hand side of Equation (17) can be written in terms of the conditional posteriors (see Algorithm 2), and an estimate is obtained by taking the ergodic average of the conditional posterior density with MCMC draws from either the complete or reduced runs. The posterior density \(\pi(\beta^{*}|y)\) is expressed as \(\pi(\beta^{*}|y)=\int\pi(\beta^{*}|y,\nu,\sigma,\varphi^{2})\,d\nu\,d\sigma\,d\varphi^{2}\), and its estimate is computed as \(\hat{\pi}(\beta^{*}|y)=G^{-1}\sum_{g=1}^{G}\pi(\beta^{*}|y,\nu^{(g)},\sigma^{(g)},\varphi^{2(g)})\), where the \(G\) draws are from the _complete_ MCMC run. The remaining two terms are reduced conditional density ordinates and require MCMC draws from two separate _reduced runs_. To obtain an estimate of \(\pi(\varphi^{2*}|y,\beta^{*})=\int\pi(\varphi^{2*}|y,\beta^{*},\sigma,\alpha,\nu)\,d\sigma\,d\alpha\,d\nu\), we conduct a (first) _reduced run_, i.e., we run Algorithm 2 for \(G_{1}\) iterations with \(\beta\) fixed at \(\beta^{*}\). We then compute an estimate of the ordinate as \(G_{1}^{-1}\sum_{g=1}^{G_{1}}\pi(\varphi^{2*}|y,\beta^{*},\sigma^{(g)},\alpha^{(g)},\nu^{(g)})\). Finally, an estimate of the third term, \(\pi(\sigma^{*}|y,\beta^{*},\varphi^{2*})=\int\pi(\sigma^{*}|y,\beta^{*},\varphi^{2*},\alpha,\nu)\,d\alpha\,d\nu\), is obtained as \(\hat{\pi}(\sigma^{*}|y,\beta^{*},\varphi^{2*})=G_{2}^{-1}\sum_{g=1}^{G_{2}}\pi(\sigma^{*}|y,\beta^{*},\varphi^{2*},\alpha^{(g)},\nu^{(g)})\), where the \(G_{2}\) Gibbs draws are from the second _reduced run_ of Algorithm 2 with \((\beta,\varphi^{2})\) fixed at \((\beta^{*},\varphi^{2*})\). With an estimate of the joint posterior ordinate now available, we need to compute the prior ordinates and the likelihood to estimate the marginal likelihood for the REQ model. The prior ordinates are readily available since the prior distributions for \((\beta,\sigma,\varphi^{2})\) have tractable forms. The likelihood is calculated marginally of \((\alpha,\nu)\) since we marginalized them while computing the joint posterior ordinate.
The required likelihood can be written as
\[f(y|\beta,\sigma,\varphi^{2})=\int f(y,\alpha|\beta,\sigma,\varphi^{2})\,d\alpha=\int\prod_{i=1}^{n}\bigg{[}\bigg{\{}\prod_{t=1}^{T_{i}}f_{AL}\left(y_{it}|x^{\prime}_{it}\beta+z^{\prime}_{it}\alpha_{i},\sigma,p\right)\bigg{\}}f(\alpha_{i}|\varphi^{2})\,d\alpha_{i}\bigg{]},\]
where \(f_{AL}\) denotes the density of the AL distribution. The above expression can be computed using Monte Carlo integration at \(\Theta^{*}=(\beta^{*},\sigma^{*},\varphi^{2*})\) as follows,
\[f(y|\beta^{*},\sigma^{*},\varphi^{2*})\simeq\sum_{j=1}^{J}\frac{f(y|\beta^{*},\sigma^{*},\alpha^{(j)})}{J},\]
where \(\{\alpha_{i}^{(j)}\}\) are draws from \(f(\alpha_{i}|\varphi^{2*})\) for \(i=1,\cdots,n\), with \(J\) being a large number. Note that \(\nu\) is automatically marginalized by virtue of not using the mixture representation of the AL distribution in the likelihood.

## 3 Simulation Studies

In this section, we conduct several simulation studies to illustrate the performance of the proposed MCMC algorithm for estimating the FREQ model, and we compare the results to those from the REQ model (estimated using Algorithm 2) in order to understand the benefits of the additional flexibility of the GAL distribution. Specifically, the data for the simulation studies are generated from the following panel data model,
\[y_{it}=\alpha_{1i}+\alpha_{2i}\,z_{2it}+\beta_{1}+\beta_{2}\,x_{2it}+\beta_{3}\,x_{3it}+\varepsilon_{it},\]
where \(\alpha=(\alpha_{1},\alpha_{2})^{\prime}\sim N_{2}([0,0]^{\prime},[1,0;0,1])\), \(\beta=(\beta_{1},\beta_{2},\beta_{3})^{\prime}=(10,5,2)^{\prime}\), \(z_{2it}\sim\text{Unif}(0,1)\), \(x_{2}\sim N(0,0.25)\), \(x_{3}\sim N(2,0.25)\), and the errors \(\varepsilon\) were generated from a standard logistic distribution \(\mathcal{L}(0,1)\). We generate 9 different data samples with \(T_{i}=(5,10,15)\) and \(n=(100,250,500)\), where \(T_{i}\) denotes the number of repeated observations for each individual \(i\) and \(n\) represents the number of individuals. In each simulation study, the posterior estimates of the parameters in the FREQ model are obtained based on the simulated data and the following prior distributions: \(\beta\sim N(0_{k},100I_{k})\), \(\varphi^{2}\sim IG(12/2,10/2)\), \(\sigma\sim IG(5/2,8/2)\), and \(\gamma\sim\text{Unif}(L,U)\), where \((L,U)\) are obtained as mentioned in Section 2. The same prior distributions are employed for the REQ model.

Table 1 reports, for each simulated dataset, the MCMC results at five different quantiles obtained from 10,000 iterations after a burn-in of 2,500 iterations. Inefficiency factors are calculated using the formula \(1+2\sum_{t=1}^{T}\rho_{k}(t)\big{(}\frac{T-t}{T}\big{)}\), where \(\rho_{k}(t)\) denotes the autocorrelation for the \(k\)th parameter at lag \(t\), and \(T\) is the lag at which the autocorrelations taper off (typically, to below 0.05 or 0.10). In the MH sampling of \((\sigma,\gamma)\), the tuning factor \(\iota\) is adjusted to obtain an acceptance rate of approximately 30 percent. Convergence of the MCMC draws is quick, as demonstrated in the trace plots for the 25th quantile from Simulation Study 1 in Figure 2. The trace plots for the remaining quantiles in Simulation Study 1 and all quantiles in the other 8 simulation studies are similar to Figure 2, except for \(\gamma\) at the 50th quantile, where the posterior mean is not statistically different from zero, i.e., the posterior mean of \(\gamma\) is close to 0 with a high standard deviation.
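For reference, the simulation design can be replicated in a few lines. The sketch below is our illustration (we read \(N(0,0.25)\) and \(N(2,0.25)\) as mean-variance pairs) and generates one dataset for Simulation Study 1 (\(T_{i}=5\), \(n=100\)).

```python
import numpy as np

rng = np.random.default_rng(1)
n, Ti = 100, 5                                   # Simulation Study 1
beta = np.array([10.0, 5.0, 2.0])                # true common effects

alpha = rng.standard_normal((n, 2))              # (alpha_1i, alpha_2i) ~ N2(0, I2)
z2 = rng.uniform(0.0, 1.0, (n, Ti))              # z_2it ~ Unif(0, 1)
x2 = rng.normal(0.0, np.sqrt(0.25), (n, Ti))     # x_2 ~ N(0, 0.25)
x3 = rng.normal(2.0, np.sqrt(0.25), (n, Ti))     # x_3 ~ N(2, 0.25)
eps = rng.logistic(0.0, 1.0, (n, Ti))            # standard logistic errors

y = (alpha[:, [0]] + alpha[:, [1]] * z2          # random intercept and slope
     + beta[0] + beta[1] * x2 + beta[2] * x3 + eps)
print(y.shape)  # (100, 5)
```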
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c}
\hline
 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\cline{2-16}
SS1 & mean & sd & if & mean & sd & if & mean & sd & if & mean & sd & if & mean & sd & if \\
\hline
\(\beta_{1}\) & 8.12 & 0.67 & 5.53 & 9.16 & 0.65 & 3.63 & 10.08 & 0.61 & 2.11 & 11.11 & 0.70 & 4.41 & 12.17 & 0.65 & 5.27 \\
\(\beta_{2}\) & 4.72 & 0.33 & 5.39 & 4.81 & 0.32 & 3.73 & 4.87 & 0.30 & 2.13 & 4.79 & 0.34 & 4.17 & 4.88 & 0.31 & 5.35 \\
\(\beta_{3}\) & 2.01 & 0.32 & 5.38 & 1.98 & 0.32 & 3.71 & 2.04 & 0.30 & 2.14 & 1.99 & 0.34 & 4.19 & 1.94 & 0.31 & 5.29 \\
\(\varphi^{2}\) & 0.98 & 0.20 & 8.28 & 0.98 & 0.19 & 6.82 & 1.04 & 0.20 & 5.01 & 0.99 & 0.20 & 6.75 & 1.03 & 0.19 & 6.14 \\
\(\sigma\) & 0.42 & 0.03 & 7.94 & 0.51 & 0.06 & 8.09 & 0.64 & 0.03 & 6.20 & 0.48 & 0.08 & 14.70 & 0.41 & 0.03 & 8.04 \\
\(\gamma\) & 2.98 & 0.29 & 7.45 & 1.35 & 0.20 & 7.82 & 0.07 & 0.10 & 15.59 & \(-1.42\) & 0.27 & 12.41 & \(-2.99\) & 0.30 & 7.41 \\
\hline
SS2 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 7.76 & 0.46 & 5.94 & 8.88 & 0.44 & 3.53 & 9.90 & 0.43 & 2.26 & 10.95 & 0.43 & 4.06 & 12.10 & 0.44 & 5.58 \\
\(\beta_{2}\) & 5.16 & 0.21 & 5.47 & 5.17 & 0.21 & 3.30 & 5.19 & 0.21 & 2.22 & 5.21 & 0.21 & 4.04 & 5.23 & 0.21 & 5.66 \\
\(\beta_{3}\) & 2.14 & 0.22 & 6.05 & 2.05 & 0.21 & 3.58 & 2.04 & 0.21 & 2.22 & 2.01 & 0.21 & 4.16 & 1.90 & 0.21 & 5.66 \\
\(\varphi^{2}\) & 1.41 & 0.16 & 5.73 & 1.36 & 0.16 & 5.51 & 1.41 & 0.16 & 4.14 & 1.35 & 0.16 & 5.02 & 1.44 & 0.16 & 6.19 \\
\(\sigma\) & 0.42 & 0.02 & 7.99 & 0.52 & 0.04 & 12.23 & 0.64 & 0.02 & 7.48 & 0.48 & 0.06 & 14.01 & 0.41 & 0.02 & 9.10 \\
\(\gamma\) & 2.93 & 0.20 & 7.12 & 1.26 & 0.16 & 11.82 & 0.00 & 0.06 & 11.23 & \(-1.42\) & 0.19 & 13.52 & \(-3.00\) & 0.19 & 7.99 \\
\hline
SS3 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 7.91 & 0.31 & 5.34 & 8.87 & 0.32 & 5.04 & 9.97 & 0.31 & 2.49 & 11.02 & 0.31 & 3.82 & 11.99 & 0.31 & 5.90 \\
\(\beta_{2}\) & 5.09 & 0.15 & 5.58 & 5.09 & 0.15 & 4.79 & 5.13 & 0.15 & 2.25 & 5.11 & 0.15 & 4.20 & 5.03 & 0.16 & 5.69 \\
\(\beta_{3}\) & 2.06 & 0.15 & 5.65 & 2.05 & 0.15 & 4.89 & 2.01 & 0.15 & 2.32 & 2.01 & 0.15 & 3.87 & 2.01 & 0.15 & 6.06 \\
\(\varphi^{2}\) & 0.98 & 0.10 & 9.18 & 0.89 & 0.08 & 7.47 & 0.95 & 0.09 & 6.70 & 0.91 & 0.09 & 8.67 & 0.96 & 0.09 & 9.78 \\
\(\sigma\) & 0.42 & 0.02 & 8.81 & 0.46 & 0.06 & 34.95 & 0.67 & 0.01 & 6.19 & 0.52 & 0.05 & 18.13 & 0.43 & 0.02 & 9.17 \\
\(\gamma\) & 3.00 & 0.14 & 8.06 & 1.53 & 0.18 & 31.16 & \(-0.01\) & 0.04 & 13.82 & \(-1.35\) & 0.14 & 17.36 & \(-2.97\) & 0.14 & 7.86 \\
\hline
SS4 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 7.15 & 0.48 & 6.14 & 8.21 & 0.46 & 4.55 & 9.27 & 0.46 & 2.55 & 10.18 & 0.48 & 3.61 & 11.16 & 0.47 & 5.57 \\
\(\beta_{2}\) & 4.90 & 0.25 & 6.60 & 5.04 & 0.23 & 4.97 & 5.02 & 0.22 & 2.65 & 5.03 & 0.23 & 3.26 & 4.96 & 0.24 & 6.15 \\
\(\beta_{3}\) & 2.35 & 0.23 & 6.79 & 2.32 & 0.22 & 4.97 & 2.32 & 0.22 & 2.66 & 2.32 & 0.23 & 3.73 & 2.31 & 0.22 & 5.80 \\
\(\varphi^{2}\) & 1.27 & 0.21 & 5.48 & 1.14 & 0.18 & 4.13 & 1.17 & 0.18 & 3.41 & 1.14 & 0.18 & 3.43 & 1.12 & 0.18 & 4.18 \\
\(\sigma\) & 0.44 & 0.02 & 6.97 & 0.50 & 0.05 & 7.78 & 0.65 & 0.02 & 7.62 & 0.59 & 0.02 & 5.04 & 0.44 & 0.02 & 6.79 \\
\(\gamma\) & 2.91 & 0.17 & 6.70 & 1.39 & 0.15 & 7.43 & 0.11 & 0.06 & 9.74 & \(-1.00\) & 0.11 & 7.61 & \(-2.78\) & 0.21 & 7.37 \\
\hline
SS5 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 8.03 & 0.30 & 6.20 & 9.16 & 0.30 & 4.32 & 10.32 & 0.30 & 2.65 & 11.28 & 0.30 & 3.88 & 12.26 & 0.31 & 6.01 \\
\(\beta_{2}\) & 4.81 & 0.15 & 6.59 & 4.79 & 0.15 & 4.67 & 4.80 & 0.15 & 2.76 & 4.84 & 0.14 & 3.95 & 4.85 & 0.15 & 6.86 \\
\(\beta_{3}\) & 1.99 & 0.15 & 6.70 & 1.90 & 0.15 & 4.46 & 1.84 & 0.14 & 2.69 & 1.87 & 0.15 & 4.01 & 1.87 & 0.15 & 6.48 \\
\(\varphi^{2}\) & 1.04 & 0.11 & 5.95 & 1.01 & 0.11 & 4.75 & 1.05 & 0.11 & 4.04 & 1.01 & 0.10 & 4.60 & 1.06 & 0.12 & 5.84 \\
\(\sigma\) & 0.43 & 0.01 & 8.38 & 0.52 & 0.04 & 16.72 & 0.67 & 0.01 & 5.03 & 0.54 & 0.03 & 7.20 & 0.44 & 0.01 & 7.09 \\
\(\gamma\) & 2.94 & 0.13 & 7.67 & 1.37 & 0.13 & 15.95 & 0.01 & 0.04 & 12.67 & \(-1.30\) & 0.10 & 7.54 & \(-2.91\) & 0.13 & 7.14 \\
\hline
SS6 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 7.91 & 0.22 & 7.27 & 8.91 & 0.22 & 5.68 & 9.96 & 0.20 & 2.52 & 10.98 & 0.21 & 4.21 & 12.02 & 0.22 & 6.33 \\
\(\beta_{2}\) & 4.89 & 0.11 & 6.76 & 4.86 & 0.10 & 5.16 & 4.85 & 0.11 & 2.79 & 4.88 & 0.10 & 4.27 & 4.91 & 0.11 & 6.63 \\
\(\beta_{3}\) & 2.03 & 0.11 & 7.54 & 2.02 & 0.10 & 5.82 & 2.02 & 0.10 & 2.60 & 2.02 & 0.10 & 4.34 & 1.98 & 0.11 & 6.53 \\
\(\varphi^{2}\) & 0.97 & 0.08 & 6.68 & 0.89 & 0.07 & 5.89 & 0.95 & 0.07 & 3.92 & 0.92 & 0.07 & 4.72 & 0.94 & 0.08 & 7.12 \\
\(\sigma\) & 0.43 & 0.01 & 8.21 & 0.49 & 0.03 & 15.30 & 0.67 & 0.01 & 4.82 & 0.54 & 0.03 & 19.24 & 0.44 & 0.01 & 7.39 \\
\(\gamma\) & 2.96 & 0.09 & 7.14 & 1.47 & 0.10 & 14.88 & 0.04 & 0.03 & 14.67 & \(-1.28\) & 0.09 & 17.06 & \(-2.95\) & 0.09 & 7.57 \\
\hline
SS7 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
SS9 & \multicolumn{3}{c}{10th qtl} & \multicolumn{3}{c}{25th qtl} & \multicolumn{3}{c}{50th qtl} & \multicolumn{3}{c}{75th qtl} & \multicolumn{3}{c}{90th qtl} \\
\hline
\(\beta_{1}\) & 7.92 & 0.18 & 8.40 & 8.93 & 0.17 & 5.32 & 10.06 & 0.17 & 2.65 & 11.07 & 0.17 & 4.66 & 11.93 & 0.18 & 6.49 \\
\(\beta_{2}\) & 5.01 & 0.09 & 7.51 & 5.00 & 0.09 & 7.25 & 5.01 & 0.08 & 2.84 & 4.99 & 0.09 & 4.86 & 4.94 & 0.09 & 7.42 \\
\(\beta_{3}\) & 2.01 & 0.09 & 8.85 & 2.00 & 0.08 & 5.85 & 1.96 & 0.08 & 2.76 & 1.98 & 0.08 & 5.15 & 2.03 & 0.08 & 6.93 \\
\(\varphi^{2}\) & 1.08 & 0.08 & 5.06 & 1.03 & 0.07 & 4.22 & 1.07 & 0.08 & 3.52 & 1.04 & 0.07 & 3.79 & 1.07 & 0.08 & 5.27 \\
\(\sigma\) & 0.44 & 0.01 & 7.77 & 0.49 & 0.03 & 21.61 & 0.68 & 0.01 & 5.48 & 0.56 & 0.02 & 7.42 & 0.44 & 0.01 & 7.75 \\
\(\gamma\) & 2.95 & 0.07 & 7.23 & 1.46 & 0.08 & 20.86 & 0.02 & 0.02 & 10.54 & \(-1.27\) & 0.06 & 8.02 & \(-2.93\) & 0.07 & 7.07 \\
\hline
\end{tabular}
\end{table}
Table 1: Posterior mean (mean), standard deviation (sd) and inefficiency factor (if) of the parameters in the family of FREQ models from nine simulation studies: SS1 (\(T_{i}=5,n=100\)), SS2 (\(T_{i}=5,n=250\)), SS3 (\(T_{i}=5,n=500\)), SS4 (\(T_{i}=10,n=100\)), SS5 (\(T_{i}=10,n=250\)), SS6 (\(T_{i}=10,n=500\)), SS7 (\(T_{i}=15,n=100\)), SS8 (\(T_{i}=15,n=250\)) and SS9 (\(T_{i}=15,n=500\)).

Figure 2: Trace plots of the MCMC draws for Simulation Study 1 (\(T_{i}=5,n=100\)) at the 25th quantile.

The results presented in Table 1 show that the posterior estimates of the regression coefficients \(\beta\) are close to the true values \((10,5,2)\), with small standard deviations, for all the considered quantiles and simulation studies. Therefore, the algorithm is successful in recovering the true values of the parameters. The inefficiency factors are also low, which indicates that the MCMC draws mix well and the algorithm is efficient. Moreover, the posterior estimates of \(\varphi^{2}\) and \(\sigma\) are also close to the true values with small standard deviations and low inefficiency factors. The posterior estimates of the shape parameter \(\gamma\) suggest that the data distribution has approximately zero skewness, i.e., the distribution is symmetric. For example, in Simulation Study 1, the posterior mean of \(\gamma\) at the 25th (75th) quantile is \(1.35\) (\(-1.42\)), which corresponds to a skewness of \(-0.17\) (\(0.20\)). This is in sharp contrast to the fixed skewness of \(1.64\) and \(-1.64\) in the REQ model. Additionally, the posterior mean at the 50th quantile is \(0.07\) with a high standard deviation (\(0.10\)), which implies zero skewness. Thus, all estimates of the skewness parameter point to a symmetric distribution, which is correct since the errors were generated from a symmetric logistic distribution. These results show that the FREQ model can correctly identify the skewness of the underlying distribution.

We next investigate whether this flexibility translates to a better model fit that may justify the additional effort required to estimate the FREQ model. For comparison, we estimate the REQ model using Algorithm 2 and calculate the marginal likelihood for the two models as described in Section 2.3. The log-marginal likelihoods are reported in Table 2. Looking at the results from Simulation Study 1, we see that the log-marginal likelihood is higher for the FREQ model at the 10th, 25th, 75th, and 90th quantiles, so the FREQ model is better supported by the data at those quantiles. Moreover, the difference between the marginal likelihoods of the FREQ and REQ models is larger at the 10th and 90th quantiles than at the 25th and 75th quantiles, which suggests that the gains from flexibility increase as we move towards the tails of the distribution. At the 50th quantile, the log-marginal likelihood is higher for the REQ framework, but here a direct comparison of marginal likelihoods may be misleading. This is because the posterior mean of \(\gamma\) is statistically equivalent to zero, thus pointing to a zero-skewness framework, i.e., the REQ model.
Therefore, both frameworks are essentially equivalent at the 50th quantile. Across the different simulation studies, we observe that the FREQ framework provides a better model fit in 33 out of 36 cases (91.67 percent) at the non-50th quantiles. At the 50th quantile, by contrast, the two models are equivalent, except in Simulation Study 4, where the REQ model provides a better fit because \(\gamma\) is statistically different from zero. In conclusion, the FREQ model tends to be a better model than its REQ counterpart at quantiles away from the median. The appropriate model (AL vs. GAL) will be application- and data-specific. However, in order for a researcher to have a more thorough understanding of the data distribution, both approaches should be considered and compared in a model comparison analysis. We conduct this exercise in the next section, where we study residential rental rates in the US.

\begin{table}
\begin{tabular}{l r r r r r}
\hline \hline
 & 10th qtl & 25th qtl & 50th qtl & 75th qtl & 90th qtl \\
\hline
SS1-FREQ & \(-5.930\) & \(-4.753\) & \(-5.439\) & \(-4.609\) & \(-6.554\) \\
SS1-REQ & \(-13.708\) & \(-6.146\) & \(-5.075\) & \(-6.563\) & \(-14.022\) \\
\hline
SS2-FREQ & \(-8.473\) & \(-6.850\) & \(-7.567\) & \(-6.960\) & \(-9.115\) \\
SS2-REQ & \(-16.731\) & \(-8.191\) & \(-6.904\) & \(-8.545\) & \(-17.178\) \\
\hline
SS3-FREQ & \(-8.911\) & \(-7.119\) & \(-8.274\) & \(-7.229\) & \(-9.179\) \\
SS3-REQ & \(-16.841\) & \(-8.329\) & \(-7.049\) & \(-8.359\) & \(-16.528\) \\
\hline
SS4-FREQ & \(-8.126\) & \(-6.576\) & \(-7.232\) & \(-6.919\) & \(-8.088\) \\
SS4-REQ & \(-14.175\) & \(-7.347\) & \(-6.356\) & \(-7.565\) & \(-13.316\) \\
\hline
SS5-FREQ & \(-9.583\) & \(-7.850\) & \(-8.880\) & \(-8.270\) & \(-9.925\) \\
SS5-REQ & \(-14.729\) & \(-8.521\) & \(-7.499\) & \(-8.769\) & \(-15.488\) \\
\hline
SS6-FREQ & \(-10.884\) & \(-9.113\) & \(-10.191\) & \(-9.263\) & \(-11.123\) \\
SS6-REQ & \(-15.512\) & \(-9.236\) & \(-8.409\) & \(-9.574\) & \(-16.039\) \\
\hline
SS7-FREQ & \(-9.786\) & \(-8.004\) & \(-8.620\) & \(-8.226\) & \(-9.941\) \\
SS7-REQ & \(-15.467\) & \(-9.128\) & \(-7.595\) & \(-8.607\) & \(-14.210\) \\
\hline
SS8-FREQ & \(-10.721\) & \(-8.998\) & \(-10.136\) & \(-9.508\) & \(-11.101\) \\
SS8-REQ & \(-14.931\) & \(-9.300\) & \(-8.450\) & \(-9.388\) & \(-15.075\) \\
\hline
SS9-FREQ & \(-12.034\) & \(-10.283\) & \(-11.187\) & \(-10.591\) & \(-12.385\) \\
SS9-REQ & \(-15.955\) & \(-10.139\) & \(-9.208\) & \(-10.410\) & \(-16.357\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Quantile log marginal likelihoods for the FREQ and REQ models in 9 simulation studies.

## 4 Application

The US housing sector has received considerable attention as a result of the Global Financial Crisis (GFC), when mortgage delinquencies and foreclosures increased. Homeownership rates fell from 69% in 2004 to 62% in 2016. Because of the nexus between house prices and residential rents (see Loewenstein and Willen (2023) for a recent study), a drop in homeownership is likely to increase demand for rental units. However, the GFC also featured a drop in household income and an increase in unemployment.3 The concurrence of these events, theoretically, leads to ambiguous effects on rental rates. In this application, we take an empirical approach and explore changes to residential rental rates in the post-GFC United States.
In particular, we examine how median rental prices in 14,533 zip codes are influenced by unemployment rates and mortgage policies. In studying rental rates, two concerns must be addressed: (1) heterogeneity, because regions of the US vary greatly by housing supply, income, and other factors that influence prices, and (2) skewness in the distribution of prices, because that distribution has a large right tail and significant outliers. Exhibiting these concerns, Figure 3 presents box plots of median rental rates in 5 states and the entire US for 2010 and 2016. Apparent from the figure are the dramatic differences across the states, where California's lower quartile is above most other states' upper quartile. Additionally, all states display a strong upper (right) skew. Thus, we employ our FREQ model to accommodate heterogeneity and allow for skewness flexibility in the error term. The performance of the FREQ model is tested relative to the REQ model using our novel marginal likelihood approach. This exercise provides important insights for understanding how the data support the different specifications across various quantiles.

Figure 3: Box plots of median rental rates in 5 states and the entire US for 2010 and 2016. Note that the y-axis has been capped at $8000 for better visual representation.

There is a vast literature on house prices and rental rates both before and after the Global Financial Crisis. Studies have examined various price determinants, including zoning, regulation, and housing supply (Glaeser et al., 2020; Jackson, 2018), income differences (Quigley and Raphael, 2004), and tax policy (Chatterjee and Eyigungor, 2015). Many of these studies highlight how heterogeneity in city-specific features, such as average income and land availability, can lead to discrepancies in the effect of the boom and bust on prices. Additionally, a few international papers have focused on the distribution or quantiles of rental prices (Thomschke, 2015; Marz et al., 2016; Waltl, 2018). We contribute to this literature by implementing a novel quantile regression approach to study rental rates in the United States. Moving beyond mean regression and exploiting large differences in population, income, and economic activity provides a deeper understanding of the determinants of rental markets. Further, our new methodology and model comparison approaches allow us to uncover potential biases that may result from ignored heterogeneity and erroneous distributional assumptions.

### Data

We construct a novel zip-code-level data set where our outcome variable of interest, \(y_{it}\), is the median monthly rental price of zip code \(i\) at year \(t\). The sample includes \(n=14,533\) zip codes in the United States from 2010-2016 (\(T=7\)). The residential rental price data come from the Zillow Rental Index (ZRENT). Our covariates include annual controls for each zip code's population, demographics, socioeconomic status, agriculture, property ownership, mortgage characteristics, and unemployment. The covariates are constructed from the Statistics of Income (SOI) Tax Stats, Individual Income Tax Statistics, provided by the Internal Revenue Service (IRS). Table 3 presents descriptions and summary statistics of our variables. We proxy for "population" using the total number of tax returns filed in the zip code, and the remaining variables are generally a function of that measure.
We model the data using the FREQ and REQ models, where \(y_{it}=LnRent_{it}\), \(x_{it}\) includes the remaining variables in Table 3, and \(z_{it}\) is a constant to control for zip-code-level heterogeneity. Heterogeneity is an important concern. While a researcher can control for a host of demographic, socioeconomic, and location characteristics, much is left unobserved. City-level policies, nearby neighborhood spillovers, and commuting effects, all of which can heavily influence rental prices, may enter the error term. Thus, our specifications for both models include zip code random effects. Additionally, we include time dummies to capture aggregate changes to prices.

### Training Sample Priors

Prior distributions play an important role in Bayesian inference, particularly in model comparison, where marginal likelihoods, and hence their ratios, the Bayes factors, become arbitrary with improper priors or sensitive to the prior with formally proper but increasingly diffuse priors. Therefore, we employ a training sample approach where we take 10% of our data as a training sample and retain the remainder as a comparison sample. The data in the training sample are used to construct a first-stage posterior distribution, which is then used as a proper informative training sample prior when analyzing the comparison sample. Information from the training sample is not lost, as it is now part of the prior density used in evaluating the marginal likelihood over the remaining 90% of the data.4

Footnote 4: When estimating the model on the training sample, we used the following relatively uninformative priors: \(\beta\sim N(0_{k},25I_{k})\), \(\varphi^{2}\sim IG(12/2,10/2)\), \(\sigma\sim IG(10/2,8/2)\) and \(\gamma\sim\text{Unif}(L,U)\), where \((L,U)\) are obtained as mentioned in Section 2.

\begin{table}
\begin{tabular}{l l r r r r}
\hline \hline
Variable & Description & Mean & SD & Max & Min \\
\hline
LnRent & Median monthly rental price & 7.177 & 0.378 & 9.745 & 6.082 \\
SSBfrac & Fraction of the population receiving social security & 0.139 & 0.059 & 0.814 & 0 \\
Farmfrac & Fraction of the population receiving farming credits & 0.019 & 0.033 & 0.315 & 0 \\
REfrac & Fraction of the population with real estate taxes & 0.273 & 0.134 & 0.821 & 0 \\
HMRate & Fraction of the population with home mortgage deductions & 0.236 & 0.116 & 0.786 & 0 \\
AltMinRate & Fraction of the population paying alternative minimum taxes & 0.027 & 0.050 & 0.444 & 0 \\
EnergyRate & Fraction of the population receiving energy tax credits & 0.024 & 0.019 & 0.147 & 0 \\
EITCRate & Fraction of the population receiving earned income tax credits & 0.184 & 0.099 & 0.711 & 0 \\
UnempRate & Fraction of the population receiving unemployment compensation & 0.072 & 0.043 & 0.518 & 0.001 \\
lAvgAGI & ln-average adjusted gross income & 4.040 & 0.458 & 7.899 & 1.610 \\
lreturn & ln-number of returns filed (proxy for population) & 8.491 & 1.103 & 10.902 & 4.605 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Data summary.

### Results

Before getting to the parameter estimates, we bring attention to the marginal likelihood results. We compare the performance of the FREQ model, relative to the REQ model, for the full sample of US zip codes, as well as for several state-specific models (Arizona, California, and Illinois). We consider these smaller samples of states to empirically explore the model fit when \(n\) is smaller and when there are varying degrees of heterogeneity and skewness in the sample.
Table 4 presents the log-marginal likelihood estimates for the four samples across five quantiles. We find that the FREQ model has a higher marginal likelihood than the REQ model in all samples at the 10th, 25th, 75th, and 90th quantiles. The differences, particularly further in the tails, are quite dramatic, giving the FREQ model a posterior model probability of \(\approx 1\) over the REQ model. At the 50th quantile, we find that the REQ is the favored model in all samples except California. In fact, at the 50th quantile, \(\gamma\) is statistically equivalent to 0 in the FREQ model (for all states except California), implying the AL parameterization is more appropriate. Overall, these results demonstrate strong support from the data for the additional flexibility of the FREQ model. Researchers especially interested in the tails of the outcome distribution should employ this more flexible approach in their applied work to improve model fit.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & 10th qtl & 25th qtl & 50th qtl & 75th qtl & 90th qtl \\
\hline
Arizona\(-\)FREQ & \(-371.53\) & \(-318.93\) & \(-305.80\) & \(-322.42\) & \(-388.23\) \\
Arizona\(-\)REQ & \(-540.59\) & \(-350.09\) & \(-301.39\) & \(-356.09\) & \(-543.64\) \\
\hline
California\(-\)FREQ & \(-394.32\) & \(-333.22\) & \(-316.31\) & \(-323.32\) & \(-368.53\) \\
California\(-\)REQ & \(-633.98\) & \(-387.15\) & \(-327.43\) & \(-380.68\) & \(-610.52\) \\
\hline
Illinois\(-\)FREQ & \(-357.94\) & \(-305.63\) & \(-289.33\) & \(-303.70\) & \(-355.26\) \\
Illinois\(-\)REQ & \(-535.54\) & \(-337.10\) & \(-284.03\) & \(-339.78\) & \(-550.31\) \\
\hline
US\(-\)FREQ & \(-398.24\) & \(-350.69\) & \(-333.37\) & \(-345.48\) & \(-392.04\) \\
US\(-\)REQ & \(-600.87\) & \(-379.30\) & \(-328.52\) & \(-378.16\) & \(-609.34\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Logarithm of marginal likelihood across 5 quantiles (10th, 25th, 50th, 75th, and 90th) within the FREQ and REQ framework for Arizona, California, Illinois, and the entire US.

Turning attention to the parameter estimates, Table 5 presents the results for the FREQ model and Table 6 presents the results for the REQ model. We find that unemployment is positively associated with residential rental rates. All else equal, a 1 percentage point increase in the fraction of the population that receives unemployment compensation is associated with an increase in monthly residential rental rates of 0.33% (at the 50th quantile). In a 2012 speech to the National Association of Home Builders, Ben Bernanke, then Chairman of the Federal Reserve, stated that "High unemployment and uncertain job prospects may have reduced the willingness of some households to commit to homeownership." By not committing to homeownership, individuals shift their preferences toward renting. The increase in demand for rental units puts upward pressure on prices, explaining the positive result. We find that the effect gets incrementally larger at higher quantiles, i.e., in regions of the US that are more expensive. As an economy recovers from a crisis, attention should be paid to the price of rental units. Policymakers may want to focus on limiting the upward pressure on these prices, as individuals and families may have a more difficult time recovering from the economic downturn if rents increase. We also find negative effects from our home mortgage variable (HMrate).
Thus, the fraction of the population taking home mortgage tax deductions is negatively associated with rental prices. The ability to deduct mortgage interest on individual income taxes makes homeownership more attractive than renting. This result is important in light of the Tax Cuts and Jobs Act (TCJA), which was signed into law in the United States in 2017. The Act lowered the mortgage deduction limit and put a limit on how much an individual can subtract from their taxable income. Our model results suggest that this decrease in home mortgage deductions puts upward pressure on rental prices, a costly unintended consequence. Our results are in line with Hembre and Dantas (2022), who find that reductions in homeownership subsidies increase rental payments.

The other results in Tables 5 and 6 largely align with intuition. Specifically, we find that income is positively associated with rental rates. Average adjusted gross income has a positive effect across the quantiles, and the fraction of the population paying alternative minimum taxes (i.e., high-income taxpayers) is also positively associated with rental rates.

\begin{table}
\begin{tabular}{l r r r r r r r r r r}
\hline \hline
 & \multicolumn{2}{c}{10th quantile} & \multicolumn{2}{c}{25th quantile} & \multicolumn{2}{c}{50th quantile} & \multicolumn{2}{c}{75th quantile} & \multicolumn{2}{c}{90th quantile} \\
\cline{2-11}
 & mean & sd & mean & sd & mean & sd & mean & sd & mean & sd \\
\hline
Intercept & 5.64 & 0.02 & 5.64 & 0.02 & 5.66 & 0.02 & 5.70 & 0.02 & 5.77 & 0.02 \\
SSBfrac & \(-0.89\) & 0.02 & \(-0.94\) & 0.02 & \(-0.94\) & 0.02 & \(-0.95\) & 0.02 & \(-0.96\) & 0.02 \\
Farmfrac & \(-0.42\) & 0.05 & \(-0.40\) & 0.05 & \(-0.39\) & 0.04 & \(-0.39\) & 0.05 & \(-0.42\) & 0.05 \\
REfrac & 0.54 & 0.03 & 0.51 & 0.03 & 0.50 & 0.03 & 0.50 & 0.03 & 0.51 & 0.03 \\
HMrate & \(-0.30\) & 0.04 & \(-0.31\) & 0.04 & \(-0.28\) & 0.03 & \(-0.30\) & 0.03 & \(-0.31\) & 0.04 \\
AltMinRate & 2.10 & 0.04 & 2.07 & 0.04 & 2.02 & 0.04 & 2.05 & 0.04 & 2.09 & 0.04 \\
EnergyRate & 0.35 & 0.03 & 0.40 & 0.02 & 0.40 & 0.02 & 0.42 & 0.02 & 0.41 & 0.03 \\
EITCrate & \(-0.61\) & 0.02 & \(-0.64\) & 0.02 & \(-0.64\) & 0.02 & \(-0.65\) & 0.02 & \(-0.67\) & 0.02 \\
UnempRate & 0.29 & 0.02 & 0.33 & 0.02 & 0.33 & 0.01 & 0.33 & 0.01 & 0.34 & 0.02 \\
lAvgAGI & 0.22 & 0.00 & 0.24 & 0.00 & 0.24 & 0.00 & 0.25 & 0.00 & 0.25 & 0.00 \\
lreturn & 0.07 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 \\
y11 & 0.01 & 0.00 & 0.01 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 & \(-0.01\) & 0.00 \\
y12 & 0.03 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.01 & 0.00 \\
y13 & 0.08 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 & 0.05 & 0.00 \\
y14 & 0.10 & 0.00 & 0.09 & 0.00 & 0.09 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 \\
y15 & 0.12 & 0.00 & 0.11 & 0.00 & 0.11 & 0.00 & 0.10 & 0.00 & 0.09 & 0.00 \\
y16 & 0.12 & 0.00 & 0.12 & 0.00 & 0.12 & 0.00 & 0.12 & 0.00 & 0.11 & 0.00 \\
\(\sigma\) & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\
\(\gamma\) & 2.28 & 0.31 & 1.03 & 0.11 & \(-0.01\) & 0.01 & \(-0.87\) & 0.07 & \(-2.25\) & 0.28 \\
\(\varphi^{2}\) & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Results for the entire US data assuming a GAL error distribution – Posterior mean (mean) and posterior standard deviation (sd) of the parameters.
By contrast, the fraction of the population claiming earned income tax credits (EITC), which represents low-income working individuals, is negatively associated with rental rates. Additionally, the year indicators, which are relative to 2010, are positive and get incrementally larger, capturing aggregate increases in prices.

\begin{table}
\begin{tabular}{l r r r r r r r r r r}
\hline \hline
 & \multicolumn{2}{c}{10th quantile} & \multicolumn{2}{c}{25th quantile} & \multicolumn{2}{c}{50th quantile} & \multicolumn{2}{c}{75th quantile} & \multicolumn{2}{c}{90th quantile} \\
\cline{2-11}
 & mean & sd & mean & sd & mean & sd & mean & sd & mean & sd \\
\hline
Intercept & 5.70 & 0.02 & 5.67 & 0.02 & 5.66 & 0.02 & 5.76 & 0.02 & 5.86 & 0.02 \\
SSBfrac & \(-0.75\) & 0.02 & \(-0.83\) & 0.02 & \(-0.94\) & 0.02 & \(-0.94\) & 0.02 & \(-0.89\) & 0.02 \\
Farmfrac & \(-0.43\) & 0.04 & \(-0.42\) & 0.04 & \(-0.39\) & 0.04 & \(-0.40\) & 0.05 & \(-0.44\) & 0.05 \\
REfrac & 0.50 & 0.03 & 0.52 & 0.03 & 0.49 & 0.03 & 0.47 & 0.03 & 0.47 & 0.03 \\
HMrate & \(-0.19\) & 0.03 & \(-0.24\) & 0.03 & \(-0.28\) & 0.03 & \(-0.25\) & 0.03 & \(-0.24\) & 0.03 \\
AltMinRate & 2.10 & 0.04 & 2.05 & 0.04 & 2.03 & 0.04 & 2.03 & 0.04 & 1.99 & 0.04 \\
EnergyRate & 0.17 & 0.02 & 0.29 & 0.02 & 0.40 & 0.02 & 0.40 & 0.02 & 0.35 & 0.02 \\
EITCrate & \(-0.57\) & 0.02 & \(-0.59\) & 0.02 & \(-0.64\) & 0.02 & \(-0.68\) & 0.02 & \(-0.69\) & 0.02 \\
UnempRate & 0.19 & 0.02 & 0.26 & 0.01 & 0.33 & 0.01 & 0.34 & 0.01 & 0.36 & 0.02 \\
lAvgAGI & 0.18 & 0.00 & 0.21 & 0.00 & 0.24 & 0.00 & 0.24 & 0.00 & 0.24 & 0.00 \\
lreturn & 0.08 & 0.00 & 0.07 & 0.00 & 0.07 & 0.00 & 0.06 & 0.00 & 0.06 & 0.00 \\
y11 & 0.03 & 0.00 & 0.02 & 0.00 & 0.01 & 0.00 & \(-0.01\) & 0.00 & \(-0.02\) & 0.00 \\
y12 & 0.04 & 0.00 & 0.04 & 0.00 & 0.02 & 0.00 & 0.01 & 0.00 & \(-0.01\) & 0.00 \\
y13 & 0.09 & 0.00 & 0.08 & 0.00 & 0.07 & 0.00 & 0.04 & 0.00 & 0.03 & 0.00 \\
y14 & 0.12 & 0.00 & 0.11 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 & 0.05 & 0.00 \\
y15 & 0.13 & 0.00 & 0.12 & 0.00 & 0.11 & 0.00 & 0.09 & 0.00 & 0.07 & 0.00 \\
y16 & 0.13 & 0.00 & 0.13 & 0.00 & 0.12 & 0.00 & 0.11 & 0.00 & 0.10 & 0.00 \\
\(\sigma\) & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.01 & 0.00 \\
\(\varphi^{2}\) & 0.05 & 0.00 & 0.05 & 0.00 & 0.04 & 0.00 & 0.05 & 0.00 & 0.05 & 0.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Results for the entire US data assuming an AL error distribution – Posterior mean (mean) and posterior standard deviation (sd) of the parameters.

#### 4.3.1 Additional Considerations

In this section, we present the FREQ and REQ results when the sample is restricted to zip codes in Illinois. The model specifications remain the same as before. We chose Illinois (IL) for two reasons: (1) we wish to explore empirical parameter estimates in a smaller sample setting, and (2) IL provides extensive variation in land value, from expensive metropolitan regions (e.g., Chicago) to rural farming areas. Table 7 presents the FREQ results and Table 8 presents the REQ results. Importantly, recall from Table 4 that the data support the FREQ model over the REQ model at all quantiles except the 50th. In looking at the results for the 10th quantile, a major discrepancy between the FREQ and REQ models is apparent.
The REQ model results suggest that unemployment compensation is negatively associated with residential rental prices. That is, in regions of IL that are inexpensive (the 10th quantile), an increase in the fraction of the population receiving unemployment compensation should decrease rental rates. However, the FREQ model results suggest that the effect of unemployment is not statistically different from zero (i.e., unemployment has no effect on rental prices). In considering the marginal likelihood results (Table 4), we know that the posterior model probability of the FREQ model is approximately 1, relative to the REQ model, demonstrating that the data overwhelmingly support the specification whose error skewness is made flexible by the GAL distribution. Thus, the results of the FREQ model are validated, whereas those of the REQ model are negated. This example demonstrates the dangers of ignoring skewness in the error distribution. Had a researcher or policymaker solely considered a model with the AL distributional assumption (which is commonly done), they would have arrived at an erroneous conclusion about the relationship between unemployment and rental rates.

\begin{table}
\begin{tabular}{l r r r r r r r r r r}
\hline \hline
 & \multicolumn{2}{c}{10th quantile} & \multicolumn{2}{c}{25th quantile} & \multicolumn{2}{c}{50th quantile} & \multicolumn{2}{c}{75th quantile} & \multicolumn{2}{c}{90th quantile} \\
\cline{2-11}
 & mean & sd & mean & sd & mean & sd & mean & sd & mean & sd \\
\hline
Intercept & 5.47 & 0.12 & 5.44 & 0.12 & 5.42 & 0.12 & 5.49 & 0.12 & 5.63 & 0.13 \\
SSBfrac & \(-1.37\) & 0.13 & \(-1.52\) & 0.13 & \(-1.62\) & 0.13 & \(-1.74\) & 0.13 & \(-1.74\) & 0.13 \\
Farmfrac & 1.06 & 0.24 & 1.27 & 0.26 & 1.43 & 0.26 & 1.38 & 0.25 & 1.16 & 0.26 \\
REfrac & 0.64 & 0.17 & 0.75 & 0.17 & 0.84 & 0.18 & 0.93 & 0.18 & 0.94 & 0.18 \\
HMrate & \(-0.80\) & 0.18 & \(-0.90\) & 0.18 & \(-0.97\) & 0.18 & \(-1.02\) & 0.19 & \(-0.94\) & 0.18 \\
AltMinRate & 0.94 & 0.18 & 0.79 & 0.19 & 0.65 & 0.20 & 0.61 & 0.20 & 0.66 & 0.20 \\
EnergyRate & 1.09 & 0.12 & 1.04 & 0.12 & 0.96 & 0.12 & 0.82 & 0.12 & 0.70 & 0.13 \\
EITCrate & \(-0.51\) & 0.10 & \(-0.51\) & 0.10 & \(-0.52\) & 0.10 & \(-0.52\) & 0.10 & \(-0.47\) & 0.10 \\
UnempRate & \(-0.05\) & 0.09 & \(-0.04\) & 0.09 & \(-0.02\) & 0.09 & 0.03 & 0.09 & \(-0.01\) & 0.09 \\
lAvgAGI & 0.27 & 0.02 & 0.28 & 0.02 & 0.30 & 0.02 & 0.31 & 0.02 & 0.30 & 0.02 \\
lreturn & 0.09 & 0.01 & 0.10 & 0.01 & 0.10 & 0.01 & 0.09 & 0.01 & 0.08 & 0.01 \\
y11 & 0.00 & 0.01 & \(-0.00\) & 0.01 & \(-0.01\) & 0.01 & \(-0.02\) & 0.01 & \(-0.03\) & 0.01 \\
y12 & 0.02 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & \(-0.01\) & 0.01 & \(-0.02\) & 0.01 \\
y13 & 0.04 & 0.01 & 0.04 & 0.01 & 0.03 & 0.01 & 0.01 & 0.01 & \(-0.01\) & 0.01 \\
y14 & 0.04 & 0.01 & 0.03 & 0.01 & 0.03 & 0.01 & 0.01 & 0.01 & \(-0.01\) & 0.01 \\
y15 & 0.03 & 0.01 & 0.03 & 0.01 & 0.02 & 0.01 & 0.02 & 0.01 & \(-0.00\) & 0.01 \\
y16 & 0.02 & 0.01 & 0.02 & 0.01 & 0.02 & 0.01 & 0.02 & 0.01 & 0.00 & 0.01 \\
\(\sigma\) & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 \\
\(\gamma\) & 1.72 & 0.07 & 0.54 & 0.04 & \(-0.11\) & 0.04 & \(-0.85\) & 0.05 & \(-2.01\) & 0.08 \\
\(\varphi^{2}\) & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Results for Illinois assuming a GAL error distribution – Posterior mean (mean) and posterior standard deviation (sd) of the parameters.
We caution against this approach and instead motivate \begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline & \multicolumn{2}{c}{10th quantile} & \multicolumn{2}{c}{25th quantile} & \multicolumn{2}{c}{50th quantile} & \multicolumn{2}{c}{75th quantile} & \multicolumn{2}{c}{90th quantile} \\ \cline{2-11} & mean & sd & mean & sd & mean & sd & mean & sd & mean & sd \\ \hline Intercept & 5.52 & 0.12 & 5.52 & 0.03 & 5.41 & 0.12 & 5.54 & 0.13 & 5.80 & 0.13 \\ SSBfrac & \(-1.08\) & 0.14 & \(-1.35\) & 0.13 & \(-1.67\) & 0.13 & \(-1.75\) & 0.14 & \(-1.53\) & 0.13 \\ Farmfrac & 0.09 & 0.22 & 1.11 & 0.24 & 1.48 & 0.25 & 1.32 & 0.25 & 0.92 & 0.24 \\ REfrac & 0.48 & 0.18 & 0.60 & 0.17 & 0.89 & 0.18 & 0.94 & 0.18 & 0.90 & 0.17 \\ HMrate & \(-0.60\) & 0.18 & \(-0.76\) & 0.18 & \(-1.00\) & 0.18 & \(-0.89\) & 0.19 & \(-0.73\) & 0.18 \\ AltMinRate & 1.09 & 0.19 & 0.96 & 0.18 & 0.58 & 0.20 & 0.47 & 0.21 & 0.69 & 0.22 \\ EnergyRate & 1.23 & 0.13 & 1.17 & 0.11 & 0.91 & 0.12 & 0.61 & 0.12 & 0.40 & 0.12 \\ EITCrate & \(-0.48\) & 0.10 & \(-0.51\) & 0.10 & \(-0.52\) & 0.10 & \(-0.43\) & 0.10 & \(-0.23\) & 0.10 \\ UnempRate & \(-0.22\) & 0.09 & \(-0.14\) & 0.09 & \(-0.01\) & 0.01 & \(-0.05\) & 0.09 & \(-0.09\) & 0.09 \\ lAvgAGI & 0.23 & 0.02 & 0.26 & 0.02 & 0.30 & 0.02 & 0.31 & 0.02 & 0.29 & 0.03 \\ lreturn & 0.10 & 0.01 & 0.10 & 0.01 & 0.10 & 0.01 & 0.08 & 0.01 & 0.06 & 0.01 \\ y11 & 0.02 & 0.01 & 0.01 & 0.01 & \(-0.01\) & 0.01 & \(-0.04\) & 0.01 & \(-0.06\) & 0.01 \\ y12 & 0.04 & 0.01 & 0.02 & 0.01 & 0.00 & 0.01 & \(-0.03\) & 0.01 & \(-0.05\) & 0.01 \\ y13 & 0.06 & 0.01 & 0.05 & 0.01 & 0.02 & 0.01 & \(-0.01\) & 0.01 & \(-0.03\) & 0.01 \\ y14 & 0.06 & 0.01 & 0.05 & 0.01 & 0.02 & 0.01 & 0.00 & 0.01 & \(-0.03\) & 0.01 \\ y15 & 0.05 & 0.01 & 0.04 & 0.01 & 0.02 & 0.01 & 0.00 & 0.01 & \(-0.02\) & 0.01 \\ y16 & 0.03 & 0.01 & 0.03 & 0.01 & 0.02 & 0.01 & 0.01 & 0.01 & \(-0.01\) & 0.01 \\ \(\sigma\) & 0.01 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.02 & 0.00 & 0.01 & 0.00 \\ \(\varphi^{2}\) & 0.05 & 0.00 & 0.05 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 & 0.04 & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 8: Results for Illinois assuming an AL error distribution – Posterior mean (mean) and posterior standard deviation (sd) of the parameters. researchers to use model comparison to uncover the best model and to especially consider the GAL approach at higher and lower quantiles, where the benefits are most dramatic. ## 5 Conclusion This article has considered the Bayesian analysis of a random effects quantile regression model for panel data under the generalized asymmetric Laplace distribution, which eliminates the dependence of distributional skewness on the quantile parameter. New computationally efficient MCMC sampling algorithms have been developed for parameter estimation, as well as model comparison, in both the FREQ and REQ versions of the model. Key to the improved properties of our posterior simulator is the idea of carefully designed parameter blocking. Various features of the proposed modeling framework and estimation methodology have been studied in simulation studies. The paper has also devoted considerable attention to studying the behavior of U.S. residential rental rates following the Global Financial Crisis. Our methodology fits this purpose very well due to the strong right skew of rental rates and the extensive heterogeneity at the zip-code level across different regions. Our results reveal that unemployment has positive effects on rental rates and mortgage deductions have negative effects. 
Regions of the U.S. characterized by high unemployment also exhibited declines in homeownership, leading to an increase in demand for rental units and putting upward pressure on prices. The negative effect of mortgage deductions sheds light on the unintended consequences of the Tax Cuts and Jobs Act (TCJA) as a potential contributor to the large increases in rental prices since 2017. Based on our model comparisons, we find that the data overwhelmingly support the FREQ model in various subsamples and at nearly all quantiles, especially away from the median, suggesting that researchers interested in the tails of the distribution could find the more flexible GAL modeling framework decidedly more useful. ## Appendix A Conditional Densities in the FREQ model We utilize the joint posterior density of the FREQ model, given by Equation (8), to derive the conditional posterior densities of our objects of interest. The principle behind the derivation is to collect all terms involving the parameter of interest and identify its distribution, while holding all other parameters fixed. The derivation of the conditional posteriors below follows the sequence in Algorithm 1. **(1)** The parameters \((\beta,\alpha)\) are sampled in a block to account for possible correlation between the parameters and reduce autocorrelation in the MCMC draws. The joint posterior of \((\beta,\alpha)\) can be expressed as, \[\pi(\beta,\alpha|y,\nu,h,\sigma,\gamma,\varphi^{2}) =\pi(\beta|y,\nu,h,\sigma,\gamma,\varphi^{2})\pi(\alpha|y,\beta, \nu,h,\sigma,\gamma,\varphi^{2})\] \[=\pi(\beta|y,\nu,h,\sigma,\gamma,\varphi^{2})\prod_{i=1}^{n}\pi( \alpha_{i}|y,\beta,\nu,h,\sigma,\gamma,\varphi^{2}).\] We first sample \(\beta\) marginally of \(\alpha\) and then draw \(\alpha\) conditional on \(\beta\) and the other model parameters. (a) To find the conditional posterior density \(\pi(\beta|y,\nu,h,\sigma,\gamma,\varphi^{2})\), we integrate out \((\alpha_{i},u_{i})\) from the model, \[y_{i}=X_{i}\beta+Z_{i}\alpha_{i}+A\nu_{i}+C|\gamma|h_{i}+\Lambda_{i}^{1/2}u_{i},\] where \(\alpha_{i}\sim N(0_{l},\varphi^{2}I_{l})\) and \(u_{i}\sim N(0_{T_{i}},I_{T_{i}})\). This implies that \(y_{i}|\beta,\nu,h,\gamma,\sigma,\varphi^{2}\) follows a normal distribution with mean, \[E(y_{i})=X_{i}\beta+A\nu_{i}+C|\gamma|h_{i},\] and covariance, \[V_{i} =E\left[(y_{i}-E(y_{i}))(y_{i}-E(y_{i}))^{\prime}\right]\] \[=E\left[(Z_{i}\alpha_{i}+\Lambda_{i}^{1/2}u_{i})(Z_{i}\alpha_{i}+ \Lambda_{i}^{1/2}u_{i})^{\prime}\right]\] \[=E\left[Z_{i}\alpha_{i}\alpha_{i}^{\prime}Z_{i}^{\prime}+\Lambda_ {i}^{1/2}u_{i}u_{i}^{\prime}\Lambda_{i}^{1/2}\right]\] \[=\varphi^{2}Z_{i}Z_{i}^{\prime}+\Lambda_{i},\] where the cross terms vanish because \(\alpha_{i}\) and \(u_{i}\) are independent with zero means; i.e., \(y_{i}|\beta,\nu,h,\sigma,\gamma,\varphi^{2}\sim N(X_{i}\beta+A\nu_{i}+C|\gamma |h_{i},\ \varphi^{2}Z_{i}Z_{i}^{\prime}+\Lambda_{i})\) for \(i=1,2,\cdots,n\).
Thus, it follows that the conditional posterior of \(\beta\) can be derived as, \[\pi(\beta|y,\nu,h,\sigma,\gamma,\varphi^{2})\propto f(y|\beta,\nu,h, \sigma,\gamma,\varphi^{2})\times\pi(\beta)\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[\sum_{i=1}^{n}(y_{i}-X_{i} \beta-A\nu_{i}-C|\gamma|h_{i})^{\prime}V_{i}^{-1}(y_{i}-X_{i}\beta-A\nu_{i}-C| \gamma|h_{i})\] \[\qquad\qquad\qquad+(\beta-\beta_{0})^{\prime}B_{0}^{-1}(\beta- \beta_{0})\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[-\sum_{i=1}^{n}(y_{i}-A\nu _{i}-C|\gamma|h_{i})^{\prime}V_{i}^{-1}X_{i}\beta-\beta^{\prime}\sum_{i=1}^{n} X_{i}^{\prime}V_{i}^{-1}(y_{i}-A\nu_{i}-C|\gamma|h_{i})\] \[\qquad\qquad\qquad+\beta^{\prime}\left(\sum_{i=1}^{n}X_{i}^{ \prime}V_{i}^{-1}X_{i}\right)\beta+\beta^{\prime}B_{0}^{-1}\beta-\beta^{ \prime}B_{0}^{-1}\beta_{0}-\beta_{0}^{\prime}B_{0}^{-1}\beta\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[\beta^{\prime}\tilde{B}^ {-1}\beta-\beta^{\prime}\tilde{B}^{-1}\tilde{\beta}-\tilde{\beta}^{\prime} \tilde{B}^{-1}\beta+\tilde{\beta}^{\prime}\tilde{B}^{-1}\tilde{\beta}-\tilde{ \beta}^{\prime}\tilde{B}^{-1}\tilde{\beta}\bigg]\bigg\}\] \[\propto\exp\Big\{-\frac{1}{2}(\beta-\tilde{\beta})^{\prime} \tilde{B}^{-1}(\beta-\tilde{\beta})\Big\},\] where the posterior precision matrix \(\tilde{B}^{-1}\) and the posterior mean \(\tilde{\beta}\) are defined as follows: \[\tilde{B}^{-1}=\bigg(\sum_{i=1}^{n}X_{i}^{\prime}V_{i}^{-1}X_{i}+B_{0}^{-1 }\bigg)\ \ \text{and}\ \ \tilde{\beta}=\tilde{B}\bigg(\sum_{i=1}^{n}X_{i}^{\prime}V_{i}^{-1}(y_{i}-A \nu_{i}-C|\gamma|h_{i})+B_{0}^{-1}\beta_{0}\bigg).\] Hence, the conditional posterior is a normal distribution and \(\beta|y,\nu,h,\sigma,\gamma,\varphi^{2}\sim N(\tilde{\beta},\tilde{B})\). (b) The conditional posterior distribution of \(\alpha_{i}\) can be derived as, \[\pi(\alpha_{i}|y,\beta,\nu,h,\sigma,\gamma,\varphi^{2})\propto f (y_{i}|\beta,\alpha_{i},\nu,h,\sigma,\gamma,\varphi^{2})\times\pi(\alpha_{i}| \varphi^{2})\] \[\propto\exp\bigg\{-\frac{1}{2}\big[(y_{i}-X_{i}\beta-Z_{i}\alpha_{i}-A\nu_{i} -C|\gamma|h_{i})^{\prime}\Lambda_{i}^{-1}(y_{i}-X_{i}\beta-Z_{i}\alpha_{i}-A \nu_{i}-C|\gamma|h_{i})\big]\] \[\qquad\qquad\qquad-\frac{1}{2}\frac{\alpha_{i}^{\prime}\alpha_{i }}{\varphi^{2}}\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[-(y_{i}-X_{i}\beta-A\nu_{ i}-C|\gamma|h_{i})^{\prime}\Lambda_{i}^{-1}Z_{i}\alpha_{i}+\alpha_{i}^{\prime}Z_{i}^{ \prime}\Lambda_{i}^{-1}Z_{i}\alpha_{i}\] \[\qquad\qquad\qquad-\alpha_{i}^{\prime}Z_{i}^{\prime}\Lambda_{i}^{ -1}(y_{i}-X_{i}\beta-A\nu_{i}-C|\gamma|h_{i})+\frac{\alpha_{i}^{\prime}\alpha_{ i}}{\varphi^{2}}\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\Big[\alpha_{i}^{\prime}\tilde{A}_{i}^{ -1}\alpha_{i}-\alpha_{i}^{\prime}\tilde{A}_{i}^{-1}\tilde{a}_{i}-\tilde{a}_{i}^ {\prime}\tilde{A}_{i}^{-1}\alpha_{i}+\tilde{a}_{i}^{\prime}\tilde{A}_{i}^{-1 }\tilde{a}_{i}-\tilde{a}_{i}^{\prime}\tilde{A}_{i}^{-1}\tilde{a}_{i}\Big]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}(\alpha_{i}-\tilde{a}_{i})^{ \prime}\tilde{A}_{i}^{-1}(\alpha_{i}-\tilde{a}_{i})\bigg\}\,,\] where the posterior precision \(\tilde{A}_{i}^{-1}\) and the posterior mean \(\tilde{a}_{i}\) are as follows: \[\tilde{A}_{i}^{-1}=\bigg(Z_{i}^{\prime}\Lambda_{i}^{-1}Z_{i}+\frac{I_{l}}{ \varphi^{2}}\bigg)\ \ \ \text{and}\ \ \ \tilde{a}_{i}=\tilde{A}_{i}\Big(Z_{i}^{\prime}\Lambda_{i}^{-1}(y_{i}-X_{i} \beta-A\nu_{i}-C|\gamma|h_{i})\Big).\] Hence, the conditional posterior is a normal distribution and
\(\alpha_{i}|y,\beta,\nu,h,\sigma,\gamma,\varphi^{2}\sim N(\tilde{a}_{i},\tilde{A}_ {i})\) for \(i=1,2,\cdots,n\). **(2)** The conditional posterior distribution of \(\varphi^{2}\) is relatively simple and is derived as shown below, \[\pi(\varphi^{2}|y,\alpha) \propto\prod_{i=1}^{n}\left[\pi(\alpha_{i}|\varphi^{2})\right] \times\pi(\varphi^{2})\] \[\propto\prod_{i=1}^{n}\left[(\varphi^{2})^{-l/2}\exp\left\{- \frac{1}{2}\frac{\alpha_{i}^{\prime}\alpha_{i}}{\varphi^{2}}\right\}\right] \times\left(\frac{1}{\varphi^{2}}\right)^{\frac{c_{1}}{2}+1}\exp\left\{-\frac {d_{1}}{2\varphi^{2}}\right\}\] \[\propto(\varphi^{2})^{-(nl+c_{1}+2)/2}\exp\left\{-\frac{1}{2 \varphi^{2}}\left(\sum_{i=1}^{n}\alpha_{i}^{\prime}\alpha_{i}+d_{1}\right) \right\},\] which is recognized as the kernel of an inverse-Gamma distribution. Hence, \(\varphi^{2}|y,\alpha\sim IG(\tilde{c}_{1}/2,\tilde{d}_{1}/2)\), where \(\tilde{c}_{1}=nl+c_{1}\) and \(\tilde{d}_{1}=\sum_{i=1}^{n}\alpha_{i}^{\prime}\alpha_{i}+d_{1}\). **(3)** The parameters \((\sigma,\gamma)\) are jointly sampled marginally of \((\nu,h)\) from the joint posterior, which is proportional to the likelihood \(f_{GAL}(y|\beta,\alpha,\sigma,\gamma)\) times the prior distributions \(\pi(\beta,\alpha,\sigma,\gamma)\) given by Equation (3) and Equation (7), respectively. Collecting terms involving \((\sigma,\gamma)\) does not yield a tractable distribution, so \((\sigma,\gamma)\) are sampled using a random-walk MH algorithm. Here, joint sampling increases algorithmic efficiency by reducing the autocorrelation in the MCMC draws of \((\sigma,\gamma)\). The proposed draw \((\sigma^{\prime},\gamma^{\prime})\) is generated from a bivariate truncated normal distribution \(BTN_{(0,\infty)\times(L,U)}\big{(}(\sigma_{c},\gamma_{c}),\iota^{2}\hat{D} \big{)}\), where \((\sigma_{c},\gamma_{c})\) are the current values, \(\iota\) is the tuning factor, and \(\hat{D}\) is the negative inverse of the Hessian obtained by maximizing the logarithm of the likelihood with respect to \((\sigma,\gamma)\) with \(\beta\) set at the pooled ordinary least squares estimate. We accept \((\sigma^{\prime},\gamma^{\prime})\) with probability \(\exp\{\alpha_{MH}(\sigma_{c},\gamma_{c};\sigma^{\prime},\gamma^{\prime})\}\), where \[\alpha_{MH}(\sigma_{c},\gamma_{c};\sigma^{\prime},\gamma^{\prime})=\min\bigg\{ 0,\ln\left[\frac{f_{GAL}(y|\beta,\alpha,\sigma^{\prime},\gamma^{\prime})\,\pi( \beta,\alpha,\sigma^{\prime},\gamma^{\prime})}{f_{GAL}(y|\beta,\alpha,\sigma_ {c},\gamma_{c})\,\pi(\beta,\alpha,\sigma_{c},\gamma_{c})}\,\frac{\pi(\sigma_ {c},\gamma_{c}|(\sigma^{\prime},\gamma^{\prime}),\iota^{2}\hat{D})}{\pi(\sigma ^{\prime},\gamma^{\prime}|(\sigma_{c},\gamma_{c}),\iota^{2}\hat{D})}\right] \bigg\};\] here, \(f_{GAL}(\cdot)\) denotes the full likelihood given by Equation (3), \(\pi(\beta,\alpha,\sigma,\gamma)\) denotes the prior distributions given in Equation (7), and \(\pi(\sigma_{c},\gamma_{c}|(\sigma^{\prime},\gamma^{\prime}),\iota^{2}\hat{D})\) denotes the bivariate truncated normal probability with mean \((\sigma^{\prime},\gamma^{\prime})\) and covariance \(\iota^{2}\hat{D}\), and _vice-versa_. Otherwise, the current value \((\sigma_{c},\gamma_{c})\) is repeated in the next MCMC iteration. Note that the parameters \((A,B,C)\) are functions of \(p\), which in turn depends on \(p_{0}\) and \(\gamma\).
**(4)** To derive the conditional posterior distribution of \(\nu_{it}\), we work element-wise as follows: \[\pi(\nu_{it}|y_{it},\beta,\alpha_{i},h_{it},\sigma,\gamma)\] \[\propto\nu_{it}^{-\frac{1}{2}}\exp\bigg\{-\frac{1}{2}\bigg[ \frac{(y_{it}-x^{\prime}_{it}\beta-z^{\prime}_{it}\alpha_{i}-A\nu_{it}-C|\gamma |h_{it})^{2}}{\sigma B\nu_{it}}\bigg]-\frac{\nu_{it}}{\sigma}\bigg\}\] \[\propto\nu_{it}^{-\frac{1}{2}}\exp\bigg\{-\frac{1}{2}\bigg[ \frac{(y_{it}-x^{\prime}_{it}\beta-z^{\prime}_{it}\alpha_{i}-C|\gamma|h_{it})^ {2}}{\sigma B}\,\nu_{it}^{-1}+\bigg(\frac{A^{2}}{\sigma B}+\frac{2}{\sigma} \bigg)\nu_{it}\bigg]\bigg\}\] \[\propto\nu_{it}^{-\frac{1}{2}}\exp\bigg\{-\frac{1}{2}\bigg[ \chi_{\nu_{it}}\nu_{it}^{-1}+\psi\,\nu_{it}\bigg]\bigg\},\] where we have used the following notations, \[\chi_{\nu_{it}}=\frac{(y_{it}-x^{\prime}_{it}\beta-z^{\prime}_{it}\alpha_{i}-C |\gamma|h_{it})^{2}}{\sigma B}\ \ \ \ \ \text{and}\ \ \ \ \ \psi=\frac{A^{2}}{\sigma B}+\frac{2}{\sigma},\] so that \(\chi_{\nu_{it}}\) is the coefficient of \(\nu_{it}^{-1}\) and \(\psi\) is the coefficient of \(\nu_{it}\). Therefore, we have \(\nu_{it}|y_{it},\beta,\alpha_{i},h_{it},\sigma,\gamma\sim GIG(\frac{1}{2}, \chi_{\nu_{it}},\psi)\) for all values of \(i\) and \(t\). **(5)** Similar to \(\nu_{it}\), the conditional posterior of \(h_{it}\) is derived element-wise as follows: \[\pi(h_{it}|y_{it},\beta,\nu_{it},\sigma,\gamma)\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[\frac{(y_{it}-x^{\prime} _{it}\beta-z^{\prime}_{it}\alpha_{i}-A\nu_{it}-C|\gamma|h_{it})^{2}}{\sigma B\nu_{it}}+ \frac{h_{it}^{2}}{\sigma^{2}}\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[\bigg(\frac{1}{\sigma^ {2}}+\frac{C^{2}\gamma^{2}}{\sigma B\nu_{it}}\bigg)h_{it}^{2}-\frac{2C| \gamma|(y_{it}-x^{\prime}_{it}\beta-z^{\prime}_{it}\alpha_{i}-A\nu_{it})}{ \sigma B\nu_{it}}h_{it}\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}\bigg[(\sigma^{2}_{h_{it}})^{-1 }h_{it}^{2}-2(\sigma^{2}_{h_{it}})^{-1}\mu_{h_{it}}h_{it}\bigg]\bigg\}\] \[\propto\exp\bigg\{-\frac{1}{2}(\sigma^{2}_{h_{it}})^{-1}(h_{it} -\mu_{h_{it}})^{2}\bigg\},\] where the second-to-last line uses the notations, \[\sigma^{2}_{h_{it}}=\bigg(\frac{1}{\sigma^{2}}+\frac{C^{2}\gamma^{2}}{\sigma B \nu_{it}}\bigg)^{-1}\ \ \ \ \text{and}\ \ \ \ \ \mu_{h_{it}}=\sigma^{2}_{h_{it}}\bigg(\frac{C|\gamma|(y_{it}-x^{\prime}_{it }\beta-z^{\prime}_{it}\alpha_{i}-A\nu_{it})}{\sigma B\nu_{it}}\bigg),\] and the last expression is recognized as the kernel of a normal distribution truncated below at zero. Hence, we have \(h_{it}|y_{it},\beta,\nu_{it},\sigma,\gamma\sim N^{+}(\mu_{h_{it}},\sigma^{2}_{ h_{it}})\) for all values of \(i\) and \(t\).
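For concreteness, the closed-form conditionals in steps (1), (2), (4), and (5) translate directly into code. Below is a minimal Python/NumPy/SciPy sketch of these draws, assuming the per-cluster data objects \(y_i\), \(X_i\), and \(V_i^{-1}\) have been preassembled; all function and variable names are ours, the MH step for \((\sigma,\gamma)\) is omitted, and this is an illustration of the updates rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import invgamma, geninvgauss, truncnorm

rng = np.random.default_rng(0)

def draw_beta(y, X, V_inv, nu, h, A, C, gamma, beta0, B0_inv):
    """Draw beta ~ N(beta_tilde, B_tilde); y, X, V_inv, nu, h are lists over
    clusters i, with V_inv[i] the precomputed inverse of V_i."""
    prec = B0_inv.copy()                      # posterior precision B_tilde^{-1}
    rhs = B0_inv @ beta0
    for yi, Xi, Vinv_i, nui, hi in zip(y, X, V_inv, nu, h):
        ri = yi - A * nui - C * abs(gamma) * hi
        prec += Xi.T @ Vinv_i @ Xi
        rhs += Xi.T @ Vinv_i @ ri
    B_tilde = np.linalg.inv(prec)
    return rng.multivariate_normal(B_tilde @ rhs, B_tilde)

def draw_phi2(alpha, c1, d1):
    """Draw phi^2 ~ IG(c1_tilde/2, d1_tilde/2); alpha is an (n, l) array."""
    n, l = alpha.shape
    return invgamma.rvs(a=(n * l + c1) / 2.0,
                        scale=((alpha * alpha).sum() + d1) / 2.0,
                        random_state=rng)

def draw_nu(resid_no_nu, sigma, A, B):
    """Draw nu_it ~ GIG(1/2, chi_it, psi) elementwise, where
    resid_no_nu = y_it - x'beta - z'alpha_i - C|gamma|h_it.
    SciPy's geninvgauss(p, b, scale=s) has kernel x^{p-1} exp{-b(x/s + s/x)/2},
    matching GIG(1/2, chi, psi) with b = sqrt(chi*psi) and s = sqrt(chi/psi)."""
    chi = np.maximum(resid_no_nu ** 2, 1e-12) / (sigma * B)  # guard chi > 0
    psi = A ** 2 / (sigma * B) + 2.0 / sigma
    return geninvgauss.rvs(p=0.5, b=np.sqrt(chi * psi),
                           scale=np.sqrt(chi / psi), random_state=rng)

def draw_h(resid_no_h, nu, sigma, B, C, gamma):
    """Draw h_it ~ N^+(mu_it, s2_it), a normal truncated below at zero, where
    resid_no_h = y_it - x'beta - z'alpha_i - A*nu_it."""
    s2 = 1.0 / (1.0 / sigma ** 2 + (C * gamma) ** 2 / (sigma * B * nu))
    mu = s2 * C * abs(gamma) * resid_no_h / (sigma * B * nu)
    sd = np.sqrt(s2)
    return truncnorm.rvs(a=-mu / sd, b=np.inf, loc=mu, scale=sd, random_state=rng)
```

A full sweep of Algorithm 1 would interleave these draws with the per-cluster \(\alpha_i\) update and the random-walk MH step for \((\sigma,\gamma)\).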
2305.02952
Ultrahigh oxygen ion mobility in ferroelectric hafnia
Ferroelectrics and ionic conductors are important functional materials, each supporting a plethora of applications in information and energy technology. The underlying physics governing their functional properties is ionic motion, and yet studies of ferroelectrics and ionic conductors are often considered separate fields. Based on first-principles calculations and deep-learning-assisted large-scale molecular dynamics (MD) simulations, we report ferroelectric-switching-promoted oxygen ion transport in HfO$_2$, a wide-band-gap insulator with both ferroelectricity and ionic conductivity. Applying a unidirectional bias can activate multiple switching pathways in ferroelectric HfO$_2$, leading to polar-antipolar phase cycling that appears to contradict classical electrodynamics. This apparent conflict is resolved by the geometric-quantum-phase nature of electric polarization that carries no definite direction. Our MD simulations demonstrate bias-driven successive ferroelectric transitions facilitate ultrahigh oxygen ion mobility at moderate temperatures, highlighting the potential of combining ferroelectricity and ionic conductivity for the development of advanced materials and technologies.
Liyang Ma, Jing Wu, Tianyuan Zhu, Yiwei Huang, Qiyang Lu, Shi Liu
2023-05-04T15:56:01Z
http://arxiv.org/abs/2305.02952v2
# Ultrahigh oxygen ion mobility in ferroelectric hafnia ###### Abstract Ferroelectrics and ionic conductors are important functional materials, each supporting a plethora of applications in information and energy technology. The underlying physics governing their functional properties is ionic motion, and yet studies of ferroelectrics and ionic conductors are often considered separate fields. Based on first-principles calculations and deep-learning-assisted large-scale molecular dynamics (MD) simulations, we report ferroelectric-switching-promoted oxygen ion transport in HfO\({}_{2}\), a wide-band-gap insulator with both ferroelectricity and ionic conductivity. Applying a unidirectional bias can activate multiple switching pathways in ferroelectric HfO\({}_{2}\), leading to polar-antipolar phase cycling that appears to contradict classical electrodynamics. This apparent conflict is resolved by the geometric-quantum-phase nature of electric polarization that carries no definite direction. Our MD simulations demonstrate bias-driven successive ferroelectric transitions facilitate ultrahigh oxygen ion mobility at moderate temperatures, highlighting the potential of combining ferroelectricity and ionic conductivity for the development of advanced materials and technologies. Owing to the robust nanoscale ferroelectricity and industry-validated silicon compatibility, HfO\({}_{2}\)-based ferroelectrics have emerged as an excellent choice for incorporating ferroelectric functionalities into integrated circuits [1; 2]. The observed ferroelectricity in hafnia thin films has been attributed to the \(Pca2_{1}\) phase, which is higher in energy than the ground-state monoclinic (\(M\)) phase. A striking structural characteristic of this polar orthorhombic phase is the presence of a spacing layer consisting of fourfold-coordinated nonpolar oxygen ions (O\({}^{np}\)) that separates polar threefold-coordinated oxygen ions (O\({}^{p}\)); the polar and nonpolar oxygen ions are ordered alternately along the direction perpendicular to the polarization (\(P\), see Fig. 1**a-b**). Long before the discovery of ferroelectric HfO\({}_{2}\), nonpolar oxygen-deficient hafnia, HfO\({}_{2-x}\), was actively investigated as a resistive switching material for nonvolatile resistive random access memory [3], where the reversible formation and disruption of conducting filaments composed of chain-like oxygen vacancies are considered to be critical [4]. Thus, HfO\({}_{2}\) is a material system that supports ferroelectricity and ionic conductivity, with both phenomena involving the motion of oxygen ions [5]. Specifically, the polarization switching in \(Pca2_{1}\) HfO\({}_{2}\) is characterized by the collective and coordinated local motions of oxygen ions driven by an external electric field (\(\mathcal{E}\)), whereas the ionic conductivity of HfO\({}_{2-x}\) features thermally-excited long-distance travel of oxygen ions. Since hafnia films as thin as \(\approx\)1 nanometer can still retain ferroelectric properties [6], and applying voltages of a few volts across such films can generate giant electric fields (up to 9 MV/cm) [7], exploring the potential interplay between ferroelectric switching and ion transport at high fields is important for developing reliable, ultra-dense HfO\({}_{2}\)-based nanoelectronics, and could also offer insights for the design of field-assisted fast ion conductors.
The atomistic mechanism of polarization switching in \(Pca2_{1}\) HfO\({}_{2}\) remains elusive, partly due to the unusual structural characteristic discussed above and the existence of multiple switching pathways [8; 9]. A useful guide is the X\({}_{2}^{-}\)-mode-matching criterion. The X\({}_{2}^{-}\) lattice mode features antiparallel \(x\)-displacements of neighboring oxygen ions perpendicular to the polar axis along \(z\) (Fig. 1**a-b**), and a pathway conserving the sign of the X\({}_{2}^{-}\) mode generally has a lower barrier [10]. The switching pathways in HfO\({}_{2}\) at the unit cell level can be categorized as shift-inside (SI) and shift-across (SA). As shown in Fig. 1**c**, the SI pathways have oxygen ions moving between two Hf atomic planes. Specifically, the SI-1 pathway only involves the displacement of O\({}^{p}\) against \(\mathcal{E}\), and the transition state acquires a tetragonal phase (space group \(P4_{2}/nmc\)); the SI-2 pathway has both O\({}^{p}\) and O\({}^{np}\) atoms moving against \(\mathcal{E}\), resulting in concerted O\({}^{p}\)\(\rightarrow\)O\({}^{np}\) and O\({}^{np}\)\(\rightarrow\)O\({}^{p}\) conversions. In comparison, O\({}^{p}\) ions move across Hf planes in the SA pathway, accompanied by the X\({}_{2}^{-}\) mode reversal of O\({}^{np}\) ions. The switching barriers calculated with the variable-cell nudged elastic band (VCNEB) technique based on density functional theory (DFT) are 0.39, 0.22, and 0.79 eV per unit cell (u.c.) for SI-1, SI-2, and SA, respectively (see computational details below). These values are reproduced by a deep neural network-based classical force field of HfO\({}_{2}\) (Fig. 1**d**) that is used for MD simulations in this work (Supplementary Sect. I). The \(\mathcal{E}\)-dependent switching barriers estimated from the VCNEB zero-field barriers are displayed in Fig. 1**e**. We find that the critical switching fields (which reduce the barriers to zero) range from 2 to 4 MV/cm, consistent with experimentally observed coercive fields (1-5 MV/cm) [2; 7; 11]. As we will discuss further, MD simulations employing a large supercell of HfO\({}_{2}\) consisting of 28,800 atoms confirm that all three mechanisms are activated at room temperature when exposed to an electric field of a strength relevant to thin-film device operating conditions. Moreover, applying a unidirectional bias can drive successive ferroelectric switching that supports a continuous flow of oxygen ions even in the absence of oxygen vacancies. All first-principles DFT calculations are performed using the Vienna _ab initio_ simulation package (VASP) [12; 13] with the Perdew-Burke-Ernzerhof (PBE) density functional [14]. The optimized lattice constants of \(Pca2_{1}\) HfO\({}_{2}\) are \(a=5.266\) Å, \(b=5.048\) Å, and \(c=5.077\) Å, and the polarization is along the \(c\)-axis (\(z\)-axis). The polarization switching pathways reported in Fig. 1**d** are based on a 12-atom unit cell consisting of four hafnium and eight oxygen atoms. The minimum energy paths (MEPs) of the SI-1, SI-2 and SA processes are determined using the VCNEB technique implemented in the USPEX code [15; 16; 17], during which the lattice constants are allowed to relax. The plane-wave cutoff is set to 600 eV. A \(4\times 4\times 4\) Monkhorst-Pack \(k\)-point grid is used for structural optimizations and VCNEB calculations. The stopping criterion for searching the MEP is that the root-mean-square forces on images are less than 0.03 eV/Å.
The variable elastic constant scheme is employed in VCNEB, and the spring constant between neighboring images is set within a range of 3.0 to 6.0 eV/Å\({}^{2}\). Energy and polarization values for configurations along the MEP are calculated, with polarization determined using the Berry phase method. The zero-field energy profile for a MEP is subsequently corrected by the \(-P\cdot\mathcal{E}\) term, providing an estimated switching barrier under a specific \(\mathcal{E}\). To investigate the intrinsic mechanisms of field-driven ferroelectric switching in \(Pca2_{1}\) HfO\({}_{2}\), a defect-free single-domain supercell with 12,000 atoms is chosen as the initial configuration for MD simulations. We perform isobaric-isothermal ensemble (\(NPT\)) MD simulations over a wide range of electric fields from 0 to 12 MV/cm at 400 K, 500 K and 600 K, utilizing a deep neural network-based force field. The model potential is obtained by deep learning from a database of energies and atomic forces for \(\approx\)55,000 configurations computed with DFT (see details in Supplementary Sect. I) [18]. All \(NPT\) MD simulations are carried out using LAMMPS [19], with the temperature controlled via the Nosé-Hoover thermostat and the pressure controlled by the Parrinello-Rahman barostat. The integration timestep for the equation of motion is 1 fs in all MD simulations. At a given temperature, the equilibration run is 20 ps with pressure maintained at 1.0 bar, followed by a production run of 500 ps at the specified temperature and electric field, ensuring reliable estimation of the mean square displacement (MSD) of all oxygen ions and the mobility \(u_{\rm O}\) (Supplementary Sect. IV). Upon closely examining the SI and SA pathways, a perplexing behavior becomes evident. For the same starting configuration depicted in Fig. 1**c**, external electric fields in opposing directions can both drive ferroelectric switching. Consequently, in order to conform with classical electrodynamics, the same configuration would exhibit a downward polarization in SI but an upward polarization in SA. We emphasize that the macroscopic electric polarization of a crystalline solid is a geometric quantum phase, which should be viewed as a multi-valued lattice property with no definite direction [20; 21]. However, for practicality and compatibility with classical electrodynamics, electric polarization is often treated as a vector with a specific direction. We calculate the polarization with the Berry phase approach by tracking the Berry phase variation along the SI-1 and SA pathways. The results for SI-2 are similar to SI-1 (Supplementary Fig. S2). Here, the upward electric field is defined (arbitrarily) as \(+\mathcal{E}\) that aligns along the \(-z\) direction. As illustrated in Fig. 2**a**, the SI-1 and SA pathways correspond to two different branches of the polarization lattice, each associated with a definite change in polarization (\(\Delta P\)) without ambiguity. The magnitude of \(\Delta P_{\rm SI}\) for the SI-1 pathway driven by \(+\mathcal{E}\) is 1.0 C/m\({}^{2}\), and the polar state of the initial configuration can be _labeled_ as \(P_{s}^{\rm SI}=-0.5\) C/m\({}^{2}\) to be consistent with classical electrodynamics. Similarly, the SA pathway driven by \(-\mathcal{E}\) results in \(|\Delta P|\) of 1.4 C/m\({}^{2}\), and we can label the same starting configuration as \(P_{s}^{\rm SA}=+0.7\) C/m\({}^{2}\).
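As a concrete illustration of the \(-P\cdot\mathcal{E}\) correction described above, the following Python/NumPy sketch tilts a zero-field MEP and reports the field-dependent barrier. The energy and polarization arrays are synthetic placeholders with roughly the SI scale (a 0.22 eV barrier and \(\Delta P=1.0\) C/m\({}^{2}\)), not the computed VCNEB data.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C (to convert J to eV)

def field_corrected_barrier(energy_eV, P_Cm2, field_MVcm, omega_m3):
    """Tilt a zero-field MEP by -(P - P[0]) * E * Omega and return the barrier.
    energy_eV, P_Cm2: per-unit-cell energy (eV) and polarization (C/m^2)
    sampled along the path; omega_m3: unit-cell volume in m^3."""
    field_Vm = field_MVcm * 1e8                  # 1 MV/cm = 1e8 V/m
    tilt_eV = (P_Cm2 - P_Cm2[0]) * field_Vm * omega_m3 / E_CHARGE
    profile = energy_eV - tilt_eV
    return (profile - profile[0]).max()

# Placeholder path: 0.22 eV zero-field barrier, Delta P = 1.0 C/m^2 overall.
xi = np.linspace(0.0, 1.0, 101)
energy = 0.22 * np.sin(np.pi * xi) ** 2          # eV per unit cell
P = -0.5 + 1.0 * xi                              # C/m^2 along the path
omega = 5.266e-10 * 5.048e-10 * 5.077e-10        # m^3, from the relaxed lattice

for f in (0.0, 1.0, 2.0, 3.0):                   # MV/cm
    print(f"E = {f:.0f} MV/cm: barrier = "
          f"{field_corrected_barrier(energy, P, f, omega):.3f} eV")
```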
Because the polarization change in each pathway is well defined and can be connected to experimentally measurable observables such as switching current, HfO\({}_{2}\) is a unique ferroelectric with dual-valued remnant polarization (\(P_{s}^{\text{SI}}\) and \(P_{s}^{\text{SA}}\)) characterized by two intrinsic \(P\)-\(\mathcal{E}\) hysteresis loops (Fig. 2**b**). We note that giant polarization magnitudes of 0.5-0.64 C/m\({}^{2}\) have been reported experimentally in polycrystalline films of hafnia [11; 22], suggesting the realization of the SA switching mechanism and \(P_{s}^{\text{SA}}\). Because all ferroelectrics are piezoelectric and piezoelectricity is typically gauged by the piezoelectric strain coefficient (\(d\)) that links strain (\(\eta\)) and \(\mathcal{E}\) via \(\eta_{i}=d_{ij}\mathcal{E}_{j}\), an interesting question arises: does ferroelectric HfO\({}_{2}\) with two remnant polarization values (switching pathways) also possess two values of \(d_{33}\)? We discover that despite the dual-valued nature of \(P_{s}\), HfO\({}_{2}\) exhibits an unambiguous piezoelectric response, as hinted by the parallel SA and SI branches with identical slopes in Fig. 2**a**. Our finite-field MD simulations reveal that an electric field applied along the \(z\)-axis (\(\mathcal{E}_{3}\)) that drives O\({}^{p}\) ions away from the nearest Hf atomic plane leads to lattice expansion (\(\eta_{3}>0\)) and vice versa (Fig. 2**c**). Consequently, for a given crystal orientation and \(\mathcal{E}_{3}\), the field-induced \(\eta_{3}\) is unique; the absolute value of \(d_{33}\) computed with \(|\partial\eta_{3}/\partial\mathcal{E}_{3}|\) is single-valued, while the sign of \(d_{33}\) depends solely on the sign of \(\mathcal{E}_{3}\) (the arbitrary choice of the positive field direction). The estimated \(|d_{33}|\) is 5.83 pm/V, comparable with both DFT (2.59 pm/V) [23] and experimental (2-5 pm/V) [24] values. Importantly, the process of oxygen ions traversing unit cells by SA following SI can be viewed as a classical analogue of adiabatic Thouless pumping, and can be achieved by applying a constant bias. We perform large-scale finite-field MD simulations and confirm that a unidirectional \(\mathcal{E}\) can indeed drive successive SI and SA ferroelectric transitions that support a continuous flow of oxygen ions even in the absence of oxygen vacancies. Figure 3**a** illustrates a typical local switching process extracted from MD simulations. The initial configuration has O\({}^{p}\) ions situated near the bottom Hf planes, and a negative \(\mathcal{E}\) (aligned along \(+z\)) drives the SI pathway during which negatively charged oxygen ions move against \(-\mathcal{E}\). Notably, unit cells can further transform to an antipolar \(Pbca\) phase and subsequently undergo another transition from \(Pbca\) back to \(Pca2_{1}\) under the same bias, each through the SA mechanism. Locally, unit cells return to their original configuration albeit translated by half of a \(Pca2_{1}\) unit cell along the \(y\)-axis. This phase cycling would be difficult to comprehend if a fixed polarization direction were assigned to a particular crystal configuration; it is again a manifestation of the geometric-quantum-phase nature of electric polarization that does not possess a definite direction. Microscopically, this phenomenon is a natural consequence of the continuous flow of oxygen ions against the direction of the applied external electric field.
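The \(d_{33}\) extraction described above amounts to a linear fit of the field-induced strain. A minimal Python/NumPy sketch follows; the \((\mathcal{E}_{3},\eta_{3})\) pairs are toy values constructed to reproduce a slope of 5.83 pm/V, standing in for the finite-field MD output.

```python
import numpy as np

# Placeholder (E3, eta3) pairs mimicking finite-field MD; E3 in MV/cm.
E3_MVcm = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
eta3 = np.array([-1.75e-3, -1.17e-3, -0.58e-3, 0.0, 0.58e-3, 1.17e-3, 1.75e-3])

E3_Vm = E3_MVcm * 1e8                          # 1 MV/cm = 1e8 V/m
d33_mV = np.polyfit(E3_Vm, eta3, 1)[0]         # slope d(eta3)/d(E3), in m/V
print(f"|d33| = {abs(d33_mV) * 1e12:.2f} pm/V")  # ~5.83 pm/V for this toy data
```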
We find that the transport of oxygen ions is directly coupled to the nucleation-and-growth mechanism of ferroelectric switching. The SA step, which is associated with a larger barrier than SI (Fig. 1**c**), serves as the rate-limiting step. The nucleus is then characterized by a domain of unit cells that have completed the SA step. MD simulations reveal several microscopic features of the nucleation-and-growth mechanism, sketched in Fig. 3**b**. First, even though the switching process occurs in a three-dimensional (3D) bulk, the nucleus formed in the presence of \(-\mathcal{E}\) is nearly two dimensional (2D), as opposed to the small 3D clusters typically observed in ferroelectric perovskites [25]. The nucleus has a thickness of merely half a unit cell along the \(y\)-axis and assumes a slim diamond shape in the \(xz\) plane (Fig. 3**c**). This is surprisingly similar to the nucleus formed at a moving domain wall in perovskite ferroelectrics [26]. The ability to form a 2D nucleus in 3D can be attributed to the weak dipole-dipole interactions along the \(y\)-axis resulting from the O\({}^{np}\) spacing layers [27]. Second, the nucleus exhibits anisotropic diffusive interfaces. The nucleus profile is determined based on the displacement (\(\delta\)) of O\({}^{p}\) ions, and the unit cells with SA completed have \(\delta=-\delta_{0}\) (Fig. 3**a**). The interfacial profile is fitted to \(\delta_{0}\tanh\left(\frac{s_{i}}{\gamma_{i}/2}\right)\), where \(s_{i}\) is the coordinate along direction \(i\) measured from the interface and \(\gamma_{i}\) characterizes the diffusiveness of the nucleus along direction \(i\) (\(i\)=\(x\),\(z\)). As presented in Fig. 3**d**, the longitudinal diffusiveness parameter, \(\gamma_{z}\), is 8.9 Å at one side but becomes zero at the other side. In comparison, the lateral diffusiveness parameters, \(\gamma_{x}\), are roughly of the same value (3.7 Å) at both sides. The considerable diffusiveness reduces the interface energy, which in turn decreases the nucleation barrier. Lastly, nucleation events exhibit stochastic behavior. Nuclei of varying sizes randomly emerge throughout the system, and only those exceeding a critical size continue to expand, eventually leading to the switching of the entire \(xz\) layer (Supplementary Fig. S4). We quantitatively estimate the mobility of oxygen ions in defect-free HfO\({}_{2}\) under moderate temperatures and over a range of electric fields using MD simulations. By utilizing a vacancy-free model, the occurrence of vacancy-mediated ion diffusion processes is eliminated. Figure 4**a** plots the mobility of oxygen ions (\(u_{\mathrm{O}}\)) in \(Pca2_{1}\) as a function of \(\mathcal{E}\) at 400, 500, and 600 K, respectively, compared to that in the nonpolar \(M\) phase at 600 K. The \(u_{\mathrm{O}}\)-\(\mathcal{E}\) relationships in \(Pca2_{1}\) reveal a temperature-dependent critical field (\(\mathcal{E}_{t}\)), below which the mobility is strictly zero because only local SI switching events are activated. Above \(\mathcal{E}_{t}\), \(u_{\mathrm{O}}\) quickly jumps to a giant value of \(\approx\)10\({}^{-3}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\). We observe that an increase in temperature leads to a reduction in the magnitude of \(\mathcal{E}_{t}\), which represents the field required to activate the SA step. Interestingly, \(u_{\rm O}\) shows a weak temperature dependence above \(\mathcal{E}_{t}\) and mainly depends on the strength of the driving field, indicating a depinning-like behavior [26; 28].
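Above \(\mathcal{E}_{t}\), the mobility follows from the field-driven drift of the oxygen sublattice, \(u_{\rm O}=v_{\rm drift}/\mathcal{E}\). Below is a minimal Python/NumPy sketch of this estimate from an unwrapped MD trajectory; the trajectory here is synthetic, and the analysis is our illustration of the procedure rather than the authors' script.

```python
import numpy as np

def oxygen_mobility(z_traj, dt_fs, E_MVcm):
    """Drift mobility u_O = v_drift / E in cm^2 V^-1 s^-1.
    z_traj: (n_frames, n_O) array of unwrapped oxygen z-coordinates (Angstrom),
    sampled every dt_fs femtoseconds under a field of E_MVcm (MV/cm)."""
    disp = (z_traj - z_traj[0]).mean(axis=1)     # mean displacement, Angstrom
    t = np.arange(len(z_traj)) * dt_fs           # time, fs
    v_A_per_fs = np.polyfit(t, disp, 1)[0]       # drift slope, Angstrom/fs
    v_cm_per_s = v_A_per_fs * 1e-8 / 1e-15       # 1 A = 1e-8 cm, 1 fs = 1e-15 s
    return v_cm_per_s / (E_MVcm * 1e6)           # E in V/cm

# Toy trajectory: 10 ions drifting ~6e-4 Angstrom/fs at 6 MV/cm.
rng = np.random.default_rng(1)
z = 6e-4 * np.arange(500)[:, None] + 0.05 * rng.standard_normal((500, 10))
print(f"u_O ~ {oxygen_mobility(z, dt_fs=1.0, E_MVcm=6.0):.1e} cm^2/(V s)")
```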
In comparison, the value of \(u_{\rm O}\) in \(M\) at 600 K remains strictly zero due to the absence of ferroelectricity. These results demonstrate the considerable influence of ferroelectricity on oxygen ion mobility in HfO\({}_{2}\), particularly in the high-field region. One of the potential applications of hafnia thin films with ultrahigh oxygen ion mobility enabled by successive ferroelectric switching is electrochemical ionic synapses (EIS) based on oxide ion migration, which are emerging neuromorphic computing devices for artificial neural networks. EIS devices function like nano-batteries, utilizing ion migration for computing-in-memory operations. An EIS device structure is shown in Fig. 4**b**, which includes a channel layer with conductance that varies based on oxygen ion concentration, an oxygen-storing reservoir, and an electrolyte layer connecting the channel and reservoir for oxygen ion migration [29; 30]. The conductance of the channel material can be modulated stepwise by applying an electrical bias across the tri-layer device, which triggers the oxygen ion transport. Therefore, the ultrahigh oxygen ion mobility in silicon-compatible ferroelectric HfO\({}_{2}\) under an electric field can potentially enable scalable EIS devices with ultrafast speed. In summary, this study highlights the geometric-quantum-phase attribute of spontaneous electric polarization in ferroelectric \(Pca2_{1}\) HfO\({}_{2}\), which displays dual-valued remnant polarization and a single-valued piezoelectric response. Successive ferroelectric switching, driven by a constant bias and resembling Thouless pumping, can boost oxygen ion transport at moderate temperatures. Microscopically, the long-distance travel of oxygen ions is directly coupled to the nucleation-and-growth mechanism. Similar phenomena may occur in other ferroelectric systems that support successive switching pathways, such as CuInP\({}_{2}\)S\({}_{6}\) and the LaVO\({}_{3}\)-SrVO\({}_{3}\) superlattice [31]. The integration of ferroelectricity and ionic conductivity unlocks new possibilities for innovative device types, including ferro-electrochemical ionic synapses. ###### Acknowledgements. L.M., J.W., T.Z., and S.L. acknowledge support from the National Key R&D Program of China (2021YFA1202100), the National Natural Science Foundation of China (12074319), and the Westlake Education Foundation. Y.H. and Q.L. acknowledge funding support from the Research Center for Industries of the Future at Westlake University and the National Natural Science Foundation of China (NSFC, Grant No. 52202148). The computational resource is provided by the Westlake HPC Center. Figure 1: **Polarization switching pathways in ferroelectric HfO\({}_{2}\).****a** X\({}_{2}^{-}\) mode in the unit cell of \(Pca2_{1}\) HfO\({}_{2}\) with outward- and inward-displaced oxygen atoms denoted by purple and salmon spheres, respectively. The polarization is along the \(z\)-axis. **b** Alternately arranged nonpolar oxygen ions (O\({}^{np}\)) and polar oxygen ions (O\({}^{p}\)) in \(Pca2_{1}\) HfO\({}_{2}\). The grey shaded area marks the polar region. **c** Schematics of shift-inside (SI) and shift-across (SA) switching pathways driven by an external electric field (\(\mathcal{E}\)). The SA pathway has O\({}^{np}\) ions reversing the sign of the X\({}_{2}^{-}\) mode (colored in gray during the transition).
The SI and SA pathways can start from the same configuration, which should be identified by polarization (\(P\)) vectors (represented as green arrows) pointing in opposite directions to ensure compatibility with classical electrodynamics. **d** Calculated minimum energy paths for different switching pathways with DFT (lines) and a deep neural network-based force field (scatters). **e** Switching barrier as a function of field strength. Figure 2: **Dual-valued remnant polarization and single-valued piezoelectric response in HfO\({}_{2}\).****a** Polarization variation along the SA and SI-2 switching pathways from the same starting configuration (the center inset) in response to opposing electric fields. The upward electric field aligned along the \(-z\) direction is defined arbitrarily as \(+\mathcal{E}\). Configurations are labeled by the displacement (\(\delta\)) of the salmon-colored O\({}^{p}\) ion relative to the top Hf plane; \(l\) is the distance between neighboring Hf planes along \(z\) and \(\delta_{0}\) is the O\({}^{p}\) displacement at the ground state. Oxygen ions always move against \(\mathcal{E}\). **b** Schematics of \(P\)-\(\mathcal{E}\) hysteresis loops for SA and SI and the corresponding switching currents. **c** Strain (\(\eta_{3}\), empty circles) as a function of an electric field applied along the \(z\)-axis (\(\mathcal{E}_{3}\)) and the corresponding O\({}^{p}\) displacements (\(\delta\), filled squares) obtained with finite-field MD simulations at 300 K. The slope of the \(\eta_{3}\)-\(\mathcal{E}_{3}\) relation gives the absolute value of \(d_{33}\), whose sign depends solely on the arbitrary sign of \(\mathcal{E}_{3}\). Figure 3: **Oxygen ion transport coupled to the nucleation-and-growth mechanism of ferroelectric switching.****a** Polar-antipolar phase cycling arising from successive SI and SA ferroelectric transitions. The highlighted O\({}^{np}\) in the initial configuration becomes O\({}^{p}\) with \(\delta=\delta_{0}\) after SI-2 and then O\({}^{p}\) with \(\delta=-\delta_{0}\) after SA. **b** Schematic illustration of stochastic nucleation events in a three-dimensional (3D) bulk. The nucleus is two dimensional (2D) within the \(xz\)-plane, featuring a thickness equivalent to half a unit cell of \(Pca2_{1}\) HfO\({}_{2}\) along the \(y\)-axis. **c** A 2D slim-diamond-shaped nucleus extracted from MD simulations using a \(10\times 10\times 24\) supercell of 28,800 atoms. The nucleus profile is determined based on the \(\delta\) values of O\({}^{p}\) ions. **d** Line profiles of \(\delta\) along the \(z\) and \(x\) directions marked in **c**. Figure 4: **Ferroelectricity-promoted oxygen ion mobility in HfO\({}_{2}\).****a** Mobility of oxygen ions (\(u_{\rm O}\)) in \(Pca2_{1}\) HfO\({}_{2}\) as a function of \(\mathcal{E}\) at different temperatures from MD simulations. The results in the nonpolar \(M\) phase at 600 K are shown for comparison. The shaded area indicates the transition region where the critical electric field \(\mathcal{E}_{t}\) is located. **b** Left: Device structure of an electrochemical ionic synapse with \(Pca2_{1}\) HfO\({}_{2}\) as the electrolyte layer, which is sandwiched between a channel layer connected to source (S) and drain (D) electrodes and an oxygen-storing reservoir layer. The conductance of the channel layer depends on the oxygen ion concentration. Right: Schematic showing the ultrafast oxygen ion transport in the HfO\({}_{2}\) electrolyte layer under an electric field.
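For completeness, the hyperbolic-tangent fit behind the diffusiveness parameters of Fig. 3**d** can be reproduced in a few lines of Python/SciPy; the line-profile data below are synthetic placeholders, not the simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def interface_profile(s, delta0, gamma, s0):
    """delta(s) = delta0 * tanh((s - s0) / (gamma / 2));
    gamma measures the diffusiveness of the nucleus boundary."""
    return delta0 * np.tanh((s - s0) / (gamma / 2.0))

# Synthetic line profile across the nucleus edge (positions in Angstrom).
s = np.linspace(-15.0, 15.0, 61)
rng = np.random.default_rng(2)
delta = interface_profile(s, delta0=0.55, gamma=8.9, s0=0.0) \
        + 0.02 * rng.standard_normal(s.size)

popt, _ = curve_fit(interface_profile, s, delta, p0=(0.5, 5.0, 0.0))
print(f"delta0 = {popt[0]:.2f} A, gamma = {popt[1]:.1f} A")  # gamma ~ 8.9 A
```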
2310.10138
Node-based Knowledge Graph Contrastive Learning for Medical Relationship Prediction
The embedding of Biomedical Knowledge Graphs (BKGs) generates robust representations, valuable for a variety of artificial intelligence applications, including predicting drug combinations and reasoning disease-drug relationships. Meanwhile, contrastive learning (CL) is widely employed to enhance the distinctiveness of these representations. However, constructing suitable contrastive pairs for CL, especially within Knowledge Graphs (KGs), has been challenging. In this paper, we proposed a novel node-based contrastive learning method for knowledge graph embedding, NC-KGE. NC-KGE enhances knowledge extraction in embeddings and speeds up training convergence by constructing appropriate contrastive node pairs on KGs. This scheme can be easily integrated with other knowledge graph embedding (KGE) methods. For downstream task such as biochemical relationship prediction, we have incorporated a relation-aware attention mechanism into NC-KGE, focusing on the semantic relationships and node interactions. Extensive experiments show that NC-KGE performs competitively with state-of-the-art models on public datasets like FB15k-237 and WN18RR. Particularly in biomedical relationship prediction tasks, NC-KGE outperforms all baselines on datasets such as PharmKG8k-28, DRKG17k-21, and BioKG72k-14, especially in predicting drug combination relationships. We release our code at https://github.com/zhi520/NC-KGE.
Zhiguang Fan, Yuedong Yang, Mingyuan Xu, Hongming Chen
2023-10-16T07:27:43Z
http://arxiv.org/abs/2310.10138v1
# Node-based Knowledge Graph Contrastive Learning for Medical Relationship Prediction ###### Abstract. The embedding of Biomedical Knowledge Graphs (BKGs) generates robust representations, valuable for a variety of artificial intelligence applications, including predicting drug combinations and reasoning disease-drug relationships. Meanwhile, contrastive learning (CL) is widely employed to enhance the distinctiveness of these representations. However, constructing suitable contrastive pairs for CL, especially within Knowledge Graphs (KGs), has been challenging. In this paper, we proposed a novel node-based contrastive learning method for knowledge graph embedding, NC-KGE. NC-KGE enhances knowledge extraction in embeddings and speeds up training convergence by constructing appropriate contrastive node pairs on KGs. This scheme can be easily integrated with other knowledge graph embedding (KGE) methods. For downstream tasks such as biochemical relationship prediction, we have incorporated a relation-aware attention mechanism into NC-KGE, focusing on the semantic relationships and node interactions. Extensive experiments show that NC-KGE performs competitively with state-of-the-art models on public datasets like FB15k-237 and WN18RR. Particularly in biomedical relationship prediction tasks, NC-KGE outperforms all baselines on datasets such as PharmKG8k-28, DRKG17k-21, and BioKG72k-14, especially in predicting drug combination relationships. We release our code at [https://github.com/zhi520/NC-KGE](https://github.com/zhi520/NC-KGE). Contrastive Learning, Graph Neural Network, Medical Relationship Prediction, Biomedical Knowledge Graph
Biomedical relationship prediction is still challenging, however, because knowledge bases from the biomedical domain are usually sparse, redundant, and incomplete. In the past, graph contrastive learning, a self-supervised method, has achieved significant success in generating generalized, transferable, and robust representations for graph-structured data. This success has illuminated the path for learning knowledge graph embeddings. Essentially, contrastive learning aims to extract hidden information between samples by bringing similar samples closer and pushing dissimilar ones apart in latent space. Its core objective is to tell a pair of representations from the two augmentations of the same sample (positives) apart from the \(k\) pairs of representations from the other (negative) samples. Constructing highly confident contrastive pairs is crucial for the discriminative power of contrastive learning models. However, due to the intricate structures within knowledge graphs, defining these contrastive pairs is challenging. Consequently, there have been only limited attempts to integrate contrastive learning strategies with Knowledge Graph Embedding (KGE) methods.
SimKGC (Wang et al., 2017), for instance, creates contrastive pairs using semantic similarity through language models, deviating from previous graph contrastive learning models that fully mined the information underlying graph structures. However, the effectiveness of contrastive KGE methods based on semantic similarity heavily relies on the specific language models used. Another approach, KGE-SymCL (Wang et al., 2017), utilizes the semantic similarity of entities in relation-symmetrical positions to construct positive contrastive samples. However, extracting the structure of symmetric relations is a tedious process, and there is no significant improvement compared to SimKGC. In biomedical knowledge graphs, entities represent gene codes, targets, or chemical compounds. The features generated by language models may lead to inaccurate semantic estimation in this context. Therefore, there is a pressing need for a more stable and universally applicable contrastive learning criterion tailored for biomedical knowledge graphs. In our study, we introduced a new and versatile node-based contrastive method for knowledge graph embeddings, termed NC-KGE, specifically designed for predicting biomedical relationships. For a given fact triplet in a knowledge graph, NC-KGE treats triplets in which one entity and the relation type remain the same as positive samples, while all other triplets are considered negative samples. By maximizing the similarity score between positive samples and minimizing it between negative ones using a modified classic contrastive learning loss, NC-KGE enhances the convergence speed during training and improves the performance of non-contrastive methods, such as CompGCN and SE-GNN. Additionally, we incorporated a relation-aware multi-head attention (RAMHA) mechanism into NC-KGE to enhance the utilization of relation semantics and interactions among relations and entities. We evaluated NC-KGE's performance in relation prediction on both general public datasets (FB15k-237 and WN18RR) and biomedical-focused knowledge graphs (PharmKG8k-28, DRKG17k-21, and BioKG72k-14). Our extensive experiments revealed that NC-KGE competes effectively with state-of-the-art models on general knowledge graphs, and surpasses all baselines on biomedical knowledge graphs, particularly excelling in predicting drug combination relationships. ## 2. Related Work Knowledge Graph Embedding (KGE) aims to encode entities and relations into a low-dimensional vector or matrix space while maximally preserving the graph's topological properties. Existing KGE models can be roughly categorized into structure-based embeddings and enhanced knowledge embeddings, reviewed by Singh et al. (Singh et al., 2017) and Minervini et al. (Pham et al., 2017). Structure-based knowledge graph embedding methods are more closely related to this work than enhanced knowledge graph embeddings. ### Structure-based Knowledge graph embedding methods **Translation distance models** interpret relations as translation operations from a head node to a tail node in latent space, e.g., TransE (Chen et al., 2017), TransH (Wang et al., 2017), TransR (Wang et al., 2017), etc. TransE was among the initial attempts to represent relations as addition operations between entities. TransH projects entities onto relation-specific hyperplanes, allowing entities to play different roles in various relations. TransR maps nodes and relations into distinct entity spaces and relation-specific spaces.
RotatE (Sun et al., 2019) treats the relation as a rotation operation. PairRE (Chen et al., 2017) can encode complex relationships and multiple relationship patterns at the same time. Moreover, HousE (Wang et al., 2017) involves a novel parameterization based on Householder transformations for rotation and projection. **Semantic matching models**, including RESCAL (Wang et al., 2017), DistMult (Wang et al., 2017), ComplEx (Wang et al., 2017), ConvE (Chen et al., 2017), SimplE (Wang et al., 2017), CrossE (Wang et al., 2017), QuatE (Wang et al., 2017), and DualE (Wang et al., 2017), are developed based on similarity scoring functions. RESCAL utilizes a bilinear similarity function to compute the scores of knowledge triples and assumes that positive triples have higher scores than negative ones. DistMult simplifies the bilinear similarity function by using a diagonal matrix. ComplEx further generalizes DistMult by using complex embeddings and Hermitian dot products. Besides, the advantage of quaternion representations is leveraged by QuatE to enrich the correlation information between head and tail entities based on relational rotation quaternions. Inspired by it, DualE is proposed to gain better expressive ability by projecting the embeddings into dual quaternion space. **Neural-based methods** include ConvE (Chen et al., 2017), RGCN (Wang et al., 2017), SACN (Wang et al., 2017), KBGAT (Wang et al., 2017), A2N (Chen et al., 2017), CompGCN (Wang et al., 2017) and SE-GNN (Wang et al., 2017). For example, ConvE introduces the use of convolutional layers to extract information. RGCN introduces a relation-specific transformation to integrate relation information with message aggregation. RGHAT (Wang et al., 2017) incorporates a two-level attention mechanism, addressing relations and entities separately. KE-GCN (Wang et al., 2017) introduces a joint propagation method to update node and edge embeddings simultaneously. CompGCN proposes various composition operations for neighbor aggregation to model the structural patterns of multi-relational graphs. RAGAT (Rao et al., 2017) constructs separate message functions for different relations, aiming to exploit the heterogeneous characteristics of knowledge graphs. SE-GNN (Wang et al., 2017), with its three levels of semantic evidence, achieves in-depth knowledge representation by meticulously merging these layers through multi-layer aggregation, leading to highly extrapolative knowledge representations. ### Contrastive Learning on Knowledge Graphs Graph contrastive learning (GCL) operates by mining the hidden information within data in a self-supervised manner. These methods, such as GRACE (Wang et al., 2017), GraphCL (Wang et al., 2018), AutoGCL (Wang et al., 2018) and iGCL (Wang et al., 2019), have proven highly successful in node representation learning, relation prediction, classification, graph generation, and anomaly detection. Recently, only a few researchers have attempted to extend graph contrastive learning to knowledge graph embedding learning. SimKGC (Sim et al., 2018) tends to combine samples with high semantic similarity as positive pairs. This method relies heavily on the semantic similarity predicted by the specific language models. However, the language model may estimate semantic similarity inaccurately for biomedical entities, including gene codes, targets, or chemical compounds, making these methods fail on biomedical knowledge graphs.
KGE-SymCL (Liu et al., 2019) utilizes the semantic similarity of entities in relation-symmetrical positions to construct positive contrastive samples. However, extracting the structure of symmetric relations is a tedious process, and there is no significant improvement compared to SimKGC.

## 3. Preliminary

A Knowledge Graph (KG) is composed of fact triplets, denoted as \(\mathcal{G}=\{(e_{h},r,e_{t})\mid e_{h},e_{t}\in\mathcal{E},r\in\mathcal{R}\}\), where \(\mathcal{E}\) is the set of entities (i.e., nodes), \(\mathcal{R}\) is the set of relations (i.e., edge types), \(e_{h},e_{t}\) represent the head and tail entity, respectively, and \(r\) represents the relation between them.

Relation Prediction, also known as Link Prediction or Knowledge Completion, involves predicting missing links or relationships in a knowledge graph. For a given head entity \(e_{h}\in\mathcal{E}\) and a relation \(r\in\mathcal{R}\), the objective is to identify the most suitable tail entity \(e_{t}\in\mathcal{E}\), forming a new plausible triple \((e_{h},r,e_{t})\) within \(\mathcal{G}\). Here, we approach this task by scoring all candidates \(\{(e_{h},r,e_{t}^{\prime})\mid e_{t}^{\prime}\in\mathcal{E}\}\), maximizing the scores of genuine triples and minimizing the scores of all other candidates.

## 4. NC-KGE Methods

Figure 1 shows the overall process of NC-KGE. For a given fact triple \((e_{h},r,e_{t})\) in a biomedical knowledge graph \(\mathcal{G}\), NC-KGE first constructs positive and negative augmented samples for contrastive learning. Then, a learnable KGE model is used to generate the embeddings of entities and relations in \(\mathcal{G}\). Third, a similarity function \(S\) scores the triple embeddings \(S\left(z_{h},x_{r},z_{t}^{\prime}\right)\) of both positive and negative samples, and the contrastive loss is computed to optimize the KGE model by maximizing the scores of genuine triples and minimizing the scores of all other candidates. During inference, NC-KGE measures the scores of the triple embeddings \(S\left(z_{h},x_{r},z_{t}^{*}\right)\) of all candidates \(e_{t}^{*}\) given \((e_{h},r,?)\), and the candidate \(e_{t}\) with the highest score forms a plausible triple with \(e_{h}\) and \(r\) in \(\mathcal{G}\). Additionally, finding the head entity \(e_{h}\) for a provided \((?,r,e_{t})\) can be effortlessly transformed into a similar process.

### Construction of node-based contrastive pairs

In a biomedical knowledge graph \(\mathcal{G}\), an entity \(e_{h}\) can form the same relationship with multiple other entities \(\{e_{t}\mid e_{t}\in\mathcal{E}\}\). For example, a target entity can correspond to multiple inhibitors, and one disease may be associated with multiple genes. For a given triple \((e_{h},r,e_{t})\), supposing another entity \(e_{t}^{+}\) forms the same relation \(r\) with the head entity \(e_{h}\), then \(e_{t}^{+}\) is defined as a positive entity for \(e_{h}\) and \(r\), and \((e_{h},r,e_{t}^{+})\) is a positive triple pair for \((e_{h},r,e_{t})\). Additionally, the triple \((e_{h},r,e_{t})\) is also regarded as a positive pair for itself. All other entities \(\{e_{t}^{-}\}\) that do not form the relation \(r\) with \(e_{h}\) are defined as negative entities of \(e_{h}\), and each \((e_{h},r,e_{t}^{-})\) is regarded as a negative pair for \((e_{h},r,e_{t})\).
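To make this sampling rule concrete, the following is a minimal Python sketch of the node-based pair construction; the function name, data layout, and toy triples are illustrative assumptions rather than NC-KGE's released code.

```python
from collections import defaultdict

def build_contrastive_pairs(triples, entities):
    """Group tail entities by (head, relation): for a query (e_h, r),
    every observed tail is a positive sample and every other entity
    is a negative sample, following the rule above."""
    tails_by_query = defaultdict(set)
    for h, r, t in triples:
        tails_by_query[(h, r)].add(t)

    pairs = {}
    for (h, r), positives in tails_by_query.items():
        negatives = set(entities) - positives
        pairs[(h, r)] = (positives, negatives)
    return pairs

# Toy usage: a target can bind several inhibitors (one-to-many relation).
triples = [("targetA", "inhibited_by", "drug1"),
           ("targetA", "inhibited_by", "drug2"),
           ("targetA", "treats", "disease1")]
entities = {"targetA", "drug1", "drug2", "disease1"}
pairs = build_contrastive_pairs(triples, entities)
# For (targetA, inhibited_by): positives = {drug1, drug2},
# negatives = all remaining entities.
```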
### Relation-aware multi-head attention based KGE model

Here, we propose a relation-aware multi-head attention mechanism (RAMHA) that integrates the relations between entities into the attention computation, to enhance the utilization of relation semantics and the interactions among relations and entities in NC-KGE. As shown in Figure 2, RAMHA mainly includes three phases: attention computation, message passing, and information aggregation. By stacking multiple RAMHA layers, NC-KGE generates the knowledge graph embeddings for both entities and relations. Suppose the embedding of entity \(e_{u}\) is denoted as \(z_{u}^{l}\), and \(x_{r}^{l}\) represents the embedding of relation type \(r\) between \(e_{u}\) and \(e_{v}\) in the \(l\)-th layer of the GNN; the next layer aggregates information to \(e_{u}\) from its neighbors \(e_{v}\) according to their relations \(r\) in an attention-based message passing process, as shown in Equation 1.

\[z_{u}^{l+1}=\Big\Vert_{c=1}^{C}\Big(\sum_{v\in\mathcal{N}(u)}\sum_{r\in\mathcal{R}(u,v)}a_{u,r,v}^{l+1,c}\cdot Message_{u,r,v}^{l+1,c}\Big) \tag{1}\]

where \(a_{u,r,v}^{l+1,c}\) is the attention weight for the embedding triple \(\left(z_{u}^{l},x_{r}^{l},z_{v}^{l}\right)\) in the \(c\)-th head of the multi-head attention, \(Message_{u,r,v}\) denotes the information aggregated from \(e_{v}\) to \(e_{u}\) under relation \(r\), and \(\Vert\) represents the concatenation over all \(C\) attention heads. The embedding of relation \(r\) is updated in layer \(l+1\) with Equations 2 and 3.

\[x_{r}^{l+1}=\Big\Vert_{c=1}^{C}\big(x_{r}^{l+1,c}\big) \tag{2}\]

\[x_{r}^{l+1,c}=MLP_{r}^{l,c}\big(x_{r}^{l}\big) \tag{3}\]

where \(MLP_{r}^{l,c}\) is a relation-type-specific MLP network in the \(l\)-th layer for the \(c\)-th head. The attention weight \(a_{u,r,v}^{l+1,c}\) is given by Equations 4–7:

\[a_{u,r,v}^{l+1,c}=\frac{\big\langle q_{u}^{l,c},\,MLP_{r,1}^{l,c}\big(k_{v}^{l,c}\star x_{r}^{l+1,c}\big)\big\rangle}{\sum_{w\in\mathcal{N}(u)}\sum_{r^{\prime}\in\mathcal{R}(u,w)}\big\langle q_{u}^{l,c},\,MLP_{r^{\prime},1}^{l,c}\big(k_{w}^{l,c}\star x_{r^{\prime}}^{l+1,c}\big)\big\rangle} \tag{4}\]

\[q_{u}^{l,c}=MLP_{q}^{l,c}\big(z_{u}^{l}\big) \tag{5}\]

\[k_{u}^{l,c}=MLP_{k}^{l,c}\big(z_{u}^{l}\big) \tag{6}\]

\[v_{u}^{l,c}=MLP_{v}^{l,c}\big(z_{u}^{l}\big) \tag{7}\]

where \(q_{u}^{l,c}\) and \(k_{u}^{l,c}\) are the query and key vectors of the entities in the multi-head attention, \(MLP_{r,1}^{l,c}\) is a relation-type-specific MLP without bias terms, \(\mathcal{N}(u)\) represents the set of neighbors of entity \(e_{u}\), \(\mathcal{R}(u,w)\) denotes the set of relations between \(e_{u}\) and \(e_{w}\), \(\langle q,k\rangle=\exp\big(q^{T}k/\sqrt{d}\big)\), and \(\star\) represents the circular correlation introduced in HolE (Holler, 2018). The message \(Message_{u,r,v}^{l+1,c}\) is computed as follows:

\[Message_{u,r,v}^{l+1,c}=MLP_{r,2}^{l,c}\big(v_{v}^{l,c}\star x_{r}^{l+1,c}\big) \tag{8}\]

where \(MLP_{r,2}^{l,c}\) is also an unbiased relation-type-specific MLP, like \(MLP_{r,1}^{l,c}\).
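As a rough illustration of Equations 1–8, the sketch below implements a single attention head of one RAMHA layer in NumPy under simplifying assumptions: each relation-type-specific MLP is collapsed to a single weight matrix, the \(C\)-head concatenation is omitted, and all names and toy dimensions are illustrative.

```python
import numpy as np

def circular_correlation(a, b):
    """HolE-style operator: corr(a, b) = IFFT(conj(FFT(a)) * FFT(b))."""
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def ramha_layer(z, x, edges, Wq, Wk, Wv, Wr, Wm1, Wm2):
    """One single-head RAMHA layer. z: (n_ent, d) entity embeddings,
    x: (n_rel, d) relation embeddings, edges: list of (u, r, v)."""
    d = z.shape[1]
    x_next = x @ Wr                       # Eq. 3, MLP_r reduced to a matrix
    q, k, v = z @ Wq, z @ Wk, z @ Wv      # Eqs. 5-7
    z_next = z.copy()                     # isolated nodes keep z^l
    for u in range(z.shape[0]):
        nbrs = [(r, w) for (uu, r, w) in edges if uu == u]
        if not nbrs:
            continue
        # Eq. 4: <q, MLP(k * x_r)> with <q, k> = exp(q^T k / sqrt(d)),
        # normalized over all (neighbor, relation) pairs of u (softmax).
        logits = np.array([q[u] @ (circular_correlation(k[w], x_next[r]) @ Wm1)
                           for r, w in nbrs]) / np.sqrt(d)
        att = np.exp(logits - logits.max())
        att /= att.sum()
        # Eqs. 1 and 8: attention-weighted relation-aware messages.
        msgs = np.stack([circular_correlation(v[w], x_next[r]) @ Wm2
                         for r, w in nbrs])
        z_next[u] = att @ msgs
    return z_next, x_next

# Toy usage: 3 entities, 2 relation types, embedding dimension 8.
rng = np.random.default_rng(0)
d = 8
z0, x0 = rng.normal(size=(3, d)), rng.normal(size=(2, d))
Ws = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(6)]
z1, x1 = ramha_layer(z0, x0, [(0, 0, 1), (0, 1, 2), (1, 0, 0)], *Ws)
```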
### Similarity scoring functions for relation predictions

Once the embeddings of entities and relations are obtained, NC-KGE uses a learnable similarity function to score the embeddings of triplets in the knowledge graph, in the same way as ConvE. It is a convolution over 2D-shaped embeddings, as formulated in Equation 9.

\[\psi\left(e_{u},r,e_{v}\right)=f\left(\text{reshape}\left(f\left(\left[\overline{e_{u}},\overline{r}\right]\otimes w\right)\right)W\right)\cdot e_{v} \tag{9}\]

where \(\overline{e_{u}},\overline{r}\) denote 2D reshapings of \(e_{u}\) and the relation type \(r\), \([\,]\) denotes a concatenation operation, \(\otimes w\) represents a 2D convolutional layer with filters \(w\), \(f\) denotes a non-linear function, \(W\) is a linear transformation matrix, and \(\cdot\) represents the inner product operation. We note that NC-KGE can also integrate classic similarity measure functions such as TransE, DistMult, ComplEx, and SimplE, as listed in Table 1, but extensive experiments have shown that the learnable similarity function in Equation 9 outperforms these traditional similarity measures, as discussed in Section 5.4.

### Contrastive learning objective

NC-KGE aims to mine the hidden information between entities and relations by maximizing the similarity score between positive samples and minimizing it between negative ones. Thus, a classic contrastive training objective is introduced into NC-KGE for a given triplet \((e_{h},r,e_{t})\), as shown in Equation 10.

\[\mathcal{L}=-\log\frac{\sum_{k=1}^{K^{+}}\exp\left(S\left(z_{h},x_{r},z_{t}^{+}\right)/\tau\right)}{\sum_{k=1}^{K^{+}}\exp\left(S\left(z_{h},x_{r},z_{t}^{+}\right)/\tau\right)+Q\sum_{k=1}^{K^{-}}\exp\left(S\left(z_{h},x_{r},z_{t}^{-}\right)/\tau\right)} \tag{10}\]

where \(K^{+}\) and \(K^{-}\) are the numbers of positive and negative triple pairs, respectively, \(Q\) is a scaling weight for negative pairs, and \(\tau\) is a temperature factor controlling how strongly the KGE model discriminates negative pairs. To avoid extreme temperature coefficients affecting the contrastive learning, we adopt a simulated annealing strategy to adjust the temperature factor dynamically in the range [0.1, 1.5] according to the MRR metric. Additionally, the similarity scores \(S\) are layer-normalized before computing the contrastive loss, to avoid numerical overflow in the exponential operation and underflow in the logarithmic operation.

Figure 1. The overall framework of NC-KGE.

Figure 2. Relation-aware multi-head attention based KGE model.

## 5. Experiments

Two commonly used benchmark datasets for KGE methods, FB15k-237 and WN18RR (Wang et al., 2018), are utilized to evaluate the performance of NC-KGE on relation prediction. We also perform benchmarks on three biomedical datasets, PharmKG8k-28, DRKG17k-21, and BioKG72k-14, derived from the PharmKG, Drug Repositioning Knowledge Graph, and BioKG datasets. On these benchmarks, we can further evaluate the performance of NC-KGE on relation predictions between different biologically meaningful entities. A detailed description of the datasets is given in Table 8 of Appendix A.

### Baselines

The baselines comprise three types: translation models, semantic matching models, and GNN-based models. Translation models include TransE (Beng et al., 2017), RotatE (Rao et al., 2018), and PairRE (Pai et al., 2019). Semantic matching models include DistMult (Wang et al., 2018), ComplEx (Wang et al., 2018), TuckER (Beng et al., 2017), ConvE (Chen et al., 2018), InteractE (Wang et al., 2018), and PROCRUSTES (Wang et al., 2018).
GNN-based models include HyConvE (Wang et al., 2018), MEKER (Wang et al., 2018), RAGAT (Wang et al., 2018), HRGAT (Wang et al., 2018), R-GCN (Wang et al., 2018), KBGAT (Wang et al., 2018), A2N (Chen et al., 2018), SACN (Wang et al., 2018), CompGCN (Wang et al., 2018), and SE-GNN (Wang et al., 2018).

### Task and Evaluation

Relation prediction, also termed link prediction, aims at inferring missing facts based on the facts in a knowledge graph. Similar to question answering, we assess the quality of relation predictions using the following ranking task: for all triplets \((e_{h},r,e_{t})\) in both the training and test sets, (1) we hide the tail entity \(e_{t}\); (2) we compute the similarity scores \(S\left(e_{h},r,e_{t}^{*}\right)\) for all \(e_{t}^{*}\in\mathcal{E}\), as discussed in Section 4.3; (3) we sort the values in decreasing order; and (4) we record the rank of the correct entity \(e_{t}\). An identical process is repeated for predicting \(e_{h}\). Two kinds of metrics are used for evaluation: the proportion of correct entities ranked in the top \(k\) (for \(k=1,3,10\), denoted Hits@1, Hits@3, Hits@10) and the mean reciprocal rank (MRR). Let \(r_{th}\) be the rank of the correct triplet \(t=(e_{h},r,e_{t})\) among all possible triples when the head entity is hidden, and \(r_{tt}\) its rank when the tail entity is hidden. MRR is the average reciprocal rank over a set of correct fact triplets \(\mathcal{T}\), as shown in Equation 11.

\[MRR=\frac{1}{2|\mathcal{T}|}\sum_{t\in\mathcal{T}}\left(\frac{1}{r_{th}}+\frac{1}{r_{tt}}\right) \tag{11}\]

The Hits@1 metric (H@1) is the fraction of cases in which the correct triple appears at position 1. H@3 and H@10 are computed similarly, considering the first 3 and 10 positions, respectively.

### Experimental Setup

In our benchmark experiments, we employed a KGE encoder in NC-KGE comprising two RAMHA layers. The RAMHA model utilized 10 heads, and the hidden dimension of all MLPs was 200. For Equation 10, the number of positive pairs per fact triplet \(K^{+}\) was set to 1, while all negative pairs were used in the node-based contrastive training. To stabilize the training phase and prevent overfitting, we introduced a dropout rate of 0.2 and applied batch layer normalization between the RAMHA layers. The AdamW (Kingmae et al., 2014) optimizer and a cosine annealing learning rate scheduler (Kingmae et al., 2014) are used in training. The patience of the simulated annealing strategy for the temperature factor is set to 50, based on the MRR metric.

### Results

Here, we first evaluate the performance of NC-KGE on the general knowledge graphs FB15k-237 and WN18RR; the benchmark results are shown in Table 2. Compared with 17 baselines, NC-KGE outperforms all the translation-distance-based and semantic-matching-based models, and is competitive with the SOTA method, SE-GNN. NC-KGE obtains a clear improvement over CompGCN, a typical GNN-based model, indicating that node-based contrastive learning and RAMHA suffice for better knowledge graph embeddings. On biomedical knowledge graphs, NC-KGE outperforms all baselines, as shown in Table 3. In PharmKG8k-28, the knowledge entities can be clearly categorized into three biomedically meaningful types: gene, disease, and chemical compound. Thus, relation prediction can be divided into 6 types according to the types of the head and tail entities of a triplet. We then further evaluate the relation prediction performance of NC-KGE on these subdomains; a short sketch of the ranking-based evaluation protocol of Section 5.2 is given below.
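The sketch below implements steps (1)–(4) and Equation 11 with raw (unfiltered) rankings; the function names and toy ranks are illustrative assumptions.

```python
import numpy as np

def rank_of_correct(scores, true_idx):
    """1-based rank of the correct entity when all candidate scores
    are sorted in decreasing order (steps 3-4 of Section 5.2)."""
    order = np.argsort(-scores)
    return int(np.where(order == true_idx)[0][0]) + 1

def mrr_and_hits(head_ranks, tail_ranks, ks=(1, 3, 10)):
    """Equation 11: MRR averages 1/rank over both prediction
    directions; Hits@k is the fraction of ranks <= k."""
    ranks = np.concatenate([head_ranks, tail_ranks]).astype(float)
    return np.mean(1.0 / ranks), {k: float(np.mean(ranks <= k)) for k in ks}

# Toy usage: two test triples with head-/tail-prediction ranks.
mrr, hits = mrr_and_hits(np.array([1, 4]), np.array([2, 10]))
# mrr = (1/1 + 1/4 + 1/2 + 1/10) / 4 = 0.4625; hits[10] = 1.0
```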
As shown in Table 4, NC-KGE gives clearly better results for Chemical-Chemical and Disease-Disease relation predictions, such as drug combination relationships and disease complications, with MRRs of 0.483 and 0.471, respectively. Both of these subdomains belong to the "Interactions" category in PharmKG. Similarly, NC-KGE also achieves the best result, an MRR of 0.640, on Chemical-Chemical relation predictions, since no Chemical-Disease or Disease-Disease relations exist, as shown in Table 5.

\begin{table} \begin{tabular}{l c} \hline \hline Model & Function \\ \hline TransE (Beng et al., 2017) & \(-\left\lVert e_{i}+r_{j}-e_{k}\right\rVert_{p}\) \\ DistMult (Wang et al., 2018) & \(\left\langle e_{i},r_{j},e_{k}\right\rangle\) \\ ComplEx (Wang et al., 2018) & \(\text{Re}\left(\left\langle e_{i},r_{j},\overline{e}_{k}\right\rangle\right)\) \\ SimplE (Wang et al., 2018) & \(\frac{1}{2}\left(\left\langle e_{i1},r_{j1},e_{k1}\right\rangle+\left\langle e_{i2},r_{j2},e_{k2}\right\rangle\right)\) \\ ConvE (Chen et al., 2018) & \(f\left(\text{vec}\left(f\left(\text{concat}\left(\overline{e_{i}},\overline{r_{j}}\right)*\omega\right)\right)W\right)e_{k}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Examples of triple similarity measure functions proposed by TransE, DistMult, ComplEx, SimplE, and ConvE. For a triple \((e_{i},r_{j},e_{k})\), we use \(e_{i},r_{j},e_{k}\) to represent the embeddings of its components (in SimplE, each has two parts, which we distinguish by index). \(\|\cdot\|_{p}\) denotes the \(p\)-norm; \(\left\langle\cdot,\cdot,\cdot\right\rangle\) is the generalized three-way dot product; \(\text{Re}(\cdot)\) is the real part of a complex number; \(\overline{e}_{k}\) is the complex conjugate of the complex-valued vector \(e_{k}\).

Additionally, NC-KGE proves more efficient to train, as illustrated in Figure 3, Figure 5, and Table 7. The mean reciprocal rank (MRR) on the test sets converges faster with NC-KGE than with any other method. When we substitute the node-based contrastive objective with other loss functions, such as BCELoss (binary cross-entropy), MPLoss (maximize-positive-sample softmax loss), and MRLoss (margin ranking loss), both accuracy and efficiency decrease. These findings highlight the efficiency and potential applications of NC-KGE in biomedical knowledge graphs.
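For concreteness, here is a minimal sketch of the node-based contrastive objective of Equation 10 for a single query \((e_{h},r)\), which the losses above are compared against; the argument names and toy scores are illustrative, and in practice a log-sum-exp formulation would be preferred for numerical stability.

```python
import numpy as np

def nc_kge_loss(pos_scores, neg_scores, tau=0.5, Q=1.0):
    """Equation 10 for one (e_h, r): push the K+ positive tail scores
    up and the K- negative ones down; tau is the temperature and Q
    scales the negative term."""
    pos = np.exp(np.asarray(pos_scores, dtype=float) / tau)
    neg = np.exp(np.asarray(neg_scores, dtype=float) / tau)
    return -np.log(pos.sum() / (pos.sum() + Q * neg.sum()))

# Toy usage: one positive and three negatives.
loss = nc_kge_loss([2.0], [0.1, -0.3, 0.5])
```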
\begin{table} \begin{tabular}{l c c c c|c c c c} \hline \hline **Dataset** & \multicolumn{4}{c|}{**FB15k-237**} & \multicolumn{4}{c}{**WN18RR**} \\ \hline **Task** & \multicolumn{4}{c|}{**Link Prediction**} & \multicolumn{4}{c}{**Link Prediction**} \\ \hline **Metric \(\rightarrow\) Model \(\downarrow\)** & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline TransE & 0.294 & - & - & 0.465 & 0.226 & - & - & 0.501 \\ RotatE & 0.338 & 0.241 & 0.375 & 0.533 & 0.476 & 0.428 & 0.492 & 0.571 \\ PairRE & 0.351 & 0.256 & 0.387 & 0.544 & - & - & - & - \\ \hline DistMult & 0.241 & 0.155 & 0.263 & 0.419 & 0.430 & 0.390 & 0.440 & 0.490 \\ ComplEx & 0.247 & 0.158 & 0.275 & 0.428 & 0.440 & 0.410 & 0.460 & 0.510 \\ TuckER & 0.358 & 0.266 & 0.394 & 0.544 & 0.470 & 0.443 & 0.482 & 0.526 \\ ConvE & 0.325 & 0.237 & 0.356 & 0.501 & 0.430 & 0.400 & 0.440 & 0.520 \\ InteractE & 0.354 & 0.263 & - & 0.535 & 0.463 & 0.430 & - & 0.528 \\ PROCRUSTES & 0.345 & 0.249 & 0.379 & 0.541 & 0.474 & 0.421 & 0.502 & 0.569 \\ \hline HyConvE\({}^{\dagger}\) & 0.339 & 0.212 & - & 0.458 & 0.461 & 0.432 & - & 0.534 \\ MEKER\({}^{\dagger}\) & 0.359 & 0.268 & 0.392 & 0.539 & 0.477 & 0.437 & 0.488 & 0.545 \\ R-GCN & 0.248 & 0.151 & - & 0.417 & - & - & - & - \\ KBGAT & 0.157 & - & - & 0.331 & 0.412 & - & - & 0.554 \\ A2N & 0.317 & 0.232 & 0.348 & 0.486 & 0.450 & 0.420 & 0.460 & 0.510 \\ SACN & 0.350 & 0.260 & 0.390 & 0.540 & 0.470 & 0.430 & 0.480 & 0.540 \\ CompGCN & 0.355 & 0.264 & 0.390 & 0.535 & 0.479 & 0.443 & 0.494 & 0.546 \\ SE-GNN & 0.365 & 0.271 & **0.399** & **0.549** & 0.484 & 0.446 & **0.509** & **0.572** \\ \hline **NC-KGE (ours)** & **0.366** & **0.273** & 0.392 & 0.542 & **0.486** & **0.447** & 0.499 & 0.556 \\ \hline \hline \end{tabular} \end{table} Table 2. Model results on the FB15k-237 and WN18RR test sets. The best results are in bold. \({}^{\dagger}\) denotes that results are from the published paper. Other results are from SE-GNN [15].

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline **Dataset** & \multicolumn{4}{c|}{**DRKG17k-21**} & \multicolumn{4}{c|}{**BioKG72k-14**} & \multicolumn{4}{c}{**PharmKG8k-28**} \\ \hline **Task** & \multicolumn{4}{c|}{**Link Prediction**} & \multicolumn{4}{c|}{**Link Prediction**} & \multicolumn{4}{c}{**Link Prediction**} \\ \hline **Metric \(\rightarrow\) Model \(\downarrow\)** & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline TransE & 0.321 & 0.035 & 0.558 & 0.744 & 0.116 & 0.026 & 0.149 & 0.276 & 0.116 & 0.038 & 0.127 & 0.269 \\ DistMult & 0.240 & 0.175 & 0.254 & 0.371 & 0.045 & 0.021 & 0.041 & 0.083 & 0.218 & 0.152 & 0.237 & 0.335 \\ ComplEx & 0.099 & 0.036 & 0.087 & 0.227 & 0.111 & 0.073 & 0.118 & 0.174 & 0.124 & 0.064 & 0.128 & 0.244 \\ TuckER & 0.460 & 0.411 & 0.535 & 0.557 & 0.226 & 0.174 & 0.237 & 0.327 & 0.182 & 0.103 & 0.202 & 0.336 \\ HRGAT & 0.540 & 0.483 & 0.582 & 0.720 & 0.103 & 0.061 & 0.105 & 0.185 & 0.134 & 0.063 & 0.144 & 0.271 \\ SACN & 0.487 & 0.393 & 0.534 & 0.665 & 0.179 & 0.118 & 0.192 & 0.299 & 0.156 & 0.085 & 0.170 & 0.296 \\ CompGCN & 0.562 & 0.466 & 0.619 & 0.739 & 0.221 & 0.170 & 0.230 & 0.321 & 0.193 & 0.110 & 0.216 & 0.352 \\ SE-GNN & 0.575 & 0.481 & 0.631 & 0.746 & 0.237 & 0.183 & 0.248 & 0.343 & 0.206 & 0.120 & 0.232 & 0.374 \\ \hline **NC-KGE (ours)** & **0.590** & **0.505** & **0.637** & **0.747** & **0.240** & **0.185** & **0.256** & **0.344** & **0.228** & **0.145** & **0.252** & **0.390** \\ \hline \hline \end{tabular} \end{table} Table 3.
The results of models on the DRKG17k-21, BioKG72k-14, and PharmKG8k-28 datasets. \({}^{\dagger}\) denotes results that we reproduced using the released code. For the other results, we ran the official code.

### Discussion

**The universality of NC-KGE.** NC-KGE is a straightforward yet effective method that integrates contrastive learning with non-contrastive knowledge graph embedding (KGE) techniques to enhance performance on biomedical knowledge graphs. In our study, we have integrated NC-KGE with existing models like CompGCN and SE-GNN, using PharmKG8k-28 as a representative example. The evaluation metrics for both CompGCN and SE-GNN demonstrated improvements, as outlined in Table 6. Furthermore, the evolution of the mean reciprocal rank (MRR) on the test set is depicted in Figure 4. NC-KGE not only enhances the accuracy of the models but also accelerates the convergence of CompGCN and SE-GNN, suggesting a general improvement resulting from node-based contrastive learning.

Figure 3. Performance of the models on PharmKG8k-28.

Figure 4. Performance of the baseline models on PharmKG8k-28.

**The performance of NC-KGE with different similarity measure functions.** In this paper, we consider the effect of different similarity measure functions on NC-KGE. As shown in Table 9 of Appendix A, we consider three common classes of metric functions: TransE, based on translation; DistMult, SimplE, and ComplEx, based on tensor decomposition; and ConvE, with learnable parameters based on convolutional networks. Among the five similarity measure functions, NC-KGE combined with ConvE has the best performance.

**The performance of NC-KGE under various temperature coefficients.** In this section, we discuss the temperature hyperparameter of NC-KGE. Previous experience suggests that a smaller temperature coefficient makes the model tend to distinguish harder negative samples, and the learned embeddings will be smoother. However, this may also lead the model to pay too much attention to difficult negative samples at the expense of common negative samples, which is not conducive to learning good embeddings. As shown in Table 9 of Appendix A, we set the temperature coefficient to 0.1, 0.2, 0.3, 0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.3, 1.5, or dynamic adjustment. Among them, the dynamically adjusted temperature coefficient achieves the best performance, while for fixed values the performance of the model does not change significantly.

**The performance of NC-KGE with different numbers of negative samples.** The number of negative samples is always an important topic in self-supervised and unsupervised learning. In general, more negative samples expose the model to more diverse data points, which improves performance. However, more negative samples also mean a higher sampling cost, so a trade-off between the two is required. Here, we show the performance of NC-KGE with different numbers of negative samples. As shown in Table 9 of Appendix A, NC-KGE achieves decent performance when the number of negative samples for contrastive learning is 1000, and it performs best when all negatives are used.

## 6. Conclusion

In this study, we introduced a node-based contrastive knowledge graph embedding method called NC-KGE for relation prediction in biomedical knowledge graphs.
NC-KGE is a simple and versatile approach that can be seamlessly integrated into existing KGE methods to speed up training convergence and improve overall performance. This distinctive feature allows NC-KGE to fully leverage advances in state-of-the-art models. In future work, we will expand the application of NC-KGE to diverse downstream tasks, including but not limited to knowledge-guided molecular property prediction and the prediction of higher-order biomedical n-ary relationships. By leveraging semantically rich pre-trained node and relation embeddings, we aim to maximize the potential of NC-KGE in these domains.

###### Acknowledgements.

Thanks to the readers.
2302.07655
Fault Injection in Native Logic-in-Memory Computation on Neuromorphic Hardware
Logic-in-memory (LIM) describes the execution of logic gates within memristive crossbar structures, promising to improve performance and energy efficiency. Utilizing only binary values, LIM particularly excels in accelerating binary neural networks, shifting it into the focus of edge applications. Considering its potential, the impact of faults on BNNs accelerated with LIM still lacks investigation. In this paper, we propose faulty logic-in-memory (FLIM), a fault injection platform capable of executing full-fledged BNNs on LIM while injecting in-field faults. The results show that FLIM runs a single MNIST picture 66754x faster than the state of the art while offering a fine-grained fault injection methodology.
Felix Staudigl, Thorben Fetz, Rebecca Pelke, Dominik Sisejkovic, Jan Moritz Joseph, Leticia Bolzani Pöhls, Rainer Leupers
2023-02-15T13:38:57Z
http://arxiv.org/abs/2302.07655v1
# Fault Injection in Native Logic-in-Memory Computation on Neuromorphic Hardware

###### Abstract

Logic-in-memory (LIM) describes the execution of logic gates within memristive crossbar structures, promising to improve performance and energy efficiency. Utilizing only binary values, LIM particularly excels in accelerating binary neural networks, shifting it into the focus of edge applications. Considering its potential, the impact of faults on BNNs accelerated with LIM still lacks investigation. In this paper, we propose faulty logic-in-memory (FLIM), a fault injection platform capable of executing full-fledged BNNs on LIM while injecting in-field faults. The results show that FLIM runs a single MNIST picture 66754\(\times\) faster than the state of the art while offering a fine-grained fault injection methodology.

ReRAM, memristor, faults, reliability, logic-in-memory

## I Introduction

The von Neumann architecture describes a computing system consisting of two distinct components: the memory and the computing unit. The computing unit must fetch data from and push data to the memory in order to process it, giving rise to the so-called von Neumann bottleneck. This bottleneck drastically limits the performance and energy efficiency of conventional computing systems. Consequently, novel computing paradigms are being investigated to overcome this limitation [1]. Emerging non-volatile memories such as spin-transfer torque memory (STT-RAM/MRAM), phase-change random-access memory (PCRAM), and resistive random-access memory (ReRAM) provide an ideal substrate for high-density memories while also enabling the computing-in-memory (CIM) paradigm. CIM executes operations within the memory without moving data to the processing unit. Implementing these operations in an analog fashion requires expensive ADCs/DACs but accomplishes the best performance [2]. In comparison, logic-in-memory (LIM) uses binary values to perform logic operations within memory, omitting the conversion between the analog and the digital domain, while being more resilient against technology-specific non-idealities [3, 4]. Fig. 1 exemplifies a memristive crossbar array executing parallel XNOR operations.

Binary neural networks (BNNs) represent a set of machine learning models that replace the typically used full-precision weights with binary values. These networks trade lower overall accuracy for a significant performance improvement and a lower memory footprint. Due to the quantization of their internal layers, inference is dominantly computed through the XNOR operation [5]. Hence, BNNs benefit from the massive parallelization of LIM, particularly in the context of edge applications. However, the benefit of non-volatile memories for implementing these emerging applications depends on being able to guarantee reliability during their lifetime. In more detail, as observed in CMOS-based memories, these novel memories are susceptible to time-dependent deviations, causing in-field faults that affect their lifetime reliability [6, 7]. Time-dependent deviations are primarily a result of environmental variations, causing transient faults such as bit-flips, and temporal variations, causing degradation over a lifetime. Furthermore, towards the end of their life cycle, memories encounter stuck-at faults. The impact of transient faults has been thoroughly investigated for analog CIM [8]. Unfortunately, there is only limited work on their effect on LIM.
X-Fault [9] describes the most detailed end-to-end fault injection platform, injecting different traditional faults at the device level. However, this approach limits the platform's performance, dramatically lowering the feasibility of real-world models and datasets.

**Contributions:** In this paper, we propose an ultra-fast fault injection platform called FLIM, capable of simulating full-fledged BNN models. FLIM processes an MNIST data frame 66754\(\times\) faster than X-Fault while injecting different faults related to time-dependent deviations. In detail, we present the following investigations. (1) First, we develop a simulation methodology that abstracts in-field faults into a high-performance fault model. (2) Second, we introduce a notion of time within our simulator, which allows the injection of faults per layer. (3) Finally, we perform a reliability assessment considering in-field faults of BNNs using different datasets and models.

Fig. 1: Memristive crossbar array executing parallel XNOR operations.

The rest of the paper is organized as follows. Section II summarizes the state-of-the-art fault injection platforms, as well as the background related to BNNs and LIM. Section III describes the simulation methodology, including the implemented fault models. Experimental results and a detailed discussion are presented in Section IV. Section V concludes the paper.

## II Background

This section summarizes the main approaches proposed for performing reliability and security assessments, considering different types of faults that can affect these novel applications after manufacturing and during their lifetime. In addition, it describes the background of BNNs and LIM.

### _Related Work_

Fault injection platforms allow for a hands-on investigation of the impact of faults on various aspects of computing systems. For instance, these platforms are heavily used to investigate hardware security primitives [10, 11, 12]. Analyzing faults in machine learning algorithms, in particular, has become a vital field of research considering their widespread usage [13, 14, 15]. Non-idealities of emerging non-volatile memories and their impact on CIM have been thoroughly investigated [16, 17]. Chakraborty et al. [18] propose a general approach to model faults on memristive crossbars executing neural networks. The framework is capable of simulating linear and non-linear non-idealities at an architectural level. PytorX [19] presents an end-to-end neural network tool based on PyTorch. The tool adjusts the mapping and optimizes the training to overcome the effect of non-ideal crossbars, drastically limiting the impact of faults. In general, existing research has mainly focused on analog-based CIM. The only framework able to simulate LIM on memristive crossbars is X-Fault [9]. The framework offers a wide range of features, including various fault models and injection mechanisms. However, the tool simulates faults at the memristor level, limiting performance significantly. Hence, X-Fault cannot simulate larger models or datasets due to performance issues. Consequently, we propose FLIM, which closes this gap and allows for an extensive investigation of faults in LIM-based machine learning algorithms.

### _Binary Neural Networks (BNNs)_

BNNs represent a class of neural networks using aggressive quantization, drastically improving power efficiency but reducing accuracy [20]. This approach is promising for deploying deep neural networks to resource-constrained devices.
Compared to full-precision neural networks, BNNs lag behind in terms of accuracy. However, they achieve competitive performance on simple classification tasks. The open-source library Larq [21] offers an easy entry point to build and train BNNs. The library builds upon TensorFlow and provides pre-trained models. An XNOR operation within BNNs replaces the matrix-matrix multiplication of convolutions in full-precision neural networks. Thus, BNNs map directly to LIM on memristive crossbars, which positions them as a preferred application for CIM on the edge. Since BNNs still require some non-binary computation (e.g., activation and integer bit-count), only convolutional and dense layers are mapped onto memristive crossbar arrays. We follow X-Fault's conservative approach by assuming that these non-binary operations are executed in CMOS.

### _Logic-in-Memory (LIM)_

Compared to conventional CIM, LIM utilizes the memristive crossbar array in a binary fashion, omitting expensive ADCs/DACs. Due to its binary working mode, LIM trades some performance for higher error resilience. Internally, logical states (0 or 1) are represented as either high or low resistive values of the memristive cell. Logic gates are composed of multiple memristors. An operation voltage applied to the connecting word line computes the respective output based on the given inputs. Kvatinsky et al. [22] classified logic families into three categories: statefulness, proximity of computation, and flexibility. MAGIC [22] and IMPLY [23] describe two stateful logic families capable of implementing a complete set of logic operations. Within the scope of this work, we abstract the computation to the application level. Hence, we assume the underlying usage of a logic family implementing the XNOR logic gate without modeling it in detail.

## III Faulty Logic-in-Memory (FLIM)

FLIM embodies an end-to-end simulator capable of emulating the impact of faults on BNNs using binary memristive crossbar arrays. Fig. 2 depicts the internal structure of the fault injection platform, which consists of a _Fault Generator_ and a _Fault Injector_. The _Fault Generator_ constructs a set of fault vectors encoding the fault type, location, and injection rate. This tool is implemented in vanilla Python and is hence independent of the fault injection mechanism. The _Fault Injector_ extends the TensorFlow/Larq framework to dynamically inject faults into arbitrary BNN models. As depicted in Fig. 2b, the _Fault Injector_ employs the previously generated fault vectors and a defined dataset to initiate the inference procedure.

Fig. 2: Overview of the simulation methodology: (a) Noise vector generator and (b) fault injector.

**Fault masking:** FLIM implements bit-flip and stuck-at faults to investigate the impact of time-dependent deviations. In contrast to X-Fault, the proposed platform models and injects faults at the XNOR operation level, yielding enhanced simulation performance. For bit-flip and stuck-at faults, a _fault mask_ is generated, encoding the fault's location and binary representation. The bit-flip mask defines a 2-dimensional Boolean array initialized with zeros. The injection rate specifies the number of elements within the array set to 1. In addition to these randomly distributed bit-flips, entire rows/columns may also be faulty; these rows/columns are set to 1 accordingly. Furthermore, the platform supports _dynamic faults_, which occur every n-th XNOR operation [24]. To model dynamic faults, the fault mask has to be repeated over several layers. Therefore, multiple bit-flip masks are assembled and consecutively applied to the respective layers of the model during inference. Likewise, the stuck-at mask follows the same structure, initializing a 2-dimensional array with zeros and marking all faulty elements with ones. In general, mask generation happens offline, which significantly improves performance because the expensive mapping and distribution of faults is performed once and reused over the whole simulation. A minimal sketch of such mask generation is given below.
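The following Python sketch assumes a mask entry of 1 (True) marks a faulty XNOR cell; the function name and arguments are illustrative, not FLIM's actual API.

```python
import numpy as np

def make_fault_mask(shape, injection_rate, faulty_rows=(), faulty_cols=(),
                    seed=None):
    """2-D Boolean mask for one crossbar: a fraction `injection_rate` of
    randomly chosen cells is marked faulty, plus whole rows/columns."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    n_faulty = int(round(injection_rate * mask.size))
    mask.flat[rng.choice(mask.size, size=n_faulty, replace=False)] = True
    for r in faulty_rows:
        mask[r, :] = True
    for c in faulty_cols:
        mask[:, c] = True
    return mask

# Bit-flip and stuck-at masks share this layout; dynamic faults can be
# modeled as a list of masks applied to consecutive layers in turn.
layer_masks = [make_fault_mask((40, 10), 0.01, seed=i) for i in range(3)]
```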
**Fault mapping:** In the next step, the generated masks are assigned to specific layers within the BNN model. Therefore, the _Fault Generator_ has to be provided with the dimensions and the number of crossbars used during the simulation. First, the mapping tool calculates the number of parallel XNOR operations based on the crossbars. Considering the implementations of MAGIC [22] or IMPLY [23], four memristors are required to facilitate one XNOR operation. Second, the tool extracts the total number of required XNOR operations from the model. Within a BNN, only the 2-dimensional convolution (conv2D) layers and fully binarized dense layers dominantly use the XNOR operation. Consequently, these layers are mapped onto and accelerated by memristive crossbar arrays, while all remaining layers are executed on conventional CMOS. Hence, the mapping tool extracts the dimensions of these layers and assigns the previously generated fault masks.

**Fault vector extraction:** Finally, the required fault vectors are extracted from the virtual crossbar representation. The 2-dimensional arrays are flattened to one dimension. Furthermore, the vectors are stored in a binary file annotated with meta-information about the assigned layer and mask type. The binary file is independent of the dataset and reusable for a myriad of experiments.

**Fault Injector:** The _Fault Injector_ represents the centerpiece of the FLIM platform. The tool is deeply integrated with the Larq and TensorFlow frameworks to aim for maximum performance while achieving granular fault injection. Fig. 3 depicts the internals of the injection mechanism.

Fig. 3: Internal structure of FLIM consisting of the Fault Generator and the Fault Injector module.

The Larq library generally extends the Keras framework to facilitate BNNs [21]. Larq defines custom quantized layers as an extension of Keras layers. We extended this layer base class by adding an instance of the Fault Injector. To trigger the injection mechanism during inference, the original convolution method has been overwritten. The faulty convolution method proceeds as follows. First, the standard convolution function calculates the feature map. The feature map does not yet take into account any faults and, therefore, represents the correct result of the computation. Second, before both fault masks are applied to the feature map, the vectors must be adjusted in length depending on the batch size and the input dimension. Finally, the fault masks are applied by performing another XNOR operation, as sketched below.
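In the ±1 encoding commonly used for BNN activations, an XNOR reduces to an elementwise product, so the mask-application step can be sketched as a post-processing of the feature map; the tiling mimics the length adjustment described above, and all names are illustrative rather than FLIM's actual implementation.

```python
import numpy as np

def apply_faults(feature_map, flip_mask, stuck_mask=None, stuck_value=1):
    """Inject faults into a binarized feature map with values in {-1, +1}.
    flip_mask/stuck_mask are Boolean crossbar masks (True = faulty cell)."""
    flat = feature_map.reshape(-1).copy()
    reps = -(-flat.size // flip_mask.size)          # ceil division
    # Repeat the crossbar mask to match batch size and input dimension.
    flips = np.tile(flip_mask.reshape(-1), reps)[:flat.size]
    flat[flips] *= -1                               # bit-flip = XNOR with -1
    if stuck_mask is not None:
        stuck = np.tile(stuck_mask.reshape(-1), reps)[:flat.size]
        flat[stuck] = stuck_value                   # stuck-at overrides
    return flat.reshape(feature_map.shape)
```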
## IV Results and Discussion

This section discusses the simulation results. Table I shows the system specifications used to conduct all experiments. We verified the functionality of FLIM in two distinct experiments. The fault injector extends the TensorFlow/Larq framework; hence, we compared the inference results of FLIM (without injecting any faults) with the results of vanilla TensorFlow/Larq. The fault distribution and mapping have been verified against X-Fault.

\begin{table} \begin{tabular}{l|l} **Hardware** \\ \hline CPU & AMD Ryzen 7 5800X \\ \hline RAM & DDR4 2666MHz 64GB \\ \hline GPU & NVIDIA GeForce RTX 3080 12GB \\ \hline \hline **Software** \\ \hline GPU Driver & 470.129.06 \\ \hline CUDA & 11.4 \\ \hline cuDNN & 8.1.0.77-1 \\ \hline TensorFlow & 2.8.0 \\ \hline Larq & 0.12.0 (modified) \\ \hline \end{tabular} \end{table} TABLE I: Adopted experimental setup.

Our investigations exhibit the impact of faults on BNNs from various perspectives. First, the impact on individual layers is studied. Second, we compare the performance of our simulator to X-Fault and vanilla TensorFlow. Finally, we thoroughly explore the resilience of various models to bit-flip and stuck-at faults.

**Layer resilience:** This experiment investigates the resilience of individual layers of a BNN. We use a binary version of LeNet [25] trained on the MNIST dataset. LeNet represents a convolutional neural network which, in this experiment, consists of three convolutional layers and two dense layers. The former extract the visual features from the input picture; the latter are responsible for the feature classification. The MNIST dataset is a set of 28\(\times\)28 greyscale images depicting handwritten digits [26]. After training, the model achieves an accuracy of \(97.62\%\) without any injected faults. Throughout the experiment, each layer is mapped onto a single crossbar while sweeping the injection rate of bit-flips, dynamic faults, and stuck-at faults. To mitigate the impact of randomly placing the faults on the crossbar, we performed every experiment a hundred times, reinitializing the random generator with a new seed value each time. Fig. 4(a-b) illustrates that stuck-at faults impact the model more severely than bit-flips, independent of the layer. _While stuck-at faults influence almost all layers equally strongly, bit-flip faults affect the accuracy depending on the layer depth. Moreover, convolutional layers appear more susceptible to bit-flips than dense layers._ The impact of dynamic bit-flip faults is shown in Fig. 4(c), where the x-axis represents the number of XNOR operations required to sensitize the fault. The results show that the BNN model's accuracy stabilizes around its original value at around four consecutive XNOR operations. Next, we investigate the impact of faulty rows/columns on the model's accuracy. This experiment instantiates a 40\(\times\)10 crossbar for each layer. Fig. 4(d-e) portrays the results of this experiment. Once again, the layer's depth directly correlates with the impact on accuracy. In particular, the last dense layer declines almost linearly. In general, the impact of faulty columns is stronger than that of faulty rows. Considering the column-wise parallelism of XNOR operations, this result appears plausible.

Fig. 4: Simulation results: Impact of (a) bit-flips, (b) stuck-at, (c) dynamic faults, (d) faulty columns, and (e) faulty rows on different layers. (f) Performance benchmark.

**Performance evaluation:** We evaluate the performance of our fault injection platform by executing inference with the previous LeNet model on the complete MNIST test dataset of 10,000 images. While FLIM and the vanilla Larq implementation perform fifty consecutive runs over the complete dataset, we estimate the total run time of X-Fault based on five images. During the inference, the fault injection mechanism maps the respective operations but does not inject actual faults.
Thus, the vanilla Larq implementation serves as a lower bound on the total simulation time. Fig. 4(f) shows the substantial performance improvement of our work. _FLIM classifies the 10,000 images 29375\(\times\) faster than X-Fault._ Due to the deep integration with Larq and TensorFlow, FLIM takes advantage of GPUs, _doubling the performance to a speed-up of 66754\(\times\) compared to X-Fault._ Conclusively, FLIM abstracts the fault model to the XNOR operation level and hence trades simulation accuracy for a noteworthy performance improvement.

**Model resilience:** The last experiment investigates the resilience of various models (see Table II). We pre-trained the models on the ImageNet [27] dataset and injected bit-flip, dynamic, and stuck-at faults. Once again, we ran every experiment a hundred times to mitigate the impact of the randomly placed faults. Fig. 5(a-c) displays the simulation results. As expected, the obtained results indicate that stuck-at faults cause a stronger impact on accuracy than bit-flips. _In other words, faults related to time-dependent deviations can affect the reliability of emerging applications differently. Depending on the injection rate, transient faults compromise the reliability of such applications at different levels. In addition, the reliability of emerging applications is more affected by permanent faults._ BiRealNet and XNOR-Net represent a particular case because their convolutions are not strictly binarized. BiRealNet utilizes real-valued activation functions through identity shortcuts [28]. XNOR-Net's weights, on the other hand, are multiplied by an individual gain based on the magnitude of the channel. Still, FLIM is capable of simulating both models by slightly adjusting the bit-flip mask.

Fig. 5: Simulation results of (a) bit-flips, (b) stuck-at, and (c) dynamic faults on different models.

## V Conclusion

This work proposed a fault injection platform, called **FLIM**, able to evaluate the impact of in-field faults related to time-dependent deviations on emerging applications. The platform injects bit-flips (static and dynamic), related to environmental variations, and stuck-at faults, associated with temporal variations. We investigated the impact of these faults on individual layers and on various models. Furthermore, FLIM outperforms the current state-of-the-art platform by four orders of magnitude in terms of performance. The obtained results show that a certain level of in-field faults can be tolerated and that bit-flips, even if multiple, compromise the reliability of emerging applications less than stuck-at faults. These results also demonstrate that, to guarantee the development of highly reliable emerging applications, it is mandatory to adopt not only fault-tolerant approaches but also strategies able to monitor and/or mitigate an application's degradation during its lifetime. In the future, we want to extend the capabilities of FLIM to inject faults during training.
2308.12565
AMUSE-antlia I: Nuclear X-ray properties of early-type galaxies in a dynamically young galaxy cluster
To understand the formation and growth of supermassive black holes (SMBHs) and their co-evolution with host galaxies, it is essential to know the impact of environment on the activity of active galactic nuclei (AGN). We present new Chandra X-ray observations of nuclear emission from member galaxies in the Antlia cluster, the nearest non-cool core and the nearest merging galaxy cluster, residing at D = 35.2 Mpc. Its inner region, centered on two dominant galaxies NGC 3268 and NGC 3258, has been mapped with three deep Chandra ACIS-I pointings. Nuclear X-ray sources are detected in 7/84 (8.3%) early-type galaxies (ETG) and 2/8 (25%) late-type galaxies with a median detection limit of 8x10^38 erg/s. All nuclear X-ray sources but one have a corresponding radio continuum source detected by MeerKAT at the L-band. Nuclear X-ray sources detected in early-type galaxies are considered as the genuine X-ray counterpart of low-luminosity AGN. When restricted to a detection limit of logLx(erg/s) > 38.9 and a stellar mass of 10 < log Ms(Msun) <11.6, 6/11 (54.5%) ETG are found to contain an X-ray AGN in Antlia, exceeding the AGN occupation fraction of 7/39 (18.0%) and 2/12 (16.7%) in the more relaxed, cool core clusters, Virgo and Fornax, respectively, and rivaling that of the AMUSE-Field ETG of 27/49 (55.1%). Furthermore, more than half of the X-ray AGN in Antlia are hosted by its younger subcluster, centered on NGC 3258. We believe that this is because SMBH activity is enhanced in a dynamically young cluster compared to relatively relaxed clusters.
Zhensong Hu, Yuanyuan Su, Zhiyuan Li, Kelley M. Hess, Ralph P. Kraft, William R. Forman, Paul E. J. Nulsen, Sarrvesh S. Sridhar, Andra Stroe, Junhyun Baek, Aeree Chung, Dirk Grupe, Hao Chen, Jimmy A. Irwin, Christine Jones, Scott W. Randall, Elke Roediger
2023-08-24T05:12:33Z
http://arxiv.org/abs/2308.12565v1
AMUSE-Antlia I: Nuclear X-ray Properties of Early-Type Galaxies in a Dynamically Young Galaxy Cluster ###### Abstract To understand the formation and growth of supermassive black holes (SMBHs) and their co-evolution with host galaxies, it is essential to know the impact of environment on the activity of active galactic nuclei (AGN). We present new _Chandra_ X-ray observations of nuclear emission from member galaxies in the Antlia cluster, the nearest non-cool core and the nearest merging galaxy cluster, residing at \(D=35.2\) Mpc. Its inner region, centered on two dominant galaxies NGC 3268 and NGC 3258, has been mapped with three deep _Chandra_ ACIS-I pointings. Nuclear X-ray sources are detected in 7/84 (8.3%) early-type galaxies (ETG) and 2/8 (25%) late-type galaxies with a median detection limit of \(8\times 10^{38}\) erg s\({}^{-1}\). All nuclear X-ray sources but one have a corresponding radio continuum source detected by MeerKAT at the L-band. Nuclear X-ray sources detected in early-type galaxies are considered as the genuine X-ray counterpart of low-luminosity AGN. When restricted to a detection limit of log(\(L_{\rm X}\)/erg s\({}^{-1}\)) \(\geq 38.9\) and a stellar mass of \(10\leq\log(M_{\star}/{\rm M}_{\odot})<11.6\), 6 of 11 ETG are found to contain an X-ray AGN in Antlia, exceeding the AGN occupation fraction of 7/39 (18.0%) and 2/12 (16.7%) in the more relaxed, cool core clusters, Virgo and Fornax, respectively, and rivaling that of the AMUSE-Field ETG of 27/49 (55.1%). Furthermore, more than half of the X-ray AGN in Antlia are hosted by its younger subcluster, centered on NGC 3258. We believe that this is because SMBH activity is enhanced in a dynamically young cluster compared to relatively relaxed clusters. black hole physics - galaxies: clusters: individual (Antlia) ## 1 Introduction Nuclear X-ray emission provides an unambiguous diagnostic of the activity of supermassive black holes (SMBH). Its correlation with black hole mass and its host galaxy is fundamental to understanding the relation between SMBH and the host galaxy properties, such as the \(M_{\rm BH}\)-\(\sigma\) and \(M_{\rm BH}\)-\(L_{\rm X}\) relations (Kormendy & Ho, 2013; Gaspari et al., 2019). The sub-arcsec spatial resolution of _Chandra_ has made it possible to detect nuclear X-ray emission down to \(\sim 10^{38}\) erg s\({}^{-1}\), enabling the study of active galactic nuclei (AGN) in a large number of low mass quiescent galaxies. The AGN Multi-wavelength Survey of Early-Type Galaxies in the Virgo Cluster (AMUSE-Virgo) has studied the nuclear X-ray emission of 100 elliptical, lenticular, and dwarf elliptical galaxies in Virgo, a well known cool core cluster with a sharply increasing X-ray surface brightness profile towards its center. It was found that \(24\%-34\%\) of the Virgo early-type galaxies (ETG) host X-ray AGN. Also, it provided evidence for down-sizing: black holes with lower mass radiate closer to their Eddington limits than their higher mass counterparts (Gallo et al., 2008, 2010). A study of another cool core cluster, Fornax (Lee et al., 2019), reports a level of nuclear activity similar to Virgo, with \(27\%\pm 10\%\) ETG hosting AGN. The environment plays a key role in galaxy evolution. Relaxed cool core clusters are dominated by red, elliptical galaxies, due to a number of quenching mechanisms, including ram pressure stripping (e.g. Gunn & Gott, 1972; Dressler, 1980). 
The dependence of the nuclear X-ray activity on the large-scale environment can provide insight into the mechanisms that govern the feeding and feedback of SMBH. AMUSE-Field is a _Chandra_ large program targeting 103 nearby field and group ETG for a comparison with AMUSE-Virgo. In this paper, "the Field" stands for the AMUSE-Field, which refers to heterogeneous environments from galaxy groups to isolated fields. Miller et al. (2012) report that the AMUSE-Field sample displays a higher X-ray AGN occupation fraction of \(45\%\pm 7\%\) and a higher nuclear X-ray luminosity at a given black hole mass than the Virgo sample. Lee et al. (2019) have further confirmed that AMUSE-Field is also more active than Fornax. Environments like the Virgo and Fornax clusters may have suppressed black hole accretion and quenched star formation by cutting off the fuel supply via ram pressure stripping (Ricarte et al., 2020). However, the intrinsic properties of various galaxy clusters can be different. For example, member galaxies in dynamically young, merging clusters, as well as in high-redshift proto-clusters, often have enhanced star formation and contain abundant cold gas, comparable to field galaxies (Stroe et al., 2017; Cava et al., 2017; Noble et al., 2017). Comparing black hole activity across a variety of nearby clusters can cast light on its environmental dependence. The Antlia cluster (Abell S0636) is the third nearest cluster after Virgo and Fornax, at a distance of 35.2 Mpc (\(1^{\prime\prime}=170\) pc) (Dirsch et al., 2003). It is a Bautz-Morgan type III cluster. Antlia has a \(R_{200}\)1 of 887 kpc and \(M_{200}\) of \(7.9\times 10^{13}\) M\({}_{\odot}\) (Wong et al., 2016). Its size and halo mass are similar to the Virgo cluster with \(R_{200}=974.1\pm 5.7\) kpc and \(M_{200}=1.05\pm 0.02\times 10^{14}\) M\({}_{\odot}\) (Simionescu et al., 2017) and the Fornax cluster with \(R_{200}\sim 700\) kpc and \(M_{200}\sim 7\times 10^{13}\) M\({}_{\odot}\) (Drinkwater et al., 2001). Meanwhile, the global temperature of Antlia is 2 keV (Wong et al., 2016), which falls between that of Virgo of 2.3 keV (e.g. Urban et al., 2011) and Fornax of \(<1.5\) keV (e.g. Su et al., 2017; Jones et al., 1997). Antlia is likely the dynamically youngest of these three galaxy clusters. The main cluster of Antlia, centered on the brightest cluster galaxy NGC 3268, is in the process of merging with a subcluster associated with the bright elliptical galaxy NGC 3258, which is \(22^{\prime}\) (225 kpc) to the southwest of NGC 3268. _ASCA_, _Suzaku_, and _XMM-Newton_ observations have revealed that its ICM displays relatively uniform surface brightness and temperature distributions at the cluster center, in contrast to typical cool core clusters with a sharp surface brightness peak and a steep temperature gradient (Nakazawa et al., 2000; Wong et al., 2016). The galaxy density of Antlia is 1.7 times higher than that of Virgo and 1.4 times higher than that of Fornax (Ferguson & Sandage, 1990). Also, the ratio of the velocity dispersion of infalling galaxies to that of virialized galaxies, \(\sigma_{\rm infall}/\sigma_{\rm vir}\), in Antlia is 2.31 (Hess et al., 2015), higher than Virgo's 1.64 and the predicted virialized population ratio of 1.4 (Conselice et al., 2001), suggesting that the Antlia cluster is less virialized.
CO (2-1) and H i observations reveal that many of Antlia's member galaxies, both star forming and passive, contain large reservoirs of molecular and atomic gas, unlike galaxies in more relaxed clusters (Hess et al., 2015; Cairns et al., 2019).

Footnote 1: \(R_{\Delta}\) is the radius within which the enclosed matter density is \(\Delta\) times the critical density of the universe. \(R_{200}\) is conventionally taken as an approximation of the virial radius of a cluster.

To study the nuclear X-ray activity in Antlia, we observed the central region with three _Chandra_ ACIS-I pointings (PI: Y. Su), with an exposure of \(\sim 70\) ks each and 223.9 ks in total. As shown in Figure 1, the three AMUSE-Antlia fields are centered on NGC 3268, NGC 3258, and the region southeast of NGC 3268. The observations cover 92 galaxies with stellar masses \(M_{\star}\) in the range of \(10^{7}\)-\(10^{11}\) M\({}_{\odot}\), 84 of which are ETG while 8 are LTG. This paper is structured as follows. Data preparation and methods are presented in Section 2. The results of the nuclear X-ray source detection are shown in Section 3. Section 4 compares the AGN occupation fraction and X-ray luminosity function (XLF) of ETG in Antlia with those of the AMUSE-Virgo, AMUSE-Field, and Fornax ETG AGN. Our findings are discussed in Section 5 and summarized in Section 6.

## 2 Data Preparation

We reprocessed the _Chandra_ level-1 data and the calibration files according to the standard procedure of CIAO v4.14 (Fruscione et al., 2006). To calibrate the astrometry of each field, we chose the longest-exposed image as the reference image and matched the centroids of commonly detected point sources with the CIAO tool reproject_aspect. We checked the lightcurves for flares. Counts maps, exposure maps, and point spread function (PSF) maps were generated for each observation in three energy bands: 0.5-2 (\(S\)-band), 2-8 (\(H\)-band), and 0.5-8 (\(F\)-band) keV. The maps of multiple observations of the same field were then merged. We followed the source detection procedures described in Hou et al. (2017) and Jin et al. (2019). The original X-ray point source list was generated by the CIAO tool wavdetect. To correct the source centroids, we iterated over the source position within the 90% PSF. The positional uncertainty (PU) at the 68% confidence level was calculated according to the empirical relation among PU, off-axis angle (OAA, in arcminutes), and source counts (\(C\)) (Kim et al., 2007, Equation 14),

\[\log\mathrm{PU}=\begin{cases}0.114\,\mathrm{OAA}-0.460\log C-0.240,&0.000<\log C\leq 2.123,\\ 0.103\,\mathrm{OAA}-0.195\log C-0.803,&2.123<\log C\leq 3.300.\end{cases} \tag{1}\]
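For reference, Equation 1 translates into a few lines of Python; the function name and example values below are illustrative.

```python
import numpy as np

def positional_uncertainty(oaa_arcmin, counts):
    """68% positional uncertainty in arcsec from Equation 1
    (Kim et al. 2007, Equation 14)."""
    logC = np.log10(counts)
    if 0.0 < logC <= 2.123:
        log_pu = 0.114 * oaa_arcmin - 0.460 * logC - 0.240
    elif 2.123 < logC <= 3.300:
        log_pu = 0.103 * oaa_arcmin - 0.195 * logC - 0.803
    else:
        raise ValueError("relation calibrated for 0 < log C <= 3.3")
    return 10.0 ** log_pu

# e.g. a 100-count source 4 arcmin off-axis:
pu = positional_uncertainty(4.0, 100)   # ~0.2 arcsec
```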
The AMUSE-Antlia footprint contains 92 member galaxies, according to the member galaxy catalog based on optical observations with the 4-m Blanco telescope at CTIO (Calderon et al., 2020). We calculated the stellar masses using the mass-luminosity relation of Bell et al. (2003). We measured the \(g-r\) color index and \(r\) band luminosity \(L_{r}\) of each galaxy from the CTIO image. The stellar mass was then determined from \(\log(M/L_{r})=-0.306+1.097(g-r)\). We compared the galaxy positions with those in the 2MASS Extended Source Catalog (XSC) (Skrutskie et al., 2006) and adopted the XSC coordinates if the galaxy fell within 1\({}^{\prime\prime}\) of an XSC record. We then identified AGN by looking for any X-ray point source located within the minimum of 1\({}^{\prime\prime}\) and 3 PU of the optical nucleus of a member galaxy. The Digital Sky Survey (DSS) \(J\) band image of the Antlia central region is shown in Figure 1, where 10\({}^{\prime}\simeq\) 102 kpc. The fields of view (FoV) of the _Chandra_ ACIS-I observations presented in this study are highlighted with white solid boxes. The positions of the detected nuclear X-ray point sources are marked as green circles. In addition, we compared the nuclear X-ray sources with the corresponding MeerKAT radio continuum image (see Figure 2). The MeerKAT observations (SCI-20210212-KH-01; PI: K. Hess) were carried out in the L band, spanning the 856-1712 MHz frequency range and covering a region out to 1.4 times the virial radius of the cluster. A forthcoming publication will present a detailed analysis of the MeerKAT spectropolarimetry data. The radio continuum image has an rms noise of about 6.5 \(\mu\)Jy/beam and an angular resolution of 7\({}^{\prime\prime}\), with an astrometric uncertainty of 1.5\({}^{\prime\prime}\).

## 3 Results

### Nuclear X-ray Emission

We find 9 point-like X-ray sources located at the optical nuclei of member galaxies. As discussed in Gallo et al. (2008), these sources are generally considered AGN, although some of them could be X-ray binaries (XRB). 7 of the 9 host galaxies are ETGs, while the other 2 are LTGs - a blue compact dwarf (BCD), Antlia 98, and a spiral galaxy, Antlia 88. The galaxy and nuclear X-ray source properties are summarized in Table 1. The CTIO \(r\) band, _Chandra_ X-ray, and MeerKAT radio images of each galaxy in which a nuclear X-ray source is detected are shown in Figure 2. The estimated black hole masses range from \(3\times 10^{5}\) M\({}_{\odot}\) to \(5\times 10^{7}\) M\({}_{\odot}\), with an uncertainty of 0.3-0.4 dex, according to the fundamental plane (Appendix A). No nuclear X-ray source is detected in NGC 3258, the dominant elliptical galaxy of the southern subgroup that is merging with the northern group centered on NGC 3268. The X-ray emission at NGC 3258 is quite extended, which can lead to spurious detections2. We fit the spectrum of the central region with an absorbed power-law model and obtained a power-law index of about 4, roughly twice the typical spectral index of an AGN3. This source is therefore not considered an AGN, given its extended shape and soft spectrum.

Footnote 2: At the first stage of the X-ray source detection process, two X-ray point sources were “detected” close to the center of NGC 3258: one in the \(S\)-band, offset by 0.7′′ from the center, and the other in the \(H\) and \(F\)-bands, offset by 1.66′′. These two sources are more likely to be false detections caused by the diffuse emission.

Footnote 3: The spectra can also be fitted with an absorbed thermal plasma model (apec in XSPEC), with a galactic absorption of \(N_{\rm H}\approx 2.8\times 10^{21}\) cm\({}^{-2}\) and a plasma temperature of 0.8 keV.
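As a worked example of the stellar mass estimate described above, the sketch below applies the quoted Bell et al. (2003) relation; the function name and the example galaxy are illustrative assumptions, not measurements from the catalog.

```python
def log_stellar_mass(g_minus_r, log_Lr_solar):
    """log10(M*/Msun) from the color--M/L relation quoted above:
    log(M/L_r) = -0.306 + 1.097 (g - r), with L_r in solar units."""
    return log_Lr_solar + (-0.306 + 1.097 * g_minus_r)

# e.g. a red early-type galaxy with g - r = 0.75 and L_r = 1e10 Lsun:
print(log_stellar_mass(0.75, 10.0))  # ~10.5
```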
The AGN detection in ETGs can be contaminated by low mass X-ray binaries (LMXBs). The distribution of LMXBs follows the stellar mass distribution, which can be traced by a Sersic profile (Sersic, 1968). The projected stellar mass inside the 1\({}^{\prime\prime}\) nuclear source matching radius accounts for \(\sim 3\)% of the total, assuming a mean effective radius of \(\sim 9^{\prime\prime}\). We adopt the LMXB X-ray luminosity function (Zhang et al., 2012) to estimate the number of LMXBs above the detection limit. We expect \(\sim 0.4\) nuclear X-ray sources to be LMXBs, which is small compared to the 7 nuclear X-ray sources detected in ETGs in AMUSE-Antlia.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
FS90 & NGC & RA & DEC & Morph. & \(\log L_{\rm X}\) & \(\log M_{\star}\) \\
 & & (deg) & (deg) & & (erg s\({}^{-1}\)) & (M\({}_{\odot}\)) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
82 & & 157.09246 & -35.49791 & E & 38.86\({}^{+0.20}_{-0.15}\) & 9.2 \\
88 & & 157.11674 & -35.51491 & L & 41.53\({}^{+0.01}_{-0.01}\) & 9.3 \\
98 & & 157.14249 & -35.46079 & L & 39.05\({}^{+0.16}_{-0.12}\) & 8.7 \\
105 & 3257 & 157.19617 & -35.65797 & E & 38.92\({}^{+0.12}_{-0.10}\) & 10.4 \\
125 & 3260 & 157.27638 & -35.59513 & E & 39.03\({}^{+0.11}_{-0.10}\) & 10.6 \\
168 & 3267 & 157.45242 & -35.32195 & E & 39.07\({}^{+0.10}_{-0.09}\) & 10.5 \\
184 & 3269 & 157.48765 & -35.22433 & E & 39.14\({}^{+0.11}_{-0.11}\) & 10.8 \\
185 & 3268 & 157.50272 & -35.32545 & E & 40.75\({}^{+0.10}_{-0.02}\) & 11.6 \\
226 & 3273 & 157.62125 & -35.61017 & E & 39.27\({}^{+0.08}_{-0.07}\) & 10.8 \\
\hline
\end{tabular}
Note. – (1) Galaxy name according to Ferguson & Sandage (1990). (2) NGC name of the galaxy. (3–4) Right ascension and declination at equinox J2000. (5) Morphological type of the galaxy; “E” and “L” stand for ETG and LTG. (6) X-ray luminosity in the 0.5–8 keV energy band. (7) Stellar mass of the galaxy, derived from the \(g-r\) mass-luminosity relation, with an uncertainty of \(\sim 0.2\) dex.
\end{table}
Table 1: Nuclear X-ray source and host galaxy properties

Figure 1: DSS \(J\) band image of the central region of the Antlia cluster, \(10\arcmin\simeq 102\) kpc. The fields of the _Chandra_ observations presented in this study are marked in white. Green circles indicate the detected nuclear X-ray sources. NGC 3268 and NGC 3258 are marked with a red “\(\times\)” and a blue “\(\star\)”, respectively.

### The nuclear X-ray source in a BCD

Blue compact dwarf galaxies resemble galaxies in the infant universe, a critical stage for the formation of black holes and the establishment of the M-\(\sigma\) relation. An X-ray source and a radio source are found to coincide with one BCD, Antlia 98. The membership of this galaxy in the Antlia cluster is confirmed by Smith Castelli et al. (2008). It is classified as a BCD based on optical and H\(\alpha\) observations (Ferguson & Sandage, 1990; Smith Castelli et al., 2008; Vaduvescu et al., 2014). Vaduvescu et al. (2014) find two non-central star forming knots in Antlia 98. The spatial relationship between the X-ray source and the two star forming knots is unclear, as this source is close to the edge of the FoV, where the 50% PSF is sizable, with a positional uncertainty of 4\({}^{\prime\prime}\).
The directly measured photon flux is \(F_{0.5\mbox{--}8\,\mathrm{keV}}=(2.2\pm 0.7)\times 10^{-6}\ \mathrm{ph\ cm^{-2}\ s^{-1}}\), which corresponds to an X-ray luminosity of \(L_{0.5\mbox{--}8\,\mathrm{keV}}=(1.1\pm 0.3)\times 10^{39}\ \mathrm{erg\ s^{-1}}\) and a hard-band luminosity of \(L_{2\mbox{--}10\,\mathrm{keV}}=(7.7\pm 2.3)\times 10^{38}\ \mathrm{erg\ s^{-1}}\), assuming a power-law photon index of 1.8 and a Galactic absorption of \(N_{\rm H}=1\times 10^{21}\ \mathrm{cm^{-2}}\). The 1.28 GHz flux measured from a MeerKAT observation is \((8.4\pm 1.5)\times 10^{-5}\ \mathrm{Jy}\), corresponding to a star formation rate of \(\mathrm{SFR}=(7.8\pm 1.3)\times 10^{-3}\ \mathrm{M_{\odot}\ yr^{-1}}\) (Kennicutt & Evans, 2012). Considering the galaxy stellar mass of \(M_{\star}=(5.2\pm 1.2)\times 10^{8}\ \mathrm{M_{\odot}}\), the specific star formation rate is \(\mathrm{SFR}/M_{\star}=(1.5\pm 0.2)\times 10^{-11}\ \mathrm{yr^{-1}}\), which is relatively small compared to other BCDs (Hunter et al., 2010).

We investigate whether this source is an XRB or an SMBH. Lehmer et al. (2010) notice a tight correlation among the 2-10 keV hard band X-ray luminosity of XRBs, stellar mass, and SFR, namely \(L_{2\mbox{--}10\,\mathrm{keV}}=\alpha M_{\star}+\beta\,\mathrm{SFR}\), where \(\alpha=(9.05\pm 0.37)\times 10^{28}\ \mathrm{erg\ s^{-1}\ M_{\odot}^{-1}}\) and \(\beta=(1.62\pm 0.22)\times 10^{39}\ \mathrm{erg\ s^{-1}\ M_{\odot}^{-1}\ yr}\). For Antlia 98, we obtain a 2-10 keV X-ray luminosity of \((6.0\pm 0.3)\times 10^{37}\ \mathrm{erg\ s^{-1}}\) for the expected XRB-dominated X-ray emission. This falls short of the detected X-ray luminosity but is consistent within the 3\(\sigma\) uncertainty. The ratio of the X-ray and radio intensities can also cast light on its origin. We follow the correlation given by Terashima & Wilson (2003),

\[R_{\rm X}=\frac{\nu L_{\nu}(5\,{\rm GHz})}{L_{\rm X}(2\mbox{--}10\,{\rm keV})}. \tag{2}\]

We derive a source 5 GHz luminosity of \((6.1\pm 1.1)\times 10^{34}\ \mathrm{erg\ s^{-1}}\) from its 1.28 GHz luminosity, based on an assumed power-law spectrum \(S\propto\nu^{-0.7}\) (Condon et al., 2002), and obtain \(\log R_{\rm X}\sim-4.1\). This source is too luminous in the radio to be a stellar-mass XRB, which typically have \(\log R_{\rm X}\leq-5.3\). Therefore, the source detected at the center of Antlia 98 is a promising AGN candidate. Future on-axis observation is required to determine its nature.
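Both diagnostics can be reproduced directly from the quoted numbers. The sketch below (variable names are ours) evaluates the Lehmer et al. (2010) relation and the Equation (2) ratio for Antlia 98:

```python
import numpy as np

# Expected XRB luminosity, L(2-10 keV) = alpha*M_star + beta*SFR,
# with the Lehmer et al. (2010) coefficients quoted above.
alpha, beta = 9.05e28, 1.62e39     # erg/s/Msun, erg/s per (Msun/yr)
m_star, sfr = 5.2e8, 7.8e-3        # Msun and Msun/yr for Antlia 98
L_xrb = alpha * m_star + beta * sfr
print(f"expected XRB L(2-10 keV) ~ {L_xrb:.1e} erg/s")   # ~6.0e37

# Radio loudness of Equation (2); nu*L_nu(5 GHz) follows from the
# 1.28 GHz luminosity via S ~ nu^-0.7, i.e. nu*L_nu ~ nu^0.3.
nuLnu_5GHz = 6.1e34                # erg/s, from the text
L_x_hard = 7.7e38                  # erg/s, 2-10 keV, from the text
print(f"log R_X ~ {np.log10(nuLnu_5GHz / L_x_hard):.1f}")  # ~ -4.1
```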
### X-ray stacking of undetected galaxies

To probe the AGN population below the detection limit, we performed a stacking analysis for member galaxies lacking X-ray detected AGN. Among the 83 candidate galaxies, we exclude NGC 3258 due to the diffuse nature of its X-ray emission. Based on the PSF size and stellar mass \(M_{\star}\), we categorize the galaxies into four groups: the small-PSF low-mass subset (SPLM), small-PSF intermediate-mass subset (SPIM), intermediate-PSF low-mass subset (IPLM), and intermediate-PSF intermediate-mass subset (IPIM). The boundary between small and intermediate 90% PSF is 4\({}^{\prime\prime}\), and that between low and intermediate mass is \(\log(M_{\star}/\mathrm{M_{\odot}})=8.5\). There are 20, 11, 30, and 21 galaxies in the SPLM, SPIM, IPLM, and IPIM subsets, respectively.

We stack the counts maps of the galaxies in each subset and extract the net counts within a nuclear region 2\({}^{\prime\prime}\) in radius, adopting an annulus with an inner radius of 4\({}^{\prime\prime}\) and an outer radius of 5\({}^{\prime\prime}\) as the background. We use the CIAO tool aprates to compute the net counts and uncertainties for each subset. No signal is detected in the low-mass subsets for either PSF group. Taking \(N_{\rm H}=1\times 10^{21}\ \mathrm{cm^{-2}}\) and \(\Gamma=1.8\), the 3\(\sigma\) upper limits on the unabsorbed 0.5-8 keV luminosity for SPLM and IPLM are \(3.3\times 10^{37}\ \mathrm{erg\ s^{-1}}\) and \(3.0\times 10^{37}\ \mathrm{erg\ s^{-1}}\), respectively. However, X-ray emission is detected for the subsets of more massive galaxies. A detection with a signal-to-noise ratio (SNR) of 3.1 is obtained for SPIM. There are 22 net counts, with 41 and 44 counts in the source and background apertures, respectively, corresponding to a photon flux of \((1.7\pm 0.6)\times 10^{-7}\ \mathrm{ph\ cm^{-2}\ s^{-1}}\) and an unabsorbed luminosity of \((8.0\pm 2.6)\times 10^{37}\ \mathrm{erg\ s^{-1}}\). For the IPIM subset, we obtain an SNR of 2.1 and an unabsorbed luminosity of \((4.0\pm 1.8)\times 10^{37}\ \mathrm{erg\ s^{-1}}\). Fainter nuclear X-ray sources are likely missed due to the limited sensitivity.

## 4 Black hole activity in ETGs

### Nuclear X-ray Luminosity Function

We compare the ETG AGN X-ray luminosity functions (XLF) of the Antlia cluster, the Virgo cluster (Gallo et al., 2010), the Fornax cluster (Lee et al., 2019), and the AMUSE-Field (Miller et al., 2012) in Figure 3. The detection limits of these surveys are different. The median and completeness sensitivities of the AMUSE-Antlia footprint are \(\log(L_{\rm X}/\mathrm{erg\ s^{-1}})=38.9\) and \(39.0\) in the 0.5-8 keV passband, the shallowest among all the samples. The AMUSE-Field, AMUSE-Virgo, and Fornax surveys are dominated by snapshots, meaning that the target galaxies are at the aimpoint. For the snapshots, the detection limit mainly depends on the effective exposure time, while in the AMUSE-Antlia fields, the detection limit also depends on the off-axis angle. The completeness sensitivities for the AMUSE-Field, AMUSE-Virgo, and Fornax snapshots are \(38.3\), \(38.6\), and \(38.7\) dex, respectively. To compare all AGN presented in those studies while taking the sensitivity differences into account, we set the median sensitivity of AMUSE-Antlia at the beginning of the second bin; thus, all the samples are comparable, except in the faintest bin.

AGN detection can be biased by the "Eddington ratio incompleteness" (Gallo et al., 2010; Miller et al., 2012). For example, Gallo et al. (2010) emphasize that the nuclear SMBH activity does not increase with the host stellar mass if the sample is Eddington complete. On the other hand, for any luminosity-limited survey, it is impossible to reach the same Eddington-scaled luminosity across an extensive range of black hole masses. It is therefore more likely to detect SMBH activity in more massive galaxies, due to their higher luminosity and possibly higher black hole masses.
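To make the incompleteness argument concrete, the following sketch (ours; it ignores bolometric corrections and uses the standard \(L_{\rm Edd}=1.26\times 10^{38}\,(M_{\rm BH}/{\rm M}_{\odot})\) erg s\(^{-1}\)) shows how the minimum detectable Eddington-scaled luminosity shrinks with black hole mass at a fixed survey limit:

```python
# Minimum detectable Eddington ratio for a luminosity-limited survey.
L_lim = 10 ** 39.0                  # assumed completeness limit, erg/s
for log_mbh in (5.5, 6.5, 7.5):
    L_edd = 1.26e38 * 10 ** log_mbh
    print(f"log M_BH = {log_mbh}: minimum L/L_Edd ~ {L_lim / L_edd:.1e}")
# The threshold drops by 100x from log M_BH = 5.5 to 7.5, so weakly
# accreting SMBHs are detectable only in the more massive galaxies.
```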
Figure 2: Multi-wavelength images of the galaxies that host nuclear X-ray sources, \(10^{\prime\prime}\simeq 1.7\) kpc. For each galaxy, from left to right: the CTIO \(r\) band image, the _Chandra_ X-ray image in the 0.5–8 keV band, and the broad bandwidth L-band radio continuum image from MeerKAT. The X-ray image is binned into twice the original pixel size. The name of each galaxy at the upper right corner of the CTIO \(r\) image is adopted from Ferguson & Sandage (1990). The NGC name is also shown if available. The cyan circle centered on each X-ray source has a radius of the 50% PSF. All galaxies have a corresponding radio source except Antlia 105.

The stellar mass distributions of the four samples are different, as shown in the right panel of Figure 3. There are more low mass galaxies in the Antlia sample than in any other, while the AMUSE-Field sample contains more high mass galaxies. We apply the Kolmogorov-Smirnov (K-S) test to examine whether any two samples are drawn from the same distribution. The results indicate significant differences in the stellar mass distributions between each pair of samples: the p-values for Antlia-Virgo, Antlia-Field, Antlia-Fornax, Virgo-Field, Virgo-Fornax, and Field-Fornax are \(1.1\times 10^{-16}\), \(2.5\times 10^{-6}\), \(1.2\times 10^{-11}\), \(1.42\times 10^{-4}\), \(7.4\times 10^{-3}\), and \(1.7\times 10^{-3}\), respectively. We also compare these samples with the standard ETG stellar mass distribution (Moffett et al., 2016) of the Galaxy and Mass Assembly phase two (GAMA-II) survey, down to a completeness limit of \(\log(M_{\star}/\mathrm{M_{\odot}})=8\). The p-value for Fornax is 0.33, so the Fornax sample is consistent with the standard ETG distribution; the other samples deviate significantly from it.

To compare samples with different stellar mass distributions, we divide the number of galaxies by the total stellar mass of all ETGs in each sample; the total stellar masses \(M_{\mathrm{T}}\) for Antlia, Field, Virgo, and Fornax are 1.0, 5.1, 6.0, and 2.1, in units of \(10^{12}\) M\({}_{\odot}\). The intrinsic X-ray luminosities are converted to 0.5-8 keV assuming a power-law photon index \(\Gamma=1.8\), the median value of AGN spectra in the local 50 Mpc volume (She et al., 2017). We also apply the same luminosity bins to all data. The lowest luminosity bin is below the median sensitivity of most samples, so we focus on the other three bins. No Fornax ETG is detected in the two most luminous bins. No Antlia ETG AGN is detected at \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})=39.7\)-40.5, while only one AGN, with \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})=40.75\pm 0.02\), falls in the \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})=40.5\)-41.1 bin. The high luminosity bin is somewhat arbitrary, especially given the small Antlia source population. To compare the luminous end of AMUSE-Antlia with AMUSE-Virgo and AMUSE-Field, we calculated the expected AGN detections in these samples for the stellar mass of AMUSE-Antlia, taking the numbers from the XLFs of the two other samples. The AMUSE-Field XLF gives \(\sim 1.2\) and \(\sim 0.6\) AGN in the \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})=39.7\)-40.5 and 40.5-41.3 bins, while the AMUSE-Virgo XLF implies \(\sim 0.2\) AGN in the same two bins. As a result, the one real Antlia AGN detection at the luminous end lies between the \(\sim 0.4\) AGN expected from the AMUSE-Virgo XLF and the \(\sim 1.8\) AGN expected from the AMUSE-Field XLF, although the large uncertainty should be noted.
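The pairwise comparisons above use the standard two-sample K-S test; a minimal sketch with placeholder arrays (the real inputs would be each sample's \(\log M_{\star}\) values) is:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
logM_antlia = rng.normal(9.0, 0.8, 84)   # hypothetical stand-in values
logM_virgo = rng.normal(9.8, 0.7, 90)    # hypothetical stand-in values

stat, p_value = ks_2samp(logM_antlia, logM_virgo)
print(f"K-S statistic = {stat:.3f}, p = {p_value:.2e}")
# A small p-value rejects the hypothesis that the two samples share
# the same stellar mass distribution.
```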
For the less luminous bin of \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})=38.9\)-39.7, the number of Antlia AGN relative to the total stellar mass is comparable to that of AMUSE-Field and much higher than that of AMUSE-Virgo. Even though the luminous end of the AMUSE-Antlia XLF is hard to constrain due to the small number of AGN, we conclude that nuclear SMBH activity is higher in a dynamically active environment like the Antlia cluster and the Field, and lower in a more relaxed environment like Virgo and Fornax.

### Occupation Fraction

We compare the black hole activity in ETGs of the different samples through their AGN occupation fractions. The two Antlia sources hosted by LTGs - Antlia 88 and Antlia 98 - are excluded. Unlike AMUSE-Virgo, AMUSE-Field, and Fornax, in which most galaxies have \(\sim 5\) ks on-axis snapshots, galaxies in the three AMUSE-Antlia fields have been observed at a variety of detection limits. To ensure a fair comparison, we restrict our study to nuclear X-ray sources with \(\log(L_{\mathrm{X}}/\mathrm{erg~{}s^{-1}})\geq 38.9\) in the 0.5-8 keV energy band. This luminosity threshold corresponds to the least luminous nuclear X-ray source of AMUSE-Antlia, since that survey has the highest detection limit. The 0.3-10 keV nuclear source luminosities of AMUSE-Virgo and AMUSE-Field are given in Gallo et al. (2010) and Miller et al. (2012), respectively. We convert them to 0.5-8 keV by assuming a single power law with a photon index of \(\Gamma=2\), as used in those two works. Furthermore, since the galaxies in these four samples have distinct stellar mass functions, we categorize them into four equally sized stellar mass bins ranging over \(\log(M_{\star}/\mathrm{M_{\odot}})=8\)-11.6.

The occupation fraction \(f_{\mathrm{occ}}\) is defined as the number of AGN over the galaxy population. To calculate the uncertainties in the occupation fraction, we adopt the posterior probability density function (PDF) based on a Bayesian analysis (Sun et al., 2022, Equation 2),

\[P(f_{\mathrm{occ}}|N_{\mathrm{gal}},N_{\mathrm{AGN}})\propto\int P(N_{\mathrm{gal}},N_{\mathrm{AGN}}|f_{\mathrm{occ}},\lambda)P(\lambda)\,\mathrm{d}\lambda, \tag{3}\]

where \(N_{\mathrm{gal}}\) is the number of galaxies, \(N_{\mathrm{AGN}}\) is the detected number of AGN, and \(\lambda\) stands for its expectation value. On the right side of Equation 3, the joint likelihood can be written as two parts,

\[P(N_{\mathrm{gal}},N_{\mathrm{AGN}}|f_{\mathrm{occ}},\lambda)=P(N_{\mathrm{gal}})P(N_{\mathrm{AGN}}|\lambda). \tag{4}\]

The number of galaxies \(N_{\mathrm{gal}}\) is known, which gives \(P(N_{\mathrm{gal}})=1\). Now, \(\lambda=f_{\mathrm{occ}}N_{\mathrm{gal}}\) and \(P(\lambda)=\delta(\lambda-f_{\mathrm{occ}}N_{\mathrm{gal}})\), where \(\delta\) is the Dirac delta function, so the integral over \(\lambda\) can be evaluated directly. Then, Equation 3 reduces to
When restricted to the same X-ray luminosity limit of \(\log(L_{\rm X}/{\rm erg\ s^{-1}})\geqslant 38.9\), and the same stellar mass range of \(10\leqslant\log(M_{\star}/{\rm M}_{\odot})<11.6\), we find that the ETG AGN occupation fraction is \(55^{+13}_{-14}\%\) in Antlia. This fraction is \(55^{+7}_{-7}\%\) for the Field, \(18^{+6}_{-6}\%\) for Virgo, and \(17^{+11}_{-10}\%\) for Fornax. This finding indicates that the black hole activity is enhanced in Antlia, relative to Virgo or Fornax. Also, we compare their occupation fractions in four mass bins in Figure 4. The number of AGN with \(\log(L_{\rm X}/{\rm erg\ s^{-1}})\geqslant 38.9\) in 0.5-8 keV energy band and the number of galaxies in each bin are listed in Table 2. We do not include the lowest mass bin for Fornax and Virgo due to their very small numbers of galaxies. In the two massive bins, the occupation fractions of Antlia and Field are higher than Virgo and Fornax. To fully normalize the impact of different \(M_{\star}\) distributions, we use a weighted bootstrap method, which gives a posterior probability for each item when bootstrapping. We apply the same method on AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax, setting the GAMA-II survey ETG stellar mass function (Moffett et al., 2016) as the standard. The technique details are described in Appendix B. As a result, the new occupation fractions of the normalized samples are \(45\%\pm 10\%\), \(38\%\pm 6\%\), \(13\%\pm 4\%\), and \(20\%\pm 8\%\) for AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax, respectively, as shown in Figure 5. In conclusion, the AGN activity of Antlia and Field are similar, while both are higher than Virgo and Fornax. \begin{table} \begin{tabular}{c c c c c} \hline Sample & Tiny \(M_{\star}\) & Small \(M_{\star}\) & Medium \(M_{\star}\) & Large \(M_{\star}\) \\ \cline{2-5} & \(N_{\rm AGN}/N_{\rm gal}\) & \(N_{\rm AGN}/N_{\rm gal}\) & \(N_{\rm AGN}/N_{\rm gal}\) & \(N_{\rm AGN}/N_{\rm gal}\) \\ (1) & (2) & (3) & (4) & (5) \\ \hline Antlia & 0/25 (0\%) & 1/16 (6.3\%) & 3/8 (37.5\%) & 3/5 (60\%) \\ Field & 1/23 (4.3\%) & 0/17 (0\%) & 8/21 (38.1\%) & 20/30 (66.7\%) \\ Virgo & 0/5 (0\%) & 0/45 (0\%) & 4/33 (12.1\%) & 3/10 (30\%) \\ Fornax & 0/0 (0\%) & 0/8 (0\%) & 3/14 (21.4\%) & 0/5 (0\%) \\ \hline \end{tabular} Note. – (1) Sample Name. (2)–(5) Number of ETG AGN with \(\log(L_{\rm X}/{\rm erg\ s^{-1}})\geqslant 38.9\) in 0.5–8 keV energy band, number of ETG and its occupation fraction in four stellar mass bins. Tiny \(M_{\star}\), small \(M_{\star}\), medium \(M_{\star}\), and large \(M_{\star}\) correspond to the logarithmic galaxy stellar masses of 8–8.9, 8.9–9.8, 9.8–10.7, 10.7–11.6, as shown in Figure 4. \end{table} Table 2: ETG AGN and host galaxy population of different samples Figure 3: Left panel: ETG XLF for AMUSE-Antlia (black error bar), AMUSE-Field (green error bar), AMUSE-Virgo (yellow error bar), and Fornax sample (pink error bar). The leftmost luminosity bin is under the median sensitivity of AMUSE-Antlia, so we focus on the right three bins. We present the 90% upper limit as a downwards arrow for the non-detection of the third bin of Antlia XLF. The \(y-\)axis is \(\log L_{\rm X}\) number density divided by the total stellar mass of galaxies \(M_{\rm T}\) in each sample. Right panel: The stellar mass distributions of the four samples. Also, we plot the standard ETG stellar mass function (Moffett et al., 2016) with the blue long-dashed line. 
## 5 Discussion

We find that the black hole activity of AMUSE-Antlia is similar to that of AMUSE-Field and higher than that of AMUSE-Virgo and Fornax. Note that the AMUSE-Field includes systems from a variety of environments. Among the AMUSE-Field galaxies with known group membership status, 78% are group galaxies, while 22% are non-group members. However, the black hole activity of the group and non-group members is nearly identical (Miller et al., 2012, Section 5). Overall, we consider that the AMUSE-Field sample represents a non-cluster environment. In this context, an intriguing question arises: what is responsible for the enhanced black hole activity in the Antlia cluster, which exceeds that of the other two clusters and reaches the activity level of non-cluster galaxies?

We find 5 AGN (3 ETG AGN and 2 LTG AGN) in the NGC 3258 field, which exceeds the 3 AGN found in the NGC 3268 field and the 1 in the southeast field. Hess et al. (2015) note that the NGC 3258 subcluster is the younger structure in the Antlia cluster, based on the larger velocity dispersion of galaxies around NGC 3258 and its abundant H i gas content. Pedersen et al. (1997) also conclude, from a study of its X-ray halo, that the intragroup gas of NGC 3258 has low metallicity. The AGN activity is enhanced in Antlia compared to Virgo and Fornax, and within Antlia, the youngest subcluster has the highest AGN activity. A picture starts to emerge in which a dynamically young environment is responsible for triggering AGN.

Cold gas can fuel AGN accretion, so it is natural to link the cold gas content with AGN activity. Antlia is found to retain a large population of gas-rich galaxies. However, these gas-rich galaxies in Antlia are not strongly linked with AGN. Hess et al. (2015) present a 4.4 deg\({}^{2}\) H i mosaic survey with the Karoo Array Telescope (KAT-7) that fully covers the three AMUSE-Antlia fields. Four LTGs within the AMUSE-Antlia footprint (Antlia 93, Antlia 98, Antlia 120, and Antlia 212) have H i detections. Only the BCD, Antlia 98, contains a nuclear X-ray source (see Section 3.2 for details). In addition, Cairns et al. (2019) use the Atacama Pathfinder Experiment telescope (APEX) to study the CO (2-1) content, as an H\({}_{2}\) tracer, of 72 Antlia galaxies, 20 of which are in the AMUSE-Antlia fields. Four ETGs (Antlia 72, Antlia 111, Antlia 222, and Antlia 224) have CO (2-1) detections, but none of them is paired with a nuclear X-ray source. Moreover, 7 of the 9 galaxies hosting nuclear X-ray sources in AMUSE-Antlia have APEX observations, and none of them is confirmed to contain CO.

The non-detection of cold gas is not a result of depletion by black hole accretion. Taking the maximum Eddington ratio of \(10^{-4}\) suggested by Miller et al. (2012) for the low-luminosity AGN (LLAGN) in the AMUSE-Virgo and AMUSE-Field surveys (see the left panel of Figure 5 in Miller et al., 2012), the accretion rate of an AGN is expected to be \(\sim 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\).

Figure 4: The occupation fractions of AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax in four mass bins.
The data are restricted to nuclear X-ray sources with \(\log(L_{\rm X}/{\rm erg~{}s^{-1}})\geq 38.9\) in the 0.5–8 keV band and logarithmic galaxy stellar masses of 8–8.9, 8.9–9.8, 9.8–10.7, and 10.7–11.6. The occupation fractions of the Antlia and Field samples are higher than those of Virgo and Fornax.

Figure 5: The probability density functions of the weighted bootstrap replications. Note that the Field refers to the AMUSE-Field sample. The resulting occupation fractions of the AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax samples are \(45\%\pm 10\%\), \(38\%\pm 6\%\), \(13\%\pm 4\%\), and \(20\%\pm 8\%\), respectively. In conclusion, the AGN activity is similar in the AMUSE-Antlia and AMUSE-Field samples, and both are higher than in AMUSE-Virgo and Fornax.

Antlia contains H i gas and molecular gas with \(M_{\rm H\,\textsc{i}}\sim 1.5\times 10^{10}\) M\({}_{\odot}\) (Hess et al., 2015, Table 2) and \(M_{\rm mol}\sim 9\times 10^{9}\) M\({}_{\odot}\) (Cairns et al., 2019, Table 2), so the depletion timescale would be \(\sim 10^{13}\) yr, greatly exceeding the Hubble time. The lack of correlation between cold gas content and AGN activity may be due to the limited detection sensitivity. It is also possible to maintain LLAGN without cold gas. A promising mechanism is the radiatively inefficient accretion flow (RIAF), such as the advection-dominated accretion flow (ADAF) model (e.g. Yuan & Narayan, 2014; Netzer, 2013, chap. 4.4). In this model, the accretion flow can be very hot, even reaching the virial temperature. ADAFs convert gravitational energy to radiative electromagnetic energy inefficiently because much of the energy is advected into the black hole, thus producing LLAGN. In this scenario, nuclear X-ray sources are not necessarily expected to be associated with cold gas.

The enhanced AGN activity may also be related to ram pressure stripping, which can induce a loss of angular momentum of the gas, causing gas, cold and hot, to flow towards the center and trigger AGN. For example, Poggianti et al. (2017) study seven jellyfish galaxies with clear ram pressure stripping morphology and find that 6 of them host AGN. The infall of the NGC 3258 subcluster is likely to have caused ram pressure stripping. There is also clear evidence for ram pressure stripping in Fornax (e.g. Serra et al., 2023; Su et al., 2017) and Virgo (e.g. Boselli et al., 2014; Junais et al., 2022; Su et al., 2019; Forman et al., 1979). However, these two clusters may be experiencing different stages of ram pressure stripping compared to Antlia. All three clusters contain molecular gas (Virgo: e.g. Kenney & Young, 1989; Fornax: Kleiner et al., 2021), while only Antlia has abundant H i gas; both Virgo (e.g. Kenney & Young, 1989; di Serego Alighieri et al., 2007; Oosterloo et al., 2010) and Fornax (Loni et al., 2021) are H i deficient. Boselli et al. (2014) suggest that molecular gas is not stripped as efficiently as atomic gas. Therefore, Antlia is likely to be in an early stage of ram pressure stripping, in which its gaseous supply has not yet been completely removed.

## 6 Summary and Conclusions

In this work, we present a study of the nuclear X-ray sources of member galaxies in the Antlia cluster using deep _Chandra_ observations. We also include optical data from CTIO and radio observations from MeerKAT. The detection rate of nuclear X-ray sources is 7/84 (8.3%) for ETGs and 2/8 (25%) for LTGs.
All but one of the nuclear X-ray sources have radio counterparts in the broad bandwidth MeerKAT L-band observations. The sources in ETGs, which typically lack star formation, are considered to be AGN. According to the fundamental plane (Appendix A), the estimated black hole masses range from \(3\times 10^{5}\) M\({}_{\odot}\) to \(5\times 10^{7}\) M\({}_{\odot}\), with an uncertainty of 0.3-0.4 dex. We perform a stacking analysis for galaxies in which nuclear X-ray sources are not individually detected, yielding a detection of \(L_{0.5\mbox{--}8\,\mathrm{keV}}=(8.0\pm 2.6)\times 10^{37}\) erg s\({}^{-1}\) with an SNR of 3.1 (for the SPIM subset), which implies the existence of low-luminosity nuclear X-ray activity below our detection limit. For low mass galaxies with \(\log(M_{\star}/\mathrm{M_{\odot}})<8.5\), we obtain a 3\(\sigma\) upper limit on their nuclear X-ray luminosity of \(L_{0.5\mbox{--}8\,\mathrm{keV}}=3.3\times 10^{37}\) erg s\({}^{-1}\).

The Antlia cluster, as a non-cool core cluster with an ongoing merger, presents a typical dynamically young environment, in sharp contrast to the relatively relaxed, cool core clusters Virgo and Fornax. These three nearest clusters provide an ideal laboratory for studying the environmental effects on black hole activity. When restricted to the same X-ray luminosity limit of \(\log(L_{\rm X}/\mathrm{erg~s^{-1}})\geqslant 38.9\) and the same stellar mass range of \(10\leqslant\log(M_{\star}/\mathrm{M_{\odot}})<11.6\), we find an ETG AGN occupation fraction of \(55^{+13}_{-14}\%\) in Antlia. This fraction is \(55^{+7}_{-7}\%\) for the Field, \(18^{+6}_{-6}\%\) for Virgo, and \(17^{+11}_{-10}\%\) for Fornax. This finding indicates that black hole activity is enhanced in Antlia relative to Virgo or Fornax, consistent with the study of their AGN XLFs. An early stage of ram pressure stripping may be responsible for the enhanced AGN activity in a dynamically young environment such as the non-cool core cluster Antlia, particularly its young subcluster around NGC 3258. There is more cold gas in Antlia member galaxies, especially in the young NGC 3258 subcluster, than in other clusters, but we do not find a direct link between the detected cold gas and the AGN-hosting ETGs, which may be due to the limited detection sensitivity. Meanwhile, the LLAGN may be maintained by the accretion of hot gas.

The authors thank the anonymous reviewer for their helpful comments on the manuscript. Z.H. and Z.L. acknowledge the support of the National Natural Science Foundation of China (grant 12225302). Y.S. acknowledges support from Chandra X-ray Observatory grants GO1-22104X, GO2-23120X and NASA Grant 80NSSC22K0856. K.M.H. acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033, from the coordination of the participation in SKA-SPAIN, funded by the Ministry of Science and Innovation (MCIN). W.F. and C.J. acknowledge support from the Smithsonian Institution, the _Chandra_ High Resolution Camera Project through NASA contract NAS8-03060, and NASA Grants 80NSSC19K0116, GO1-22132X, and GO9-20109X. A.S. acknowledges support through a Clay Fellowship. Z.H. acknowledges Fangzheng Shi and Tao Wang for helpful discussions on mathematics and the ETG mass function. This research made use of photutils, an astropy package for detection and photometry of astronomical sources (Bradley et al., 2021).
## Appendix A The Fundamental Plane

The fundamental plane is the correlation among radio luminosity, X-ray luminosity, and black hole mass, as given in Gultekin et al. (2019, Equation 8):

\[\mu=0.55\pm 0.22+(1.09\pm 0.1)R+(-0.59^{+0.16}_{-0.15})X, \tag{A1}\]

where \(\mu=\log(M_{\rm BH}/10^{8}\ {\rm M}_{\odot})\), \(R=\log(L_{\rm R}/10^{38}\ {\rm erg\ s}^{-1})\) at 5 GHz, and \(X=\log(L_{\rm X}/10^{40}\ {\rm erg\ s}^{-1})\) in 2-10 keV. We measure the 1.28 GHz radio source flux by fitting each source image with a 2-dimensional Gaussian distribution using the Python package photutils (Bradley et al., 2021). We convert the flux to 5 GHz assuming a power-law spectrum \(S\propto\nu^{-0.7}\) (Condon et al., 2002), where \(S\) is the measured flux and \(\nu\) is the frequency. As shown in Figure 6, the 9 galaxies with nuclear X-ray sources and radio emission have black hole masses \(6\lesssim\log(M_{\rm BH}/{\rm M}_{\odot})\lesssim 8\), which is typical for AGN.

## Appendix B The Weighted Bootstrap Method

The bootstrap can be viewed as a Bayesian method that assigns a fixed posterior probability of \(1/n\) to each observed value, where \(n\) is the size of the sample (Efron, 1979; Rubin, 1981). The AGN occupation fraction is defined as the ratio of the number of AGN to the galaxy population. For the purpose of comparing the AGN occupation fractions of samples with different stellar mass distributions, we choose the bootstrap posterior probabilities so as to normalize the replicated sample to the local spheroidal stellar mass distribution (Moffett et al., 2016). We refer to this method as the weighted bootstrap method.

**1. A mathematical description.** Suppose that we have an observed sample of size \(n\); let \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{n})\), where \(x_{i}\) denotes the \(i\)th item. Consider a set of bins \(\mathbf{b}=(b_{1},b_{2},\cdots,b_{m})\) of size \(m\), where \(\mathbf{b}\) is chosen to ensure that each bin corresponds to at least one \(x_{i}\) and each \(x_{i}\) falls in some bin \(b_{j}\), say \(b_{j}-\frac{1}{2}\Delta b\leqslant x_{i}<b_{j}+\frac{1}{2}\Delta b\), where \(\Delta b\) is the bin width. A statistic \(\hat{f}\) estimates a parameter \(f\) based on a distribution \(\Phi(x)\). To normalize \(\mathbf{x}\) to \(\Phi(x)\), so that the statistic \(\hat{f}\) can be applied to \(\mathbf{x}\), a posterior probability \(P_{i}\) for each \(x_{i}\) is given as

\[\left\{\begin{array}{l}P_{i}\propto\Phi(b_{j})/N_{j},\\ \sum_{i=1}^{n}P_{i}=1,\end{array}\right. \tag{B2}\]

where \(N_{j}\) is the number of \(\mathbf{x}\) elements that fall in \(b_{j}\). Thus, a posterior probability set \(\mathbf{P}=(P_{1},P_{2},\cdots,P_{n})\) is constructed, corresponding to \(\mathbf{x}\). A weighted bootstrap replication generates a random sample of size \(n\) from \(\mathbf{x}\) with the weights in \(\mathbf{P}\). Applying \(\hat{f}\) to one replicated sample gives one estimate of the parameter \(f\). After many replications, the distribution of the replicated items approaches \(\Phi(x)\). Finally, the result of \(\hat{f}\) on \(\mathbf{x}\) is calculated from all the bootstrap estimates of \(f\).

**2. Realization.** We aim to compare the AGN occupation fractions of four samples: AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax. We implement this weighted bootstrap method to correct for their different stellar mass distributions (see the right panel of Figure 3), which can strongly bias the occupation fraction.
Before the statistical procedure, we first restrict the stellar mass to the range of \(9\leqslant\log\left(M_{\star}/\mathrm{M}_{\odot}\right)<11.7\). Also, we keep AGN with 0.5-8 keV X-ray luminosity \(\log(L_{\mathrm{X}}/\mathrm{erg\ s^{-1}})\geqslant 38.9\), which is the median detection limit of AMUSE-Antlia, the highest among the four samples. With these constraints, the occupation fractions of AMUSE-Antlia, AMUSE-Field, AMUSE-Virgo, and Fornax are \(26^{+8}_{-8}\%\), \(42^{+6}_{-6}\%\), \(12^{+3}_{-4}\%\), and \(11^{+5}_{-6}\%\), respectively.

The Schechter function (Schechter, 1976) describes the probability density for each galaxy in mass space,

\[\Phi(\log M)\,\mathrm{d}\log M=\ln(10)\,\phi^{\star}\,10^{(\alpha+1)\log(M/M^{\star})}\exp\big(-10^{\log(M/M^{\star})}\big)\,\mathrm{d}\log M, \tag{B3}\]

where \(\phi^{\star}\) is the normalization constant, \(M^{\star}\) is the characteristic mass of the 'knee' in the mass function, and \(\alpha\) is the slope at the low mass end. We adopt the parameters \(\log(M^{\star}/\mathrm{M}_{\odot})=10.74\pm 0.026\), \(\alpha=-0.525\pm 0.029\) and \(\phi^{\star}=3670\pm 200\) dex\({}^{-1}\) Mpc\({}^{-3}\) of the local spheroidal stellar mass distribution, according to the Galaxy and Mass Assembly survey phase two (GAMA-II) (Moffett et al., 2016).

We then calculate the occupation fraction of AMUSE-Antlia by normalizing its stellar mass distribution to the Schechter function with the weighted bootstrap method. For convenience, \(\mathbf{x}\) stands for the observed AMUSE-Antlia sample with \(n\) galaxies, and \(x_{i}\) indicates the stellar mass of the \(i\)th member galaxy. There are 12 stellar mass bins in \(\mathbf{b}\); the first 10 bins have widths of 0.2 dex, while the last two have widths of 0.3 dex. We generate weighted bootstrap replications with posterior probabilities \(P_{i}\) according to Equation B2, as plotted in Figure 7. After \(100,000\) replications, the mean stellar mass distribution fits \(\Phi(x)\) well. The statistic \(\hat{f}\) calculating the occupation fraction is applied to each replication. Finally, the weighted bootstrapped distribution of \(\hat{f}\) on \(\mathbf{x}\) is shown in Figure 5. The same process is also applied to AMUSE-Field, AMUSE-Virgo, and Fornax. As a result, the occupation fractions of the normalized samples are \(45\%\pm 10\%\), \(38\%\pm 6\%\), \(13\%\pm 4\%\), and \(20\%\pm 8\%\) for Antlia, Field, Virgo, and Fornax, respectively. Here, the bootstrapped occupation fraction of AMUSE-Antlia is much higher than the original value. This is because 5 of the 7 AMUSE-Antlia AGN are hosted by ETGs with stellar masses around 10.7 dex, which is the 'knee', i.e., the peak of the Schechter function; these five galaxies carry higher posterior probabilities and drive the substantial increase in the outcome. Similarly, Lee et al. (2019) normalize the stellar mass distributions of the AMUSE-Virgo and AMUSE-Field samples to Fornax, and find that Virgo and Fornax have similar AGN activity, both lower than AMUSE-Field, consistent with our findings. In conclusion, the AGN activity of Antlia and AMUSE-Field are quite similar, both of which are much higher than the AGN activity in Virgo and Fornax.
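A compact implementation of this procedure might look as follows. This is a sketch under the stated binning assumptions: the function names are ours, galaxies are assumed to fall within the bin range, and \(\phi^{\star}\) cancels in the weight normalization.

```python
import numpy as np

def schechter_weights(log_mass, bin_edges, log_mstar=10.74, alpha=-0.525):
    """Posterior probabilities P_i ~ Phi(b_j)/N_j of Equation (B2), with
    Phi the Schechter function of Equation (B3) at the bin centers."""
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    x = 10.0 ** (centers - log_mstar)
    phi = x ** (alpha + 1) * np.exp(-x)        # up to constant factors
    j = np.digitize(log_mass, bin_edges) - 1   # bin index of each galaxy
    n_j = np.bincount(j, minlength=len(centers))
    return phi[j] / n_j[j]

def weighted_bootstrap_focc(is_agn, weights, n_rep=100_000, seed=1):
    """Resample galaxies with probabilities P_i and return the AGN
    fraction of each replication."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, float)
    p /= p.sum()                               # enforce sum(P_i) = 1
    n = len(p)
    idx = rng.choice(n, size=(n_rep, n), replace=True, p=p)
    return np.asarray(is_agn, float)[idx].mean(axis=1)
```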
Figure 6: The fundamental plane correlation of AGN mass, 5 GHz radio luminosity, and 2–10 keV hard X-ray luminosity. The 9 Antlia galaxies hosting nuclear X-ray sources are shown as black dots with \(1\sigma\) errors. Antlia 105 does not have a significant radio source, so here we use 3 times the root-mean-square background level as the radio term to calculate the black hole mass and present it with an open circle. The yellow dotted, green dashed, and purple dash-dotted lines indicate black hole masses \(\log(M_{\mathrm{BH}}/\mathrm{M}_{\odot})\) from 6 to 8. If the galaxies do host AGN, the black hole masses fall in the reasonable range \(6\lesssim\log(M_{\mathrm{BH}}/\mathrm{M}_{\odot})\lesssim 8\).
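For reference, the Appendix A mass estimate can be scripted as below; the function names are ours, and the example inputs are the Antlia 98 values quoted in Section 3.2.

```python
import numpy as np

def log_mbh_fundamental_plane(nuLnu_5GHz, L_x_2_10):
    """log10(M_BH/Msun) from Equation (A1): mu = 0.55 + 1.09 R - 0.59 X,
    with mu = log(M_BH/1e8 Msun), R = log(L_R/1e38), X = log(L_X/1e40)."""
    R = np.log10(nuLnu_5GHz / 1e38)
    X = np.log10(L_x_2_10 / 1e40)
    return 0.55 + 1.09 * R - 0.59 * X + 8.0

def nuLnu_5GHz_from_1p28GHz(nuLnu_128, alpha=0.7):
    """Scale nu*L_nu from 1.28 to 5 GHz for S ~ nu^-alpha
    (so nu*L_nu ~ nu^(1-alpha))."""
    return nuLnu_128 * (5.0 / 1.28) ** (1.0 - alpha)

# e.g. Antlia 98, with the Section 3.2 luminosities:
print(log_mbh_fundamental_plane(6.1e34, 7.7e38))  # ~5.7, a few 1e5 Msun
```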
2301.11567
Threshold dynamics of a nonlocal dispersal SIS epidemic model with free boundaries
To study the influence of the moving front of the infected interval and the spatial movement of individuals on the spreading or vanishing of infectious disease, we consider a nonlocal SIS (susceptible-infected-susceptible) reaction-diffusion model with media coverage, hospital bed numbers and free boundaries. The principal eigenvalue of the integral operator is defined, and the impacts of the diffusion rate of infected individuals and interval length on the principal eigenvalue are analyzed. Furthermore, sufficient conditions for spreading and vanishing of the disease are derived. Our results show that large media coverage and hospital bed numbers are beneficial to the prevention and control of disease. The difference between the model with nonlocal diffusion and that with local diffusion is also discussed and nonlocal diffusion leads to more possibilities.
Yachun Tong, Inkyung Ahn, Zhigui Lin
2023-01-27T07:20:02Z
http://arxiv.org/abs/2301.11567v2
# Threshold dynamics of a nonlocal dispersal SIS epidemic model with free boundaries†

Footnote †: The first author is supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX21-3188), the second author is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2022R1F1A1063068), and the third author is supported by the National Natural Science Foundation of China (Grant No. 12271470).

Yachun Tong\({}^{a}\), Inkyung Ahn\({}^{b}\) and Zhigui Lin\({}^{a}\)

\({}^{a}\) School of Mathematical Science, Yangzhou University, Yangzhou 225002, China

\({}^{b}\) Department of Mathematics, Korea University, Sejong 339-700, Republic of Korea

Corresponding author. Email: [email protected] (Z. Lin).

**Abstract.** To study the influence of the moving front of the infected interval and the spatial movement of individuals on the spreading or vanishing of an infectious disease, we consider a nonlocal SIS (susceptible-infected-susceptible) reaction-diffusion model with media coverage, hospital bed numbers and free boundaries. The principal eigenvalue of the integral operator is defined, and the impacts of the diffusion rate of infected individuals and the interval length on the principal eigenvalue are analyzed. Furthermore, sufficient conditions for spreading and vanishing of the disease are derived. Our results show that large media coverage and hospital bed numbers are beneficial to the prevention and control of the disease. The difference between the model with nonlocal diffusion and that with local diffusion is also discussed; nonlocal diffusion leads to more possibilities.

_MSC:_ 35K57, 92D30; secondary: 35R35.

_Keywords:_ SIS model; Free boundary; Nonlocal diffusion; Spreading and vanishing

## 1 Introduction

With the emergence and outbreak of COVID-19 [3, 30] in recent years, infectious disease models have become one of the most popular research topics. To study the spread and dynamics of COVID-19, most scholars use SIR (susceptible-infected-recovered) [3, 28], SEIR (susceptible-exposed-infected-recovered) [25, 30] and SEAIR (susceptible-exposed-asymptomatic-infectious-removed) [2, 46] models to describe its transmission. Meanwhile, the classical SIS model has received great attention in mathematical epidemiology. Considering the impact of the spatial heterogeneity of the environment and the movement of individuals on infectious diseases, Allen et al. [1] proposed and discussed the SIS reaction-diffusion system

\[\left\{\begin{array}{ll}S_{t}-d_{S}\Delta S=-\frac{\beta(x)SI}{S+I}+\gamma(x)I,&t>0,\ x\in\Omega,\\ I_{t}-d_{I}\Delta I=\frac{\beta(x)SI}{S+I}-\gamma(x)I,&t>0,\ x\in\Omega,\\ \frac{\partial S}{\partial\eta}=\frac{\partial I}{\partial\eta}=0,&t>0,\ x\in\partial\Omega.\end{array}\right. \tag{1.1}\]

Here, \(\Omega\subset\mathbb{R}^{n}\) (\(n\geq 1\)) is a bounded domain; \(S(t,x)\) and \(I(t,x)\) denote the densities of susceptible and infected individuals at location \(x\) and time \(t\), respectively; \(d_{S}\) and \(d_{I}\) are positive constants that account for the diffusion rates of susceptible and infected individuals, respectively; and the positive bounded Hölder continuous functions \(\beta(x)\) and \(\gamma(x)\) can be interpreted as the rates of disease transmission and recovery at \(x\in\Omega\), respectively.
The authors in [1] mainly discussed the existence, uniqueness and stability of the DFE (disease-free equilibrium) and EE (endemic equilibrium) and used the basic reproduction number \(\mathcal{R}_{0}\) to characterize the risk of the region. Afterwards, Peng and Liu [34] confirmed the conjecture proposed by Allen et al. in [1] that the unique EE is globally asymptotically stable in some special cases. Further results on the effect of individual movement (large or small) on the persistence and disappearance of the disease were obtained in [33]. For more results on the SIS reaction-diffusion model, one can see [24, 35, 42] and the references therein.

The above articles study SIS models on a fixed domain. In real life, the movement of species leads to changes in biological habitats, and in mathematics, a free boundary can be used to describe this phenomenon, such as the healing of wounds [10] and the expansion of new or invasive species [6, 14, 27, 38]. Free boundary problems can also be used to describe the transmission of diseases, such as the SIRS model [7] and the SIS model [22, 47]; see also the references therein. To explore the moving front of the infected individuals, Wang and Guo [40] introduced a free boundary and studied the dynamics of the following SIS reaction-diffusion model:

\[\left\{\begin{array}{ll}S_{t}-d\Delta S=\sigma-\mu S-\beta(x)SI+\gamma(x)I,&t>0,\,x\in\mathbb{R},\\ I_{t}-d\Delta I=\beta(x)SI-\mu I-\gamma(x)I,&t>0,\,x\in(g(t),h(t)),\\ I(t,x)=0,&t>0,\,x\in\mathbb{R}\backslash(g(t),h(t)),\\ g^{\prime}(t)=-kI_{x}(t,g(t)),\,\,g(0)=-h_{0},&t\geq 0,\\ h^{\prime}(t)=-kI_{x}(t,h(t)),\,\,h(0)=h_{0},&t\geq 0,\\ S(0,x)=S_{0}(x),\,\,I(0,x)=I_{0}(x),&x\in\mathbb{R}.\end{array}\right. \tag{1.2}\]

The basic reproduction number was given, and a spreading-vanishing dichotomy was established. Some conditions for disease spreading or vanishing were presented by investigating the effects of the diffusion rate (\(d\)), the initial value (\(I_{0}\)) and the expanding capability (\(k\)) on the asymptotic behavior of the infected individuals.

It is widely known that random dispersal, or local diffusion, describes the local behavior of the movements of organisms between adjacent spatial locations [26]. Briefly, the classical Laplace diffusion operator describes movement of the infectious agent and infected population that occurs only between adjacent spatial positions [43]. However, Murray [32] noted that a local or short-range diffusive flux proportional to the gradient is not suitable to characterize some biological phenomena. In the real world, the movements and interactions of some organisms occur at nonadjacent spatial positions, and such dispersal is called nonlocal diffusion [13]. Nonlocal diffusion can occur naturally through dispersal and migration, or be facilitated by human activities. It can positively impact a population's genetic diversity and long-term viability, but it can also introduce diseases or invasive species into new territories. Local diffusion usually involves individuals migrating short distances within a defined area, such as a habitat patch or a specific population, driven by various factors including random movement, resource competition, and responses to environmental conditions. Nonlocal diffusion, on the other hand, refers to the movement of individuals between different regions or sub-populations.
This migration typically involves individuals moving long distances or crossing geographic barriers such as rivers or mountain ranges. Recently, nonlocal diffusion equations have attracted extensive attention and have been used to characterize long-range dispersal in population ecology [6, 26]. In addition, infectious disease models with nonlocal diffusion have been investigated extensively, such as the West Nile virus model [15], the SIS epidemic model [17, 44], and the SIR reaction-diffusion model [18, 45]; for other epidemic models with nonlocal diffusion, see [9, 39, 41] and the references therein.

Many other factors affect the spread of infectious diseases, such as the contact transmission rate and the recovery rate. Educating the public about a disease through mass media (television, radio, newspapers, billboards, the internet, magazines, etc.) is one of the important precautions. Media coverage can indirectly reduce the contact rate between susceptible and infected individuals, thus reducing the contact transmission rate of the disease [36]. In general, the main factor impacting the recovery rate is the availability of health care (such as the numbers of physicians, nurses, hospital beds and isolation places). In fact, health and medical institutions use the hospital bed-population ratio (HBPR; the number of hospital beds per 10000 people) as a measure of the resources available to the public [31]. Taking into account nonlocal diffusion, media coverage and hospital bed numbers, we consider the following nonlocal dispersal SIS epidemic model with free boundaries:

\[\left\{\begin{array}{ll}S_{t}=d\mathcal{L}_{1}[S]+\sigma-\mu_{1}S-\beta(m(x),I,x)SI+\gamma(b(x),I,x)I,&t>0,\,x\in\mathbb{R},\\ I_{t}=d\mathcal{L}_{2}[I;g,h]-\mu_{2}I+\beta(m(x),I,x)SI-\gamma(b(x),I,x)I,&t>0,\,x\in(g(t),h(t)),\\ I(t,x)=0,&t\geq 0,\,\,x\in\mathbb{R}\backslash(g(t),h(t)),\\ h^{\prime}(t)=k\int_{g(t)}^{h(t)}\int_{h(t)}^{+\infty}J(x-y)I(t,x)dydx,&t>0,\\ g^{\prime}(t)=-k\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)I(t,x)dydx,&t>0,\\ S(0,x)=S_{0}(x),\,\,g(0)=-h_{0},\,\,h(0)=h_{0},&x\in\mathbb{R},\\ I(0,x)=I_{0}(x),&x\in(-h_{0},h_{0}),\end{array}\right. \tag{1.3}\]

where

\[\mathcal{L}_{1}[S]=\int_{\mathbb{R}}J(x-y)S(t,y)dy-S(t,x),\]

\[\mathcal{L}_{2}[I;g,h]=\int_{g(t)}^{h(t)}J(x-y)I(t,y)dy-I(t,x),\]

and \(d\), \(S(t,x)\) and \(I(t,x)\) have the same epidemiological interpretations as in (1.1). The constants \(\sigma\), \(\mu_{1}\) and \(\mu_{2}\) are positive: \(\sigma\) accounts for the environmental carrying capability, \(\mu_{1}\) is the natural mortality rate of the susceptible individuals, and \(\mu_{2}\) denotes the sum of the natural mortality and disease-caused death rates of the infected individuals. The functions \(\beta(m(x),I,x)\), \(\gamma(b(x),I,x)\), \(m(x)\) and \(b(x)\) are nonnegative, where \(m(x)\) represents the media coverage and \(b(x)\) stands for the number of hospital beds. In this paper, we assume that (1) the contact transmission rate \(\beta(m(x),I,x)\) is Lipschitz continuous, monotonically decreasing in \(m(x)\) and increasing in \(I\); (2) the recovery rate \(\gamma(b(x),I,x)\) is Lipschitz continuous, increasing in \(b(x)\) and monotonically decreasing in \(I\); and (3) \(\beta_{I}(m(x),I,x)\) and \(\gamma_{I}(b(x),I,x)\) are continuous and bounded for \(m(x)\in[0,\infty)\), \(I\in[0,\infty)\) and \(x\in(-\infty,\infty)\).
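Although the analysis in this paper is purely theoretical, the nonlocal terms in (1.3) are straightforward to evaluate numerically. The sketch below (ours, with a Gaussian kernel that satisfies the hypotheses (\(\mathbf{J}\)) stated later) discretizes \(\mathcal{L}_{2}[I;g,h]\) and the boundary speed \(h^{\prime}(t)\) by simple quadrature:

```python
import numpy as np

def gaussian_kernel(z, s=1.0):
    """A kernel satisfying (J): symmetric, J(0) > 0, integral one."""
    return np.exp(-z**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

def nonlocal_L2(I, x, J=gaussian_kernel):
    """(L2 I)(x_i) ~ sum_j J(x_i - x_j) I(x_j) dx - I(x_i) on a uniform
    grid x covering the infected interval (g(t), h(t))."""
    dx = x[1] - x[0]
    K = J(x[:, None] - x[None, :])
    return K @ I * dx - I

def boundary_speed_h(I, x, k=1.0, J=gaussian_kernel, tail=10.0):
    """h'(t) = k * int_g^h int_h^inf J(x-y) I(t,x) dy dx, with the inner
    integral truncated a few kernel widths beyond h."""
    dx = x[1] - x[0]
    y = np.arange(x[-1], x[-1] + tail, dx)             # truncated (h, +inf)
    inner = J(x[:, None] - y[None, :]).sum(axis=1) * dx
    return k * np.sum(inner * I) * dx
```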
For instance, Cui and Zhu [12] used the function \(\beta(I)=\beta e^{mI}\) to model the impact of media coverage on the transmission rate, and Shan and Zhu [37] used the function \(\gamma(b,I,x)=\gamma_{0}+(\gamma_{1}-\gamma_{0})\frac{b}{b+I}\) to describe the impact of hospital resources. Recalling that \(S(t,x)\) denotes the density at point \(x\) and time \(t\), and that the kernel \(J(x-y)\) is regarded as the probability of jumping from place \(y\) to place \(x\), the integral \(\int_{\mathbb{R}}J(x-y)S(t,y)dy\) accounts for the rate at which individuals arrive at point \(x\) from all other places, while \(-S(t,x)\) is the rate at which individuals leave point \(x\) for other places. In addition, the infected individuals stay in the infected interval \((g(t),h(t))\). We further suppose that the initial function \(S_{0}(x)\) satisfies

\[S_{0}(x)\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\ \ \text{and}\ \ S_{0}(x)>0\ \ \text{in}\ \mathbb{R}, \tag{1.4}\]

and \(I_{0}(x)\) satisfies

\[I_{0}(x)\in C([-h_{0},h_{0}]),\ I_{0}(\pm h_{0})=0,\ I_{0}(x)>0\ \ \text{in}\ (-h_{0},h_{0}). \tag{1.5}\]

For system (1.3), we assume that the kernel function \(J:\mathbb{R}\to\mathbb{R}\) is continuous and nonnegative and has the properties

\[(\mathbf{J}):J\in C(\mathbb{R})\cap L^{\infty}(\mathbb{R})\ \text{is symmetric},\ J(0)>0,\ \int_{\mathbb{R}}J(x)dx=1.\]

The free boundary conditions \(h^{\prime}(t)=k\int_{g(t)}^{h(t)}\int_{h(t)}^{+\infty}J(x-y)I(t,x)dydx\) and \(g^{\prime}(t)=-k\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)I(t,x)dydx\) in (1.3) imply that the expanding rate of the interval \((g(t),h(t))\) is determined by the infected individuals and is proportional to the outward flux of infected individuals across the boundaries of \((g(t),h(t))\) [5].

It is worth mentioning that there are both links and differences between local and nonlocal diffusion. Local diffusion, expressed by the Laplace operator \(\Delta u\) (in \(\mathbb{R}^{n}\), \(n\geq 2\)) or \(u_{xx}\) (in one-dimensional space), describes the influence between adjacent positions, while nonlocal diffusion, expressed by the integral operator \(\int_{\mathbb{R}}J(x-y)u(t,y)dy-u(t,x)\), describes long-distance dispersal. Nevertheless, the Laplace operator can be regarded as a local approximation of a nonlocal diffusion operator. In fact, when \(J(\cdot)\) is symmetric and has compact support, such as \(J(x)=(1/\epsilon)K(x/\epsilon)\) with \(0<\epsilon\ll 1\), where \(K(x)\) is a general mollification function with support \([-1,1]\), nonlocal operators can be reduced to local operators by using the Taylor formula [29].
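To make this reduction explicit, substitute \(y=x-\epsilon z\) and Taylor-expand a smooth \(u\); the following standard computation is recorded here for clarity and is not quoted from [29]:

\[d\Big(\int_{\mathbb{R}}J(x-y)u(t,y)dy-u(t,x)\Big)=d\int_{\mathbb{R}}K(z)\big[u(t,x-\epsilon z)-u(t,x)\big]dz\]

\[=d\int_{\mathbb{R}}K(z)\Big[-\epsilon z\,u_{x}(t,x)+\frac{\epsilon^{2}z^{2}}{2}u_{xx}(t,x)+O(\epsilon^{3})\Big]dz=\frac{d\epsilon^{2}}{2}\Big(\int_{\mathbb{R}}z^{2}K(z)dz\Big)u_{xx}(t,x)+O(\epsilon^{4}),\]

where the odd-order terms vanish by the symmetry of \(K\). Hence, for small \(\epsilon\), the nonlocal term behaves like the local diffusion \(Du_{xx}\) with \(D=\frac{d\epsilon^{2}}{2}\int_{\mathbb{R}}z^{2}K(z)dz\).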
This article is organized as follows: the existence and uniqueness of the global solution are given in Section 2; Section 3 is devoted to defining and studying the properties of the principal eigenvalue; Section 4 gives some sufficient conditions for the disease to spread or vanish; finally, a brief discussion is presented in Section 5.

## 2 Global existence and uniqueness

In this section, we assume that \(h_{0}>0\) and that \(S_{0}(x)\) and \(I_{0}(x)\) satisfy (1.4) and (1.5). For any given \(T>0\), we first introduce the following notation:

\[\mathbb{H}_{T}:=\{h\in C([0,T]):h(0)=h_{0},\ \inf_{0\leq t_{1}<t_{2}\leq T}\frac{h(t_{2})-h(t_{1})}{t_{2}-t_{1}}>0\},\]

\[\mathbb{G}_{T}:=\{g\in C([0,T]):-g\in\mathbb{H}_{T}\},\]

\[D_{T}^{g,h}:=\{(t,x)\in\mathbb{R}^{2}:0<t\leq T,\ g(t)<x<h(t)\},\]

\[D_{T}^{h_{0}}:=\{(t,x)\in\mathbb{R}^{2}:0<t\leq T,\ -h_{0}<x<h_{0}\},\]

\[D_{T}^{\infty}:=\{(t,x)\in\mathbb{R}^{2}:0<t\leq T,\ x\in\mathbb{R}\},\]

\[X_{T}^{S_{0}}:=\{\phi(t,x)\in C(D_{T}^{\infty})\cap L^{\infty}(D_{T}^{\infty}):\phi(0,x)=S_{0}(x)\ \text{in}\ \mathbb{R},\ \phi(t,x)\geq 0\ \text{in}\ D_{T}^{\infty}\},\]

\[X_{T}^{I_{0}}:=\{\psi(t,x)\in C(D_{T}^{\infty}):\psi(0,x)=I_{0}(x)\ \text{in}\ [-h_{0},h_{0}],\ \psi(t,x)\geq 0\ \text{in}\ D_{T}^{g,h},\ \psi(t,x)=0\ \text{for}\ t\in(0,T),\ x\in\mathbb{R}\backslash(g(t),h(t))\}.\]

To prove the existence and uniqueness of the global solution of problem (1.3), we first give the following result for problem (1.3) without the free boundary.

**Lemma 2.1**: _For any given \(T>0\) and \((g,h)\in\mathbb{G}_{T}\times\mathbb{H}_{T}\), the problem_

\[\left\{\begin{array}{ll}S_{t}=d\mathcal{L}_{1}[S]+\sigma-\mu_{1}S-\beta(m(x),I,x)SI+\gamma(b(x),I,x)I,&0<t\leq T,\ x\in\mathbb{R},\\ I_{t}=d\mathcal{L}_{2}[I;g,h]-\mu_{2}I+\beta(m(x),I,x)SI-\gamma(b(x),I,x)I,&0<t\leq T,\ x\in(g(t),h(t)),\\ I(t,x)=0,&0\leq t\leq T,\ x\in\mathbb{R}\backslash(g(t),h(t)),\\ S(0,x)=S_{0}(x),&x\in\mathbb{R},\\ I(0,x)=I_{0}(x),&x\in(-h_{0},h_{0})\end{array}\right. \tag{2.1}\]

_admits a unique solution \((S_{g,h},I_{g,h})\in C(\overline{D}_{T}^{\infty})\times C(\overline{D}_{T}^{g,h})\). Moreover,_

\[0<S_{g,h}(t,x)\leq A\qquad\text{for any}\ (t,x)\in D_{T}^{\infty}, \tag{2.2}\]

\[0<I_{g,h}(t,x)\leq A\qquad\text{for any}\ (t,x)\in D_{T}^{g,h}, \tag{2.3}\]

_where \(A=\max\{\frac{\sigma}{\mu_{1}},\|S_{0}\|_{\infty}+\|I_{0}\|_{\infty}\}\)._

**Proof.** The main idea of the proof comes from [45]. We divide it into three steps.

**Step 1.** The parameterized ODE problem. For any given \(x\in\mathbb{R}\) and \(s\in(0,T]\), denote

\[t_{x}=\left\{\begin{array}{ll}t_{x}^{g},&x\in(g(s),-h_{0})\ \text{and}\ x=g(t_{x}^{g}),\\ 0,&x\in[-h_{0},h_{0}],\\ t_{x}^{h},&x\in(h_{0},h(s))\ \text{and}\ x=h(t_{x}^{h}),\\ s,&x\in\mathbb{R}\backslash(g(s),h(s)).\end{array}\right.\]

Clearly, \(t_{x}>0\) for \(x\in\mathbb{R}\backslash[-h_{0},h_{0}]\) and \(t_{x}<s\) for \(x\in(g(s),h(s))\). For any given \((\phi,\psi)\in X_{s}^{S_{0}}\times X_{s}^{I_{0}}\), define

\[A_{1}=\max\{A,\ \frac{\sigma+\|\psi\|_{\infty}\sup\gamma}{\mu_{1}},\ \|\phi\|_{\infty}\},\ \ A_{2}=\max\{A,\ \frac{(d+A_{1}\sup\beta)\|\psi\|_{\infty}}{d+\mu_{2}}\}.\]

We discuss the following two cases.

Case 1: \(x\in\mathbb{R}\backslash[-h_{0},h_{0}]\), \(t\in[0,t_{x}]\). Clearly, \(I(t,x)=0\) for \((t,x)\in[0,t_{x}]\times\mathbb{R}\backslash[-h_{0},h_{0}]\). Consider the ODE problem

\[\left\{\begin{array}{ll}S_{t}=d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dS+\sigma-\mu_{1}S,&0<t\leq t_{x},\\ S(0,x)=S_{0}(x),&x\in\mathbb{R}\backslash[-h_{0},h_{0}].\end{array}\right. \tag{2.4}\]

For any \(S_{1}\), \(S_{2}\in[0,A_{1}]\),

\[|d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dS_{1}+\sigma-\mu_{1}S_{1}-d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy+dS_{2}-\sigma+\mu_{1}S_{2}|=(d+\mu_{1})|S_{1}-S_{2}|.\]

Therefore, \(F:=d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dS+\sigma-\mu_{1}S\) is Lipschitz continuous in \(S\) for \(S\in[0,A_{1}]\).
By the fundamental theory of ODEs, problem (2.4) has a unique solution \(S_{\phi}(t,x)\) defined in \(t\in[0,\widehat{t}_{x})\), and \(S_{\phi}(t,x)\) is continuous in both \(t\) and \(x\). To see that \(t\to S(\cdot,x)\) can be uniquely extended to \([0,t_{x}]\), we need to prove that if \(S_{\phi}(t,x)\) is uniquely defined for \(t\in[0,\widehat{t}_{x}]\) with \(\widehat{t}_{x}\in(0,t_{x}]\), then \[0\leq S_{\phi}(t,x)\leq A_{1},\;\;\mbox{for}\;\;t\in[0,\widehat{t}_{x}]\;\mbox{ and}\;x\in\mathbb{R}\backslash[-h_{0},h_{0}].\] Obviously, \[d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dA_{1}+\sigma-\mu_{1}A_{1}\] \[\leq d\|\phi\|_{\infty}-dA_{1}+\sigma-\mu_{1}A_{1}\] \[\leq 0,\] and \(\|S_{0}\|_{\infty}\leq A_{1}\). Thanks to the direct comparison argument, one can derive \(S_{\phi}(t,x)\leq A_{1}\) for \(t\in[0,\widehat{t}_{x}]\) and \(x\in\mathbb{R}\backslash[-h_{0},h_{0}]\). We use similar method to prove that \(S_{\phi}(t,x)\geq 0\) for \(t\in[0,\widehat{t}_{x}]\), \(x\in\mathbb{R}\backslash[-h_{0},h_{0}]\). Case 2: \(x\in(g(s),h(s))\), \(t\in[t_{x},s]\). Define \[\widehat{S}_{\phi}(x)=\left\{\begin{array}{ll}S_{0}(x),&x\in[-h_{0},h_{0}] \\ S_{\phi}(t_{x},x),&x\notin[-h_{0},h_{0}]\end{array}\right.\;\;\mbox{and}\;\;\; \widehat{I}(x)=\left\{\begin{array}{ll}I_{0}(x),&x\in[-h_{0},h_{0}]\\ 0,&x\notin[-h_{0},h_{0}].\end{array}\right.\] Consider the ODE problem \[\left\{\begin{array}{ll}S_{t}=F_{1}(t,x,S,I),&t_{x}<t\leq s,\\ I_{t}=F_{2}(t,x,S,I),&t_{x}<t\leq s,\\ S(t_{x},x)=\widehat{S}_{\phi}(x),\;I(t_{x},x)=\widehat{I}(x),&x\in(g(s),h(s)) \end{array}\right. \tag{2.5}\] with \[F_{1}=d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dS+\sigma-\mu_{1}S+\gamma(b,I,x) \psi-\beta(m,I,x)SI,\] \[F_{2}=d\int_{g(t)}^{h(t)}J(x-y)\psi(t,y)dy-dI-\mu_{2}I-\gamma(b,I,x)I+\beta(m,I,x)S\psi.\] For any \((S_{i},I_{i})\in[0,A_{1}]\times[0,A_{2}](i=1,\,2)\), obviously, \(F_{i}(t,x,S,I)\) is Lipschitz continuous in \((S,I)\) for \((S_{i},I_{i})\in[0,A_{1}]\times[0,A_{2}]\) by the continuity and monotonicity of \(\beta(m(x),I,x)\) and \(\gamma(b(x),I,x)\), and it is uniformly continuous for \(x\in(g(s),h(s))\) and \(t\in[t_{x},s]\). In addition, \(F_{i}(t,x,S,I)\) is continuous in all its variables in this range. Problem (2.5) has a unique solution \((S_{\phi,\psi}(t,x)\), \(I_{\phi,\psi}(t,x))\) for \(t\in[t_{x},s_{x})\), and \((S_{\phi,\psi}(t,x)\), \(I_{\phi,\psi}(t,x))\) is continuous in both \(t\) and \(x\) by the fundamental theorem of ODEs. To show that \((S_{\phi,\psi}(t,x)\), \(I_{\phi,\psi}(t,x))\) can be uniquely extended to \([t_{x},s]\), it suffices to prove that if \((S_{\phi,\psi}(t,x)\), \(I_{\phi,\psi}(t,x))\) is uniquely defined for \(t\in[t_{x},\widehat{t}]\) with \(\widehat{t}\in(t_{x},s]\), then \[0\leq S_{\phi,\psi}(t,x)\leq A_{1},\;0\leq I_{\phi,\psi}(t,x)\leq A_{2}\;\; \mbox{for}\;\;t\in[t_{x},\widehat{t}]. 
\tag{2.6}\] In fact, it is easy to see that \[F_{1}(t,x,A_{1},A_{2})\] \[= d\int_{\mathbb{R}}J(x-y)\phi(t,y)dy-dA_{1}+\sigma-\mu_{1}A_{1}+ \gamma(b,A_{2},x)\psi-\beta(m,A_{2},x)A_{1}A_{2}\] \[\leq d\|\phi\|_{\infty}-dA_{1}+\sigma-\mu_{1}A_{1}+\gamma(b,A_{2},x )\|\psi\|-\beta(m,A_{2},x)A_{1}A_{2}\] \[< d\|\phi\|_{\infty}-dA_{1}+\sigma-\mu_{1}A_{1}+\|\psi\|_{\infty} \sup\gamma\] \[\leq 0\] \[F_{2}(t,x,A_{1},A_{2})\] \[= d\int_{g(t)}^{h(t)}J(x-y)\psi(t,y)dy-dA_{2}-\mu_{2}A_{2}-\gamma(b,A_ {2},x)A_{2}+\beta(m,A_{2},x)A_{1}\psi\] \[\leq d\int_{g(t)}^{h(t)}J(x-y)\psi(t,y)dy-dA_{2}-\mu_{2}A_{2}+A_{1}\| \psi\|_{\infty}\sup\beta\] \[\leq 0.\] Since \(A_{1}\geq\|S_{0}\|_{\infty}\), \(A_{2}\geq\|I_{0}\|_{\infty}\), we have \(S_{\phi,\psi}(t,x)\leq A_{1}\) and \(I_{\phi,\psi}(t,x)\leq A_{2}\) in \(t\in[t_{x},\widehat{t}]\) by the comparison argument. The left part of (2.6) can be obtained similarly by using \(F_{i}(t,x,0,0)\geq 0\) (\(i=1,\,2\)). **Step 2.** A fixed point theorem. For any \(s\in(0,T)\), we note \[X_{s}^{S_{0}}:=\{\phi|_{\overline{D}_{s}^{\infty}}:\phi\in X_{T}^{S_{0}}\},\, \,\,X_{s}^{I_{0}}:=\{\psi|_{\overline{D}_{s}^{s,h}}:\psi\in X_{T}^{I_{0}}\}.\] Denote \[(\widehat{S}(t,x),\widehat{I}(t,x))=\left\{\begin{array}{ll}(S_{\phi(t,x)}, 0),&x\in\mathbb{R}\backslash[-h_{0},h_{0}],\,\,t=[0,t_{x}],\\ (S_{\phi,\psi}(t,x),I_{\phi,\psi}(t,x)),&x\in(g(s),h(s)),\,\,t\in[t_{x},s], \end{array}\right.\] where \(S_{\phi}(t,x)\), \(S_{\phi,\psi}(t,x)\) and \(I_{\phi,\psi}(t,x))\) are given in Step 1. By Step 1, for any \((\phi,\,\psi)\), we have a unique solution \((\widehat{S},\widehat{I})\) for \(t\in[0,s]\). It is easy to check that \(\widehat{S}(t,x)\) is continuous in \(\overline{D}_{s}^{\infty}\), and \(\widehat{I}(t,x)\) is continuous in \(\overline{D}_{s}^{s,h}\) due to the continuous dependence of the ODE solution on the parameters. Therefore, \((\widehat{S},\widehat{I})\in X_{s}^{S_{0}}\times X_{s}^{I_{0}}\). Note that \(X_{s}^{S_{0}}\) and \(X_{s}^{I_{0}}\) are complete metric spaces, respectively, with the norms \[d_{1}(\phi_{1},\phi_{2})=\|\phi_{1}-\phi_{2}\|_{C(\overline{D}_{s}^{\infty})},\,\,\,d_{2}(\psi_{1},\psi_{2})=\|\psi_{1}-\psi_{2}\|_{C(\overline{D}_{s}^{s,h })}.\] Hence, we find a mapping \(\Gamma:X_{s}^{S_{0}}\times X_{s}^{I_{0}}\to X_{s}^{S_{0}}\times X_{s}^{I_{0}}\) by \(\Gamma(\phi,\psi)=(\widehat{S},\widehat{I})\). Setting \[M_{1}=\max\{A,\,4\|S_{0}\|_{\infty},\,\frac{4\sigma}{\mu_{1}},\,\frac{4(\sigma +M_{2})}{\mu_{1}+d}\},\,M_{2}=\max\{A,\,2\|I_{0}\|_{\infty}\}.\] Define \[X_{s}^{M_{1}}=\{\phi|\,\phi\in X_{s}^{S_{0}},\,\,\|\phi\|_{C(\overline{D}_{s}^ {\infty})}\leq M_{1}\},\] \[X_{s}^{M_{2}}=\{\psi|\,\psi\in X_{s}^{I_{0}},\,\,\|\psi\|_{C(\overline{D}_{s}^{ s,h})}\leq M_{2}\}.\] Using the same arguments as Lemma 2.1 in [45], we can deduce that \(\Gamma\) is a contraction map and has a unique fixed point \((S^{*},\,I^{*})\in X_{s}^{M_{1}}\times X_{s}^{M_{2}}\) for any \(s\in(0,\widehat{s}]\) by the contraction mapping theorem, where \(\widetilde{s}\) relies on \(d\), \(M_{1}\), \(\beta\), \(\gamma\) and \(M_{2}\). To prove that \((S^{*},\,I^{*})\) is the unique solution (2.1) for \(t\in[0,s]\) with \(s\in(0,\widehat{s}]\), it suffices to discuss that any nonnegative solution \((S,\,I)\) of (2.1) for \(t\in[0,s]\) belongs to \(X_{s}^{M_{1}}\times X_{s}^{M_{2}}\). 
We claim that \[S+I\leq A\,\,\,\,\mbox{for}\,\,\,t\in[0,s]\,\,\,\mbox{and}\,\,\,x\in\mathbb{R}, \tag{2.7}\] which implies that \[0\leq S(t,x)\leq A,\,\,\,\,(t,x)\in[0,s]\times\mathbb{R},\] \[0\leq I(t,x)\leq A,\,\,\,\,(t,x)\in[0,s]\times[g(t),h(t)].\] Consequently, we obtain that for any \(s\in(0,\widehat{s}]\), (2.1) admits a unique solution for \(t\in[0,s]\). To complete the proof, it only needs to prove that claim (2.7) is true. Let \(N=S+I\), then for \(t\in[0,s]\) and \(x\in(g(t),h(t))\), \[\begin{array}{rcl}N_{t}&\leq&d\int_{\mathbb{R}}J(x-y)N(t,y)dy-dN(t,x)-d(\int_ {-\infty}^{g(t)}+\int_{h(t)}^{\infty})J(x-y)I(t,y)dy+\sigma-\mu_{1}N\\ &\leq&d\int_{\mathbb{R}}J(x-y)N(t,y)dy-dN(t,x)-\mu_{1}N+\sigma.\end{array}\] Since \(\|S_{0}\|_{\infty}+\|I_{0}\|_{\infty}\leq A\), we have \(N(t,x)\leq A\) for \(t\in[0,s]\) and \(x\in(g(t),h(t))\) by using the comparison principle. While \(t\in[0,s]\) and \(x\in\mathbb{R}\backslash(g(t),h(t))\), then \(I(t,x)=0\), which implies \[N_{t}=d\int_{\mathbb{R}}J(x-y)N(t,y)dy-dN(t,x)+\sigma-\mu_{1}N.\] It is clear that \(N\leq A\) for \(t\in[0,s]\) and \(x\in\mathbb{R}\backslash(g(t),h(t))\). Next, we prove that (2.7) holds. We argue by contradiction and suppose that \(\max\limits_{(t,x)\in[0,\,s]\times\mathbb{R}}N(t,x)>A\), there exists a point \((t_{0},x_{0})\in[0,s]\times\mathbb{R}\) such that \(\max N=N(t_{0},x_{0})>A\). According to the above analysis, we can obtain that \(x_{0}=g(t_{0})\) or \(x_{0}=h(t_{0})\). Without loss of generality, we assume that \(x_{0}=g(t_{0})\). Since \(I(t_{0},g(t_{0}))=0\), \(S(t_{0},x_{0})\) satisfies \[S_{t}(t_{0},x_{0})=d\int_{\mathbb{R}}J(x_{0}-y)S(t_{0},y)dy-dS(t_{0},x_{0})+ \sigma-\mu_{1}S(t_{0},x_{0}).\] Obviously, \(S_{t}(t_{0},x_{0})\geq 0\) and \(S(t_{0},x_{0})\leq A\), which contradicts the assumption: \(\max\limits_{(t,x)\in[0,\,s]\times\mathbb{R}}N(t,x)>A\). **Step 3.** Extension of the solution. We now prove that the unique solution of (2.1) for \(0<t\leq s\) can be extended to \(0<t\leq T\). In Step 2, \(\widehat{s}\) depends only on \(d\), \(\gamma\), \(\beta\), and \(A\). With the help of the iterative method, we obtain that problem (2.1) has a unique solution for \(t\in[0,T]\). We omit it here; see Step 3 of the proof in Lemma 2.1 in [45] for more details. \(\Box\) **Theorem 2.2**: _Assume that \((\mathbf{J})\) holds. For any \(S_{0}\) satisfying (1.4) and \(I_{0}\) satisfying (1.5), problem (1.3) admits a unique positive solution \((S(t,x),\,I(t,x);\,g(t),\,h(t))\) defined for all \(t>0\)._ **Proof.** We will prove this result by using Lemma 2.1 and the fixed point theorem. For any given \(T>0\) and \((g^{*},h^{*})\in\mathbb{H}_{T}\times\mathbb{G}_{T}\), we know that (2.1) with \((g,h)=(g^{*},h^{*})\) has a unique solution \((S^{*},I^{*})\). Define \[\left\{\begin{array}{l}\widetilde{g}=-h_{0}-k\int_{0}^{t}\int_{g^{*}( \tau)}^{h^{*}(\tau)}\int_{-\infty}^{g^{*}(\tau)}J(x-y)I^{*}(\tau,x)dydxd\tau, \\ \widetilde{h}=h_{0}+k\int_{0}^{t}\int_{g^{*}(\tau)}^{h^{*}(\tau)}\int_{h^{*}( \tau)}^{+\infty}J(x-y)I^{*}(\tau,x)dydxd\tau.\end{array}\right. 
\tag{2.8}\] In view of \((\mathbf{J})\) and \(J(0)>0\), there exist constants \(\epsilon_{0}\in(0,h_{0}/4)\) and \(\delta_{0}>0\) such that \[J(x)\geq\delta_{0}\;\;\mbox{if}\;\;|x|\leq\epsilon_{0}.\] By virtue of the above inequality and proof of Theorem 2.1 in [5], there exists \[T_{0}=T_{0}(k,A,h_{0},\epsilon_{0},I_{0},J)>0,\] such that, for any \(T\in(0,T_{0}]\), \[\sup\limits_{0\leq t_{1}<t_{2}\leq T}\frac{\widetilde{g}(t_{2})-\widetilde{g}( t_{1})}{t_{2}-t_{1}}\leq-k\eta_{1},\;\inf\limits_{0\leq t_{1}<t_{2}\leq T} \frac{\widetilde{h}(t_{2})-\widetilde{h}(t_{1})}{t_{2}-t_{1}}\geq k\eta_{2},\] \[\widetilde{h}(t)-\widetilde{g}(t)\leq 2h_{0}+\epsilon_{0}/4\,\,\,\text{for}\,\,t \in[0,T],\] where \[\eta_{1}=\frac{1}{4}\epsilon_{0}\delta_{0}e^{-(d+\mu_{2}+\sup\gamma)T_{0}}\int_{ -h_{0}}^{-h_{0}+\frac{\epsilon_{0}}{4}}I_{0}(x)dx,\,\,\,\eta_{2}=\frac{1}{4} \epsilon_{0}\delta_{0}e^{-(d+\mu_{2}+\sup\gamma)T_{0}}\int_{h_{0}-\frac{ \epsilon_{0}}{4}}^{h_{0}}I_{0}(x)dx.\] Let \[\Sigma_{T}:=\{(g,h)\in\mathbb{H}_{T}^{h_{0}}\times\mathbb{G}_{T}^{h_{0}}:\sup _{0\leq t_{1}<t_{2}\leq T}\frac{g(t_{2})-g(t_{1})}{t_{2}-t_{1}}\leq-k\eta_{1},\] \[\inf_{0\leq t_{1}<t_{2}\leq T}\frac{h(t_{2})-h(t_{1})}{t_{2}-t_{1}}\geq k\eta _{2},\,h(t)-g(t)\leq 2h_{0}+\frac{\epsilon_{0}}{4}\,\,\,\text{for}\,\,\,t\in[0,T]\},\] and define the mapping \(\mathcal{F}(g^{*},h^{*})=(\widetilde{g},\widetilde{h})\). Clearly, the above analysis implies that \[\mathcal{F}(\Sigma_{T})\subset\Sigma_{T}\,\,\text{for}\,\,\text{T}\in(0, \text{T}_{0}].\] In the following, similar to the proof of Theorem 1.1 in [45], we first prove that \(\mathcal{F}\) is a contraction mapping, and then \(\mathcal{F}\) admits a unique fixed point in \(\Sigma_{T}\) by the contraction mapping theorem. Next, we can derive that \((g,h)\in\Sigma_{T}\) holds for any solution \((S,I;g,h)\) of (1.3) for \(t\in[0,T]\), that is, \((S,I;g,h)\) is a unique solution of (1.3) for \(t\in[0,T]\). Finally, we can show that the solution \((S,I;g,h)\) of (1.3) is uniquely extended to \(t\in(0,+\infty)\). Here, we omit the proof here; see Step 3 in the proof of Theorem 1.1 in [45] for more details. ## 3 The eigenvalue problem For any \(-\infty<L_{1}<L_{2}<+\infty\) and \(d>0\), denote \[\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}[\phi](x)=d\int_{L_{1}}^{L_{2}}J(x-y) \phi(y)dy-d\phi(x).\] Now, we introduce some results on the principal eigenvalue of the linear operator \(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x):C([L_{1},L_{2}])\mapsto C([L_{1},L _{2}])\) defined by \[(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))[\phi](x)=d\int_{L_{1}}^{L_{2}}J (x-y)\phi(y)dy-d\phi(x)+a(x)\phi(x),\] where \(a(x)=\frac{\sigma\beta(m(x),0,x)}{\mu_{1}}-\mu_{2}-\gamma(b(x),0,x)\in C([L_{ 1},L_{2}])\) and \(J\) satisfies \((\mathbf{J})\). Furthermore, we assume \((\mathbf{H}):a(x)\) is Lipschitz continuous and achieves its maximum in \([L_{1},L_{2}]\) at some point \(x_{0}\in(L_{1},L_{2})\). Define the generalized principal eigenvalue as \[\begin{array}{l}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\\ :=\inf\{\lambda\in\mathbb{R}\,|\,\exists\phi\in C([L_{1},L_{2}]),\,\phi>0\,\, \,\text{s. t.}\,\,(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))[\phi]\leq \lambda\phi\,\,\text{in}\,\,(L_{1},L_{2})\}.\end{array} \tag{3.1}\] Furthermore, we call it a principal eigenvalue if \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is an eigenvalue of the operator \(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x)\) with a continuous and positive eigenfunction. 
Recalling that \(a(x)\) is Lipschitz continuous and achieves a global maximum in \((L_{1},L_{2})\), \(a(x)\) automatically satisfies the condition \(\frac{1}{(\sup_{x\in(L_{1},\,L_{2})}a(x))-a(x)}\not\in L^{1}\). Therefore, it follows from Theorem 1.1 or Theorem 1.2 in [11] that the generalized principal eigenvalue \(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is a principal eigenvalue. In this section, we are interested in the properties of the generalized principal eigenvalue \(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) with media coverage \(m(x)\) and hospital bed number \(b(x)\) and asymptotic behavior of the principal eigenvalue in large and small interval lengths \((L_{1},L_{2})\) or diffusion rate \(d\). Before stating our primary result, we recall a useful proposition from [11]. **Proposition 3.1**: _([11]) The following assertions hold:_ \((i)\) _Assume \((L_{1},L_{2})\subset(L_{3},L_{4})\). Then,_ \[\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\lambda_{p}({\cal L}_{ \{(L_{3},\,L_{4}),\,d\}}+a(x)).\] \((ii)\) _Fix \(L_{1},L_{2}\) and suppose that \(a_{1}(x)\leq a_{2}(x)\). Then,_ \[\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a_{1}(x))\leq\lambda_{p}({\cal L }_{\{(L_{1},\,L_{2}),\,d\}}+a_{2}(x)).\] _Moreover, if \(a_{1}(x)+\delta<a_{2}(x)\) for some \(\delta>0\), then_ \[\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a_{1}(x))<\lambda_{p}({\cal L}_ {\{(L_{1},\,L_{2}),\,d\}}+a_{2}(x)).\] \((iii)\)_\(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is Lipschitz continuous in \(a(x)\). More precisely,_ \[|\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a_{1}(x))-\lambda_{p}({\cal L} _{\{(L_{1},\,L_{2}),\,d\}}+a_{2}(x))|\leq\|a_{1}(x)-a_{2}(x)\|_{\infty}.\] Let us now analyze the impact of media coverage \(m(x)\) and hospital bed number \(b(x)\) on the generalized principal eigenvalue. Obviously, the following result holds by Proposition 3.1\((ii)\). **Theorem 3.2**: _Suppose that \(({\bf J})\) and \(({\bf H})\) hold. Then,_ \((i)\)_\(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is strictly monotone decreasing in \(m(x)\)._ \((ii)\)_\(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is strictly monotone decreasing in \(b(x)\)._ From now on, we discuss the effect of interval length on the principal eigenvalue \(\lambda_{p}({\cal L}_{\{(L_{1},L_{2}),\,d\}}+a(x))\). **Theorem 3.3**: _Suppose that \(({\bf J})\) and \(({\bf H})\) hold; then, the following three conclusions hold:_ \((i)\)_\(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is continuous for \(L_{1},L_{2}\in(-\infty,+\infty)\)._ \((ii)\)_\(\lim_{L_{1},\,L_{2}\to 0}\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))=a(0)-d\)._ \((iii)\)_\(\lim_{-L_{1},L_{2}\to+\infty}\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))= \sup_{x\in{\mathbb{R}}}a(x)\)._ **Proof.** The proof of \((i)\) is similar to Proposition 3.4 in [5], and we omit it here. 
\((ii)\) Due to the continuity of \(a(x)\), for any given \(\epsilon>0\), there exists \(h>0\) small enough such that \[|a(x)-a(0)|<\epsilon,\ x\in[-h,h].\] Since \(\lambda_{p}({\cal L}_{\{(-h,\,h),\,d\}}+a(x))\) is a principal eigenvalue, there exists a positive function \(\phi(x)\in C([-h,h])\) such that \[d\int_{-h}^{h}J(x-y)\phi(y)dy-d\phi(x)+a(x)\phi(x)=\lambda_{p}({\cal L}_{\{(- h,\,h),\,d\}}+a(x))\phi(x),\ x\in[-h,h],\] which gives by integrating, \[|\lambda_{p}(\mathcal{L}_{\{(-h,\,h),\,d\}}+a(x))-a(0)+d|\] \[= |\tfrac{d\int_{-h}^{h}\int_{-h}^{h}J(x-y)\phi(y)\phi(x)dydx}{\int_{ \cdoth}^{h}\phi^{2}(x)dx}+\tfrac{\int_{-h}^{h}a(x)\phi^{2}(x)dx}{\int_{\cdoth}^ {h}\phi^{2}(x)dx}-a(0)|\] \[= |\tfrac{d\int_{-h}^{h}\int_{h}^{h}J(x-y)\phi(y)\phi(x)dydx}{\int_{ \cdoth}^{h}\phi^{2}(x)dx}+\tfrac{\int_{-h}^{h}(a(x)-a(0))\phi^{2}(x)dx}{\int_{ \cdoth}^{h}\phi^{2}(x)dx}|\] \[\leq \tfrac{d\|\mathcal{J}\|_{\infty}(\int_{-h}^{h}\phi(x)dx)^{2}}{ \int_{-h}^{h}\phi^{2}(x)dx}+|\tfrac{\int_{-h}^{h}(a(x)-a(0))\phi^{2}(x)dx}{ \int_{-h}^{h}\phi^{2}(x)dx}|\] \[\leq 2d\|J\|_{\infty}h+\epsilon\to\epsilon\,\,\,\text{as}\,\,\,h\to 0^{+}.\] From the arbitrariness of \(\epsilon\), we have \[|\lambda_{p}(\mathcal{L}_{\{(-h,\,h),\,d\}}+a(x))-a(0)+d|\to 0\,\,\,\text{as}\, \,\,h\to 0^{+},\] which together with the continuity of \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) about \(L_{1}\), \(L_{2}\) give \[\lim_{L_{1},\,L_{2}\to 0}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x)) =a(0)-d.\] \((iii)\) According to the monotonicity of \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) with respect to interval \((L_{1},L_{2})\) and function \(a(x)\) yields \[\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\lambda_{p}( \mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+\sup_{x\in\mathbb{R}}a(x))\leq\lambda_ {p}(\mathcal{L}_{\{(-\infty,\,+\infty),\,d\}}+\sup_{x\in\mathbb{R}}a(x)).\] Consider the following eigenvalue problem \[d\int_{-\infty}^{\infty}J(x-y)\phi(y)dy-d\phi(x)+\phi(x)\sup_{x\in\mathbb{R}}a (x)=\lambda_{p}(\mathcal{L}_{\{(-\infty,\,+\infty),\,d\}}+\sup_{x\in\mathbb{ R}}a(x))\phi,\,\,x\in\mathbb{R}. \tag{3.2}\] It is easily seen that \(\lambda_{p}(\mathcal{L}_{\{(-\infty,\,+\infty),\,d\}}+\sup_{x\in\mathbb{R}}a(x ))=\sup_{x\in\mathbb{R}}a(x)\). So the principal eigenvalue \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\sup_{x\in\mathbb{ R}}a(x)\). Therefore, \[\limsup_{-L_{1},\,L_{2}\to+\infty}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\sup_{x\in\mathbb{R}}a(x).\] To prove \((iii)\), it suffices to prove that \(\liminf_{-L_{1},\,L_{2}\to+\infty}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d \}}+a(x))\geq\sup_{x\in\mathbb{R}}a(x)\) holds. In fact, by the continuity of \(a(x)\) and the definition of sup, for given \(\epsilon>0\), there exists some \(x_{0}\in\mathbb{R}\) such that \[\sup_{x\in\mathbb{R}}a(x)-\epsilon\leq a(x_{0}).\] By \((\mathbf{J})\), for given \(\epsilon>0\), there exist \(L_{1}<x_{0}-1\) and \(L_{2}>x_{0}+1\) such that \[\int_{L_{1}}^{L_{2}}J(z)dz>1-\epsilon.\] Now take \[\delta_{n}(x-x_{0})=\left\{\begin{array}{ll}k_{1}e^{-1/(1-n^{2}(x-x_{0})^{2} )}>0,&x\in(x_{0}-1/n,x_{0}+1/n),\\ =0,&x\notin(x_{0}-1/n,x_{0}+1/n),\end{array}\right.\] where \(k_{1}\) is positive and satisfies \(\int_{\mathbb{R}}\delta_{n}(x-x_{0})dx=1\). It is easy to check that the sequence \(\{\delta_{n}(x-x_{0})\}\) weakly converges to some \(\delta(x-x_{0})\) in \(L^{1}((L_{1},L_{2}))\). 
By the definition of \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\), one can easily obtain \[\begin{array}{l}d\int_{L_{1}}^{L_{2}}J(x-y)\delta_{n}(y-x_{0})dy-d\delta_{n} (x-x_{0})+a(x)\delta_{n}(x-x_{0})\\ \leq\ \lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\delta_{n}(x-x_{0} ),\ x\in(L_{1},L_{2}).\end{array} \tag{3.3}\] Integrating the equation of (3.3) over \((L_{1},L_{2})\) yields \[\begin{array}{lcl}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))& \geq&\frac{d\,L_{1}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\delta_{n}(y-x_{0})dydx-d \int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx+\int_{L_{1}}^{L_{2}}a(x)\delta_{n}(x -x_{0})dx}{\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx}\\ \geq&\frac{d\,L_{1}^{L_{2}}\int_{x_{0}-1}^{x_{0}+1}J(x-y)\delta_{n}(y-x_{0})dydx -d\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx+\int_{L_{1}}^{L_{2}}a(x)\delta_{n} (x-x_{0})dx}{\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx}\\ \geq&\frac{d\,f_{x_{0}-1}^{x_{0}+1}\,\delta_{n}(y-x_{0})[\int_{L_{1}-x_{0}+1}^{ L_{2}-x_{0}+1}J(x)dz]yd-d\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx+\int_{L_{1}}^{L_{ 2}}a(x)\delta_{n}(x-x_{0})dx}{\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx}\\ \geq&\frac{(d(1-\epsilon)-d)\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0})dx+\int_{L_ {1}}^{L_{2}}a(x)\delta_{n}(x-x_{0})dx}{\int_{L_{1}}^{L_{2}}\delta_{n}(x-x_{0} )dx}\\ =&-d\epsilon+\frac{\int_{L_{1}}^{L_{2}}a(x)\delta_{n}(x-x_{0})dx}{\int_{L_{1}}^ {L_{2}}\delta_{n}(x-x_{0})dx},\end{array}\] where we have used that \(\int_{x_{0}-1}^{x_{0}+1}\delta_{n}(x-x_{0})dx=\int_{L_{1}}^{L_{2}}\delta_{n}(x -x_{0})dx\) for sufficiently large \(n\). Therefore, by taking \(n\rightarrow+\infty\), \[\begin{array}{lcl}\liminf_{-L_{1},\,L_{2}\rightarrow+\infty}\lambda_{p}( \mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))&\geq&-d\epsilon+\int_{-\infty}^{+ \infty}a(x)\delta(x-x_{0})dx\\ &=&-d\epsilon+a(x_{0})\\ &\geq&-d\epsilon+\sup_{x\in\mathbb{R}}a(x)-\epsilon.\end{array}\] It follows from the arbitrarily of \(\epsilon\) that \[\liminf_{-L_{1},L_{2}\rightarrow+\infty}\lambda_{p}(\mathcal{L}_{\{(L_{1},\, L_{2}),\,d\}}+a(x))\geq\sup_{x\in\mathbb{R}}a(x).\] \(\Box\) In the following, we discuss the monotonicity of the principal eigenvalue with respect to \(d\) and its limiting behaviors as \(d\to 0\) or \(d\rightarrow+\infty\). **Theorem 3.4**: _Suppose that \((\mathbf{J})\) and \((\mathbf{H})\) hold. 
Then, the following statements hold:_ \((i)\)_\(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) is a strictly monotone decreasing function of \(d\)._ \((ii)\)_\(\lim_{d\to 0}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))=\max_{x\in[L_{1},\,L_{ 2}]}a(x)\)._ \((iii)\) _If \(\int_{-\infty}^{L_{1}-L_{2}}J(z)dz>0\)(symmetrically, \(\int_{L_{2}-L_{1}}^{+\infty}J(z)dz>0\)) holds, then \(\lim_{d\rightarrow+\infty}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a( x))=-\infty\)._ **Proof.**\((i)\) Assume that \(\lambda_{p}(d_{1}):=\lambda_{p}(\mathcal{L}_{\{(L_{1},L_{2}),\,d\}}+a(x))\) is the principal eigenvalue and \(\phi(x)\) is the corresponding positive eigenfunction with \(\|\phi\|_{L^{2}}=1\), we have \[\lambda_{p}(d_{1})\phi(x)=d_{1}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)dy-d_{1}\phi(x )+a(x)\phi(x),\ \ x\in(L_{1},L_{2}).\] Suppose that \(d<d_{1}\), then \[\lambda_{p}(d_{1}) = d_{1}\int_{L_{1}}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)\phi(x)dydx -d_{1}+\int_{L_{1}}^{L_{2}}a(x)\phi^{2}(x)dx\] \[= d\int_{L_{1}}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)\phi(x)dydx -d_{1}+\int_{L_{1}}^{L_{2}}a(x)\phi^{2}(x)dx\] \[+(d_{1}-d)\int_{L_{1}}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)\phi (x)dydx\] \[< d\int_{L_{1}}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)\phi(x)dydx -d_{1}+\int_{L_{1}}^{L_{2}}a(x)\phi^{2}(x)dx+d_{1}-d\] \[= d\int_{L_{1}}^{L_{2}}\int_{L_{1}}^{L_{2}}J(x-y)\phi(y)\phi(x)dydx -d+\int_{L_{1}}^{L_{2}}a(x)\phi^{2}(x)dx\] \[\leq \lambda_{p}(d).\] Therefore \(\lambda_{p}(d_{1})<\lambda_{p}(d)\). \((ii)\) The idea of this proof is from Theorem 2.8 in [44]. For the eigenvalue problem \[d\int_{L_{1}}^{L_{2}}J(x-y)\varphi(y)dy-d\varphi(x)+(\max_{x\in[L_{1},\,L_{2}] }a(x))\,\varphi(x)=\lambda_{p}^{*}\varphi(x),\ x\in(L_{1},L_{2}). \tag{3.4}\] It follows from [19] that \(\lambda_{p}^{*}\leq\max_{x\in[L_{1},\,L_{2}]}a(x)\). Therefore, we have \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\lambda_{p}^{*} \leq\max_{x\in[L_{1},\,L_{2}]}a(x)\) by \((ii)\) of Proposition 3.1. Next, we prove that \(\liminf_{d\to 0}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\geq \max_{x\in[L_{1},\,L_{2}]}a(x)\). Assume for the contrary that \(\liminf_{d\to 0}\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq \max_{x\in[L_{1},\,L_{2}]}a(x)-\epsilon\) for some \(\epsilon>0\). By the definition of \(\liminf\), there exists some \(\widehat{d}>0\) such that if \(d\leq\widehat{d}\), then \[\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\max_{x\in[L_{1},\,L _{2}]}a(x)-\frac{\epsilon}{2}.\] On the other hand, by the continuity of \(a(x)\), there exist \(x_{0}\in(L_{1},L_{2})\) and \(r>0\) such that \[\max_{x\in[L_{1},\,L_{2}]}a(x)\leq a(x)+\frac{\epsilon}{4},\ x\in U_{r}(x_{0}) \subset(L_{1},L_{2}).\] Therefore, \[\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq a(x)-\frac{ \epsilon}{4}\] for \(0<d<\widehat{d}\) and \(x\in U_{r}(x_{0})\). 
Let \((\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x)),\psi(x))\) be the eigenpair of the following eigenvalue problem: \[d\int_{L_{1}}^{L_{2}}J(x-y)\psi(y)dy-d\psi(x)+a(x)\psi(x)=\lambda_{p}( \mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\psi(x),\ x\in(L_{1},L_{2}).\] Then, \[\int_{L_{1}}^{L_{2}}J(x-y)\psi(y)dy-\psi(x)=\frac{\lambda_{p}(\mathcal{L}_{\{( L_{1},\,L_{2}),\,d\}}+a(x))-a(x)}{d}\psi(x)\leq-\frac{\epsilon}{4d}\psi(x) \ \,\mbox{in}\,\ U_{r}(x_{0}).\] Let \(\widetilde{\lambda}\) be the principal eigenvalue of the linear problem \[\left\{\begin{array}{ll}\int_{\mathbb{R}}J(x-y)u(y)dy-u(x)=\lambda u(x)& \mbox{in}\,\ U_{r}(x_{0}),\\ u(x)=0&\mbox{in}\,\ \mathbb{R}\backslash U_{r}(x_{0}).\end{array}\right. \tag{3.5}\] It is well known that \(-1<\widetilde{\lambda}<0\) by Theorem 2.1 in [20]. Let \(\Psi(x)\) be the eigenfunction corresponding to \(\widetilde{\lambda}\) and \(\|\Psi(x)\|_{L^{\infty}}=1\). Take \[\overline{\psi}(x)=\frac{\psi(x)}{\inf_{U_{r}(x_{0})}\psi(x)},\,\,\,\underline {\psi}(x)=\Psi(x).\] We consider the following problem \[\left\{\begin{array}{ll}\int_{\mathbb{R}}J(x-y)u(y)dy-u(x)=-\frac{\epsilon} {4d}u(x)&\mbox{ in }\,\,U_{r}(x_{0}),\\ u(x)=0&\mbox{ in }\,\,\mathbb{R}\backslash U_{r}(x_{0}).\end{array}\right. \tag{3.6}\] Direct calculation yields \[\begin{array}{ll}&\int_{U_{r}(x_{0})}J(x-y)\overline{\psi}(y)dy-\overline{ \psi}(x)+\frac{\epsilon}{4d}\overline{\psi}(x)\\ =&\frac{1}{\inf\psi}[\int_{U_{r}(x_{0})}J(x-y)\psi(y)dy-\psi(x)]+\frac{ \epsilon}{4d}\overline{\psi}(x)\\ \leq&\frac{1}{\inf\psi}(-\frac{\epsilon}{4d}\psi(x)+\frac{\epsilon}{4d}\psi(x) )\\ =&0,\end{array}\] and \[\begin{array}{ll}&\int_{U_{r}(x_{0})}J(x-y)\underline{\psi}(y)dy-\underline {\psi}(x)+\frac{\epsilon}{4d}\underline{\psi}(x)\\ =&\int_{U_{r}(x_{0})}J(x-y)\Psi(x)dy-\Psi(x)]+\frac{\epsilon}{4d}\Psi(x)\\ =&\widetilde{\lambda}\Psi(x)+\frac{\epsilon}{4d}\Psi(x)\\ \geq&0\end{array}\] provided \(d<\min\{\widehat{d},-\frac{\epsilon}{4\widetilde{\lambda}}\}\). Hence, by the super-sub solution method in [21], one can yield (3.6) has a positive solution between \(\overline{\psi}(x)\) and \(\underline{\psi}\), which implies that \(\widetilde{\lambda}=-\frac{\epsilon}{4d}\). This contradicts to the independence of \(\widetilde{\lambda}\) from \(d\). Therefore, \(\underset{d\to 0}{\lim}\,\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))= \underset{x\in[L_{1},\,L_{2}]}{\max}\,a(x)\). \((iii)\) We claim that if \(\int_{-\infty}^{L_{1}-L_{2}}J(z)dz>0\) (or \(\int_{L_{2}-L_{1}}^{+\infty}J(z)dz>0\)), then \(\underset{d\to+\infty}{\lim}\,\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d \}}+a(x)):=\lambda_{\infty}=-\infty\). We argue by contradiction and suppose that \(\lambda_{\infty}>-\infty\). Now let \(\varphi(x)=C_{1}\) (positive constant), without loss of generality, taking \(C_{1}=1\), then for \(x\in(L_{1},L_{2})\), \[\begin{array}{ll}&d\int_{L_{1}}^{L_{2}}J(x-y)\varphi(y)dy-d\varphi(x)+a(x) \varphi(x)\\ <&d\int_{L_{1}}^{L_{2}}J(x-y)dy-d+\underset{x\in[L_{1},L_{2}]}{\max}\,a(x)\\ =&-d\int_{\mathbb{R}\backslash[L_{1},L_{2}]}J(x-y)dy+\underset{x\in[L_{1},L_{ 2}]}{\max}\,a(x)\\ =&-d(\int_{-\infty}^{x-L_{2}}+\int_{x-L_{1}}^{+\infty})J(z)dz+\underset{x\in[ L_{1},L_{2}]}{\max}\,a(x)\\ \leq&-d(\int_{-\infty}^{L_{1}-L_{2}}+\int_{L_{2}-L_{1}}^{+\infty})J(z)dz+ \underset{x\in[L_{1},L_{2}]}{\max}\,a(x)\end{array}\] as \(\int_{\mathbb{R}}J(x)dx=1\). 
Owing to the assumption that \(\int_{-\infty}^{L_{1}-L_{2}}J(z)dz>0\) (or \(\int_{L_{2}-L_{1}}^{+\infty}J(z)dz>0\)), there exists a \(d\) adequately large such that \[d\int_{L_{1}}^{L_{2}}J(x-y)\varphi(y)dy-d\varphi(x)+a(x)\varphi(x)\leq( \lambda_{\infty}-1)\varphi(x).\] Thus, by definition of \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\), we have \[\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\leq\lambda_{\infty}-1,\] further, \[\lim_{d\to+\infty}\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))=\lambda_{ \infty}\leq\lambda_{\infty}-1.\] We obtain the desired contradiction. Hence, \(\lim_{d\to+\infty}\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))=-\infty\). \(\Box\) **Remark 3.5**: _Compared with [44], where the nonlocal operator is \(d\int_{\Omega}J(x-y)(\psi(y)-\psi(x))dy\), we consider the nonlocal operator \(d\int_{\Omega}J(x-y)\varphi(y)dy-d\varphi(x)\) and prove that when \(d\to+\infty\), the limit of the principal eigenvalue is \(-\infty\) for some cases, which is different from the result in [44]; its limit is the average of \(a(x)\) over \(\Omega\)._ ## 4 Spreading-vanishing Since \(h(t)\) and \(-g(t)\) are monotonically increasing with \(t>0\), there exist \(h_{\infty}\) and \(g_{\infty}\) such that \(\lim_{t\to+\infty}g(t)=g_{\infty}\in[-\infty,-h_{0})\) and \(\lim_{t\to+\infty}h(t)=h_{\infty}\in(h_{0},+\infty]\). Here we define that **vanishing** occurs if \(h_{\infty}-g_{\infty}<+\infty\) and \(\lim_{t\to+\infty}\max_{x\in[g(t),\,h(t)]}I(t,x)=0\); and **spreading** happens provided that \(h_{\infty}-g_{\infty}=+\infty\) and \(\limsup_{t\to+\infty}\|I(\cdot,t)\|_{C([g(t),h(t)])}>0\). In this section, we always assume that \(({\bf J})\) and \(({\bf H})\) hold. The following proposition directly holds from Theorem 3.5 in [6]. **Proposition 4.1**: _Let \((S,I;g,h)\) be the unique solution of (1.3). If \(h_{\infty}-g_{\infty}<+\infty\), then \(\lim_{t\to+\infty}g^{\prime}(t)=\lim_{t\to+\infty}h^{\prime}(t)=0\)._ Next, we discuss the asymptotic behavior of the solution of problem (1.3) when \(h_{\infty}-g_{\infty}<+\infty\). **Theorem 4.2**: _Let \((S,I;g,h)\) be the unique solution of problem (1.3) with \(h_{\infty}-g_{\infty}<+\infty\), then \(\lim_{t\to+\infty}\max_{x\in[g(t),\,h(t)]}I(t,x)=0\), \(\lim_{t\to+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\) and \(\lambda_{p}({\cal L}_{\{(g_{\infty},\,h_{\infty}),\,d\}}+a(x))\leq 0\)._ **Proof.** Assume by contradiction that \(\lim_{t\to+\infty}\max_{x\in[g(t),\,h(t)]}I(t,x)>0\), there exists \(\epsilon_{1}>0\) and sequence \(\{(t_{i},x_{i})\}_{i=1}^{\infty}\) with \(x_{i}\in[g(t),h(t)]\) and \(t_{i}\to+\infty\) as \(i\to+\infty\) such that \(I(t_{i},x_{i})\geq\frac{\epsilon_{1}}{2}\) for \(i\in\mathbb{N}\). Since \(g_{\infty}<g(t)<x_{i}<h(t)<h_{\infty}\), there exists a subsequence \(\{x_{i_{j}}\}_{j=1}^{\infty}\) such that \(x_{i_{j}}\to x_{0}\in(g_{\infty},h_{\infty})\) as \(j\to+\infty\). 
For \(t\in(-t_{i},+\infty)\) and \(x\in(g(t+t_{i}),h(t+t_{i}))\), define \[\overline{I}_{i}(t,x)=I(t+t_{i},x).\] Applying Theorem 2.2 gives that \(I\) and \(S\) are positive and bounded, and then \(\overline{I}_{i}(t,x)\) satisfies \[\overline{I_{it}}\geq d\int_{g_{i}(t)}^{h_{i}(t)}J(x-y)\overline{I_{i}}(t,y)dy -d\overline{I_{i}}(t,x)-\mu_{2}\overline{I_{i}}-\gamma(b,0,x)\overline{I_{i}}, \,t>-t_{i},\,x\in(g_{i}(t),h_{i}(t)).\] We next consider the following auxiliary problem \[\left\{\begin{array}{ll}u_{t}=d\int_{g_{i}(t)}^{h_{i}(t)}J(x-y)u(t,y)dy-du(t, x)-\mu_{2}u-\gamma(b,0,x)u,&t>-t_{i},\,x\in(g_{i}(t),h_{i}(t)),\\ u(0,x)=\overline{I}_{i}(0,x),&x\in(g_{i}(t),h_{i}(t)),\end{array}\right.\] it follows that \(u(t,x)\to U(t,x)\) as \(i\to+\infty\), and \(U(t,x)\) satisfies \[\left\{\begin{array}{ll}U_{t}(t,x)=d\int_{g_{\infty}}^{h_{\infty}}J(x-y)U(t, y)dy-dU(t,x)-\mu_{2}U-\gamma(b,0,x)U,&t\in\mathbb{R},\,\,\,x\in(g_{\infty},h_{ \infty}),\\ U(0,x_{0})=\lim_{i\to+\infty}\overline{I}_{i}(0,x_{i})=\lim_{i\to+\infty}I(t_{i},x _{i})\geq\frac{\epsilon_{1}}{2}>0,\end{array}\right.\] and then \(U(t,x)>0\) in \(\mathbb{R}\times(g_{\infty},h_{\infty})\) by the maximum principle [15] for the nonlocal problem. On the other hand, considering \(h_{\infty}-g_{\infty}<+\infty\) and Proposition 4.1, we have \(\lim\limits_{t\to+\infty}g^{\prime}(t)=\lim\limits_{t\to+\infty}h^{\prime}(t)=0\) as \(t\to+\infty\), which means \[0=\lim\limits_{i\to+\infty}h^{\prime}(t+t_{i}) = k\lim\limits_{i\to+\infty}\int_{g(t+t_{i})}^{h(t+t_{i})}\int_{h (t+t_{i})}^{+\infty}J(x-y)\overline{I}_{i}(t,x)dydx\] \[\geq k\int_{g(\infty)}^{h(\infty)}\int_{h(\infty)}^{+\infty}J(x-y)U(t,x)dydx\] \[> 0\] and \[\begin{array}{rcl}0=\lim\limits_{i\to+\infty}g^{\prime}(t+t_{i})&=&-k\lim \limits_{i\to+\infty}\int_{g(t+t_{i})}^{h(t+t_{i})}\int_{-\infty}^{g(t+t_{i})} J(x-y)\overline{I}_{i}(t,x)dydx\\ &\leq&-k\int_{g(\infty)}^{h(\infty)}\int_{-\infty}^{g(\infty)}J(x-y)U(t,x)dydx \\ &<&0.\end{array}\] It is a contradiction. Hence, \(\lim\limits_{t\to+\infty}\max\limits_{x\in[g(t),h(t)]}I(t,x)=0\). Next, we will prove that \(\lim\limits_{t\to+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\). Since \(\lim\limits_{t\to+\infty}\max\limits_{x\in[g(t),h(t)]}I(t,x)=0\), for any \(\epsilon>0\), one can choose a \(T>0\) large such that \[0<I(t,x)<\epsilon\] for \(t>T\), \(x\in(g(t),h(t))\). Obviously, \(S(t,x)\) satisfies \[S_{t}\geq d\mathcal{L}_{1}[S]+\sigma-\mu_{1}S-\epsilon\beta(m,0,x)S,\ t>T,\ x\in \mathbb{R},\] and then \(S(t,x)\geq\underline{S}(t)\), where \(\underline{S}(t)\) is the solution to problem \[\left\{\begin{array}{ll}\underline{S}_{t}=\sigma-(\mu_{1}+\epsilon\sup \limits_{x\in\mathbb{R}}\beta(m,0,x))\underline{S},\ \ t>T,\\ \underline{S}(T)=\inf\limits_{x\in\mathbb{R}}S(T,x).\end{array}\right.\] It follows from Lemma 2.4 in [26] that \(\lim\limits_{t\to+\infty}\underline{S}(t)=\frac{\sigma}{\mu_{1}+\epsilon\sup \limits_{x\in\mathbb{R}}\beta(m,0,x)}\). Therefore, \(\liminf\limits_{t\to+\infty}S(t,x)\geq\frac{\sigma}{\mu_{1}+\epsilon\sup \limits_{x\in\mathbb{R}}\beta(m,0,x)}\). Letting \(\epsilon\to 0\) yields \[\liminf\limits_{t\to+\infty}S(t,x)\geq\frac{\sigma}{\mu_{1}}. 
\tag{4.1}\] On the other hand, \(S(t,x)\) satisfies \[S_{t}\leq d\mathcal{L}_{1}[S]+\sigma-\mu_{1}S+\gamma(b,\epsilon,x)\epsilon,\ t>T,\ x\in\mathbb{R}.\] Let \(\overline{S}(t)\) be the solution of \[\left\{\begin{array}{ll}\overline{S}_{t}=\sigma-\mu_{1}\overline{S}(t)+\sup \limits_{x\in\mathbb{R}}\gamma(b,\epsilon,x)\epsilon,\ \ t>T,\\ \overline{S}(T)=\sup\limits_{x\in\mathbb{R}}S(T,x).\end{array}\right.\] Apparently, \(S(t,x)\leq\overline{S}(t)\) by Lemma 2.4 in [26] and \(\lim\limits_{t\to+\infty}\overline{S}(t)=\frac{1}{\mu_{1}}[\sigma+\sup \limits_{x\in\mathbb{R}}\gamma(b,\epsilon,x)\epsilon]\). Therefore, \[\limsup\limits_{t\to+\infty}S(t,x)\leq\lim\limits_{t\to+\infty}\overline{S}(t )=\frac{1}{\mu_{1}}[\sigma+\sup\limits_{x\in\mathbb{R}}\gamma(b,\epsilon,x) \epsilon].\] Letting \(\epsilon\to 0\) gives \[\limsup_{t\rightarrow+\infty}S(t,x)\leq\frac{\sigma}{\mu_{1}}, \tag{4.2}\] which together with (4.1) yields \(\lim_{t\rightarrow+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\) uniformly for \(x\in\mathbb{R}\). In what follows, we prove that \(\lambda_{p}(\mathcal{L}_{\{(g_{\infty},h_{\infty}),\,d\}}+a(x))\leq 0\). We argue by contradiction and suppose that \(\lambda_{p}(\mathcal{L}_{\{(g_{\infty},h_{\infty}),\,d\}}+a(x))>0\). Owing to the continuous dependence of \(\lambda_{p}(\mathcal{L}_{\{(g_{\infty},h_{\infty}),\,d\}}+a(x))\) on \(a(x)\) and \((g_{\infty},h_{\infty})\), there exists a small \(\epsilon\) such that \(\lambda_{p}(\mathcal{L}_{\{(g_{\infty}+\epsilon,h_{\infty}-\epsilon),\,d\}}+a (x)-\beta(m,0,x)\epsilon)>0\). Furthermore, in view of \(S(t,x)\rightarrow\frac{\sigma}{\mu_{1}}\) as \(t\rightarrow+\infty\) and \(h_{\infty}-g_{\infty}<+\infty\), there exists a \(T^{*}>0\) such that \[g(t)<g_{\infty}+\epsilon,\ h(t)>h_{\infty}-\epsilon,\ t>T^{*},\] \[S(t,x)>\frac{\sigma}{\mu_{1}}-\epsilon,\ t>T^{*},\ x\in\mathbb{R}.\] Then, for \(t>T^{*}\), \(x\in(g(t),h(t))\), \[I_{t}(t,x) = d\int_{g(t)}^{h(t)}J(x-y)I(t,y)dy-dI(t,x)-\mu_{2}I+\beta(m,I,x)SI- \gamma(b,I,x)I\] \[\geq d\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}J(x-y)I(t,y)dy -dI(t,x)-\mu_{2}I+\beta(m,I,x)(\tfrac{\sigma}{\mu_{1}}-\epsilon)I-\gamma(b,I, x)I\] \[\geq d\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}J(x-y)I(t,y)dy -dI(t,x)-\mu_{2}I+\beta(m,0,x)(\tfrac{\sigma}{\mu_{1}}-\epsilon)I-\gamma(b,0, x)I.\] Let \(\phi(x)\) be the eigenfunction corresponding to \(\lambda_{p}(\mathcal{L}_{\{(g_{\infty}+\epsilon,h_{\infty}-\epsilon),\,d\}}+a (x)-\beta(m,0,x)\epsilon)\) and \(\|\phi(x)\|_{L^{\infty}}=1\), and then \(\phi(x)\) satisfies \[d\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}J(x-y)\phi(y) dy-d\phi-\mu_{2}\phi+\beta(m,0,x)(\tfrac{\sigma}{\mu_{1}}-\epsilon)\phi- \gamma(b,0,x)\phi\] \[= d\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}J(x-y)\phi(y) dy-d\phi+a(x)\phi-\beta(m,0,x)\epsilon\phi\] \[= \lambda_{p}(\mathcal{L}_{\{(g_{\infty}+\epsilon,h_{\infty}- \epsilon),\,d\}}+a(x)-\beta(m,0,x)\epsilon)\phi.\] If we choose \(\delta\) sufficiently small such that \(\delta\phi(x)\leq I(T^{*},x)\) for \(x\in[g_{\infty}+\epsilon,h_{\infty}-\epsilon]\), then \[I(t,x)\geq\delta\phi(x)>0\ \ \mbox{for}\ \ t>T^{*},\ \ x\in[g_{\infty}+ \epsilon,h_{\infty}-\epsilon]\] by the comparison principle in [15], which leads to a contradiction to the fact \(\lim_{t\rightarrow+\infty}\max_{x\in[g(t),\,h(t)]}I(t,x)=0\). \(\Box\) **Theorem 4.3**: _Suppose \(\lambda_{p}(\mathcal{L}_{\{(-h_{0},\,h_{0}),\,d\}}+a(x))<0\). 
Then \(h_{\infty}-g_{\infty}<+\infty\), \(\lim_{t\rightarrow+\infty}\max_{x\in[g(t),\,h(t)]}I(t,x)=0\) and \(\lim_{t\rightarrow+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\) if \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\) is sufficiently small._ **Proof.** Applying Lemma 2.1 yields \(S(t,x)\leq\frac{\sigma}{\mu_{1}}\) for \(t>0,\,x\in\mathbb{R}\) if \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\leq \frac{\sigma}{\mu_{1}}\). In view of \(\lambda_{p}(\mathcal{L}_{\{(-h_{0},\,h_{0}),\,d\}}+a(x))<0\), there exists a \(\epsilon>0\) small such that \(\lambda_{p}(\mathcal{L}_{\{(-h_{\epsilon},h_{\epsilon}),\,d\}}+a(x))<0\) with \(h_{\epsilon}=h_{0}+\epsilon\) by Theorem 3.3\((i)\). Let \(\phi(x)\) be the eigenfunction of the principal eigenvalue \(\lambda_{p}(\mathcal{L}_{\{(-h_{\epsilon},\,h_{\epsilon}),\,d\}}+a(x))\), which satisfies \[d\int_{-h_{\epsilon}}^{h_{\epsilon}}J(x-y)\phi(y)dy-d\phi(x)+a(x)\phi(x)= \lambda_{p}(\mathcal{L}_{\{(-h_{\epsilon},\,h_{\epsilon}),\,d\}}+a(x))\phi(x), \ x\in(-h_{\epsilon},h_{\epsilon}).\] Denote \[M=\delta C(\int_{-h_{\epsilon}}^{h_{\epsilon}}\phi(x)dx)^{-1},\ \ C=\frac{h_{ \epsilon}-h_{0}}{k},\] \[\overline{h}(t)=h_{0}+kC[1-e^{-\delta t}],\ \ \overline{g}(t)=-\overline{h}(t),\] \[\overline{h}^{\prime}(t)=kC\delta e^{-\delta t},\ \ \overline{I}(t,x)=Me^{-\delta t}\phi(x).\] By direct calculations, we have \[k\int_{\overline{g}}^{\overline{h}}\int_{\overline{h}}^{\infty} J(x-y)\overline{I}(t,x)dydx\] \[\leq k\int_{\overline{g}}^{\overline{h}}\overline{I}(t,x)dx\] \[= k\delta Ce^{-\delta t}=\overline{h}^{\prime}(t),\ \ t>0.\] In a similar way, one can deduce \(-k\int_{\overline{g}}^{\overline{h}}\int_{-\infty}^{\overline{g}}J(x-y) \overline{I}(t,x)dydx\geq\overline{g}^{\prime}(t)\) for \(t>0\). Clearly, \(\overline{I}(t,\overline{g}(t))>0\) and \(\overline{I}(t,\overline{h}(t))>0\) for \(t>0\). For \(t>0\) and \(x\in(\overline{g},\overline{h})\), we obtain \[\overline{I}_{t}(t,x)-d\int_{\overline{g}(t)}^{\overline{h}(t)}J (x-y)\overline{I}(t,y)dy+d\overline{I}(t,x)+\mu_{2}\overline{I}-\beta(m, \overline{I},x)\overline{S}\overline{I}+\gamma(b,\overline{I},x)\overline{I}\] \[\geq -[d\int_{\overline{g}(t)}^{\overline{h}(t)}J(x-y)Me^{-\delta t} \phi(y)dy-dMe^{-\delta t}\phi(x)-\mu_{2}Me^{-\delta t}\phi(x)]\] \[-\delta\overline{I}+\gamma(b,\overline{I},x)\overline{I}-\beta(m, \overline{I},x)\overline{S}\overline{I}\] \[\geq -\delta\overline{I}-(\lambda_{p}({\cal L}_{(-h_{c},h_{c})}+a(x))+ \gamma(b,0,x)-\beta(m,0,x)\frac{\sigma}{\mu_{1}})\overline{I}+\gamma(b, \overline{I},x)\overline{I}-\beta(m,\overline{I},x)\frac{\sigma}{\mu_{1}} \overline{I}\] \[= \overline{I}(-\delta-\lambda_{p}({\cal L}_{(-h_{c},h_{c})}+a(x))- \gamma(b,0,x)+\gamma(b,\overline{I},x)+\frac{\sigma}{\mu_{1}}(\beta(m,0,x)- \beta(m,\overline{I},x))).\] Recalling that \(\gamma(b,\overline{I},x)\rightarrow\gamma(b,0,x)\) and \(\beta(m,\overline{I},x)\rightarrow\beta(m,0,x)\) as \(\delta\to 0\), we can choose \(\delta\) small enough so that \[\overline{I}_{t}(t,x)-d\int_{\overline{g}(t)}^{\overline{h}(t)}J(x-y) \overline{I}(t,y)dy+d\overline{I}(t,x)+\mu_{2}\overline{I}-\beta(x)\overline{S }\overline{I}+\gamma(x,b,\overline{I})\overline{I}\geq 0.\] Moreover, if \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},h_{0}])}\) sufficiently small, then \(I_{0}\leq M\phi(x)\), \(x\in[-h_{0},h_{0}]\). 
Applying the comparison principle gives \[\overline{g}(t)\leq g(t),\ \ h(t)\leq\overline{h}(t)\,\mbox{and}\ \ I(\mbox{t}, \mbox{x})\leq\overline{I}(\mbox{t},\mbox{x})\] for \(t>0\) and \(g(t)<x<h(t)\). Therefore, \[\lim_{t\rightarrow+\infty}I(t,x)\leq\lim_{t\rightarrow+\infty}\overline{I}(t,x)=0 \tag{4.3}\] and \[h_{\infty}-g_{\infty}\leq 2h_{\epsilon}<+\infty,\] which gives that \(\lim_{t\rightarrow+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\) uniformly for \(x\in\mathbb{R}\) by Theorem 4.2. \(\Box\) **Remark 4.4**: _It follows from the proof of Theorem 4.3 that \(M\rightarrow+\infty\) as \(k\to 0\). Therefore, there exists a \(k_{*}>0\) such that (4.3) holds for all \(k\in(0,k_{*})\) for any given initial function pair \((S_{0}(x),I_{0}(x))\)._ **Theorem 4.5**: _If \(-g_{\infty}=h_{\infty}=+\infty\) and \(\sup_{x\in\mathbb{R}}a(x)>0\), then \(\limsup_{t\rightarrow+\infty}\|I(\cdot,t)\|_{C([g(t),\,h(t)])}>0\)._ **Proof.** Conversely, suppose that \(\lim_{t\to+\infty}\|I(t,\cdot)\|_{C([g(t),\,h(t)])}=0\). It follows from Theorem 4.2 that \[\lim_{t\to+\infty}S(t,x)=\frac{\sigma}{\mu_{1}}\ \ \mbox{uniformly for}\ \ x\in\mathbb{R}. \tag{4.4}\] Since \(-g_{\infty}=h_{\infty}=+\infty\), from \((iii)\) of Theorem 3.3, there exists a \(T^{*}>0\) large enough such that \[\lambda_{p}({\cal L}_{\{(g(T^{*}),\,h(T^{*})),\,d\}}+a(x))>0\ \ \ \mbox{for}\ \ \ t\geq T^{*}.\] Now, we consider the eigenvalue problem \[d\int_{g(T^{*})}^{h(T^{*})}J(x-y)\psi(y)dy-d\psi(x)+a(x)\psi(x)=\lambda_{p}({ \cal L}_{\{(g(T^{*}),\,h(T^{*})),\,d\}}+a(x))\psi(x),\ x\in(g(T^{*}),h(T^{*})),\] and positive function \(\psi(x)\) with \(\|\psi\|_{L^{\infty}((g(T^{*}),\,h(T^{*})))}=1\) is its eigenfunction to the principal eigenvalue \(\lambda_{p}({\cal L}_{\{(g(T^{*}),\,h(T^{*})),\,d\}}+a(x))\). In view of (4.4), for any given \(0<\epsilon<\min\{\frac{\sigma}{\mu_{1}},\,\lambda_{p}({\cal L}_{\{(g(T^{*}), \,h(T^{*})),\,d\}}+a(x))(\sup_{x\in\mathbb{R}}\beta)^{-1}\}\), there exists a \(T^{**}>T^{*}\) such that \[S(t,x)>\frac{\sigma}{\mu_{1}}-\epsilon,\ \ \ t\geq T^{**},\ \ x\in[g(T^{*}),h(T^{*})],\] and then \(I(t,x)\) satisfies \[\left\{\begin{array}{ll}I_{t}\geq d\int_{g(T^{*})}^{h(T^{*})}J(x-y)I(t,y)dy -dI(t,x)-\mu_{2}I&\\ \ \ \ \ \ \ \ \ \ +\beta(m,I,x)(\frac{\sigma}{\mu_{1}}-\epsilon)I-\gamma(b,I,x)I,&t> T^{**},\ x\in(g(T^{*}),h(T^{*})),\\ I(T^{**},x)>0,&g(T^{*})\leq x\leq h(T^{*}).\end{array}\right.\] We now construct a suitable lower solution for the following auxiliary problem \[\left\{\begin{array}{ll}W_{t}=d\int_{g(T^{*})}^{h(T^{*})}J(x-y)W(t,y)dy-dW(t,x)&\\ \ \ \ \ \ \ \ -\mu_{2}W+\beta(m,W,x)(\frac{\sigma}{\mu_{1}}-\epsilon)W- \gamma(b,W,x)W,&t>T^{**},\ x\in(g(T^{*}),h(T^{*})),\\ W(T^{**},x)=I(T^{**},x),&g(T^{*})\leq x\leq h(T^{*}).\end{array}\right. \tag{4.5}\] Choose \[\underline{W}(t,x)=\delta\psi(x),\ \ \ t>T^{**},\ \ g(T^{*})\leq x\leq h(T^{*}),\] where \(\delta>0\) is small enough such that \(\delta\psi(x)\leq I(T^{**},x)\) for \(x\in[g(T^{*}),h(T^{*})]\). 
For \(t>T^{**}\) and \(g(T^{*})<x<h(T^{*})\), direct computation yields \[\underline{W}_{t}-d\int_{g(T^{*})}^{h(T^{*})}J(x-y)\underline{W}( y)dy+d\underline{W}+\mu_{2}\underline{W}-\beta(m,\underline{W},x)(\frac{ \sigma}{\mu_{1}}-\epsilon)\underline{W}+\gamma(b,\underline{W},x)\underline{W}\] \[= -\delta(d\int_{g(T^{*})}^{h(T^{*})}J(x-y)\psi(t,y)dy-d\psi(x)-\mu _{2}\psi+\beta(m,\delta\psi,x)(\frac{\sigma}{\mu_{1}}-\epsilon)\psi-\gamma(b, \delta\psi,x)\psi)\] \[= -\delta\psi(\lambda_{p}+\gamma(b,0,x)-\beta(m,0,x)\frac{\sigma}{ \mu_{1}}+\beta(m,\delta\psi,x)(\frac{\sigma}{\mu_{1}}-\epsilon)-\gamma(b, \delta\psi,x))\] \[\leq \delta\psi(-\lambda_{p}({\cal L}_{\{(g(T^{*}),h(T^{*})),\,d\}}+a (x))+\beta(m,\delta\psi,x)\epsilon)\] \[< 0.\] Note that the boundaries \(g(T^{*})\) and \(h(T^{*})\) are fixed, so there is no need to compare the boundary values of \(W(t,x)\) and \(I(t,x)\) by Lemma 3.1 in [15]. Applying the comparison principle in [15] gives \[I(t,x)\geq W(t,x)\geq\underline{W}(t,x)=\delta\psi(x)\ \ \mbox{in}\ \ [T^{**},+\infty)\times(g(T^{*}),h(T^{*})).\] Therefore, \(\liminf_{t\to+\infty}I(t,x)\geq\liminf_{t\to+\infty}W(t,x)\geq\delta\psi(0)>0\), which is a contradiction. \(\Box\) **Remark 4.6**: _Suppose that \(a(x)\) is a positive constant. If \(h_{\infty}-g_{\infty}=+\infty\), then \(\limsup_{t\to+\infty}\|I(\cdot,t)\|_{C([g(t),\,h(t)])}>0\). In the special case, similar to Theorem 3.10 in [5], a spreading-vanishing dichotomy holds._ Suppose that \[a(0)\geq d,\] we know that \(\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))>0\) for any interval \((L_{1},L_{2})\) by Theorem 3.3 and \((i)\) of Proposition 3.1, which yields the following conclusion by Theorem 4.2. **Theorem 4.7**: _If \(a(0)\geq d\) holds, then spreading always occurs for (1.3)._ Next we consider the case \(0<a(0)<d\). According to Theorem 3.3 and \((i)\) of Proposition 3.1, there exists \(L^{*}>0\) such that \[\lambda_{p}({\cal L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\left\{\begin{array}{ll}< 0,&(L_{1},L_{2})\subset(-L^{*},L^{*}),\\ =0,&-L_{1}=L_{2}=L^{*},\\ >0,&(-L^{*},L^{*})\subset(L_{1},L_{2}).\end{array}\right.\] **Theorem 4.8**: _Assume \(0<a(0)<d\) holds, then_ \((i)\) _if \(h_{0}\geq L^{*}\), then spreading always occurs for (1.3)._ \((ii)\) _if \(h_{0}<L^{*}\), then there exists a positive constant \(k^{*}\) such that \(h_{\infty}-g_{\infty}=\infty\) when \(k>k^{*}\)._ **Proof.**\((i)\) holds from Theorem 4.2 since \[\lambda_{p}({\cal L}_{\{(g_{\infty},\,h_{\infty}),\,d\}}+a(x))>\lambda_{p}({ \cal L}_{\{(-L^{*},\,L^{*}),\,d\}}+a(x))=0.\] In what follows, we prove \((ii)\). Notice that \[-\mu_{2}+\beta(m(x),I,x)S-\gamma(b(x),I,x)>-\mu_{2}-\gamma(b(x),I,x)>-C\] for some \(C>0\). Clearly \(I(t,x)\) satisfies \[\left\{\begin{array}{ll}I_{t}\geq d\int_{g(t)}^{h(t)}J(x-y)I(t,y)dy-dI(t,x) -CI(t,x),&t>0,\,x\in(g(t),h(t)),\\ I(t,x)=0,&t\geq 0,\,\,x\in\mathbb{R}\backslash(g(t),h(t)),\\ h^{\prime}(t)=k\int_{g(t)}^{h(t)}\int_{h(t)}^{+\infty}J(x-y)I(t,x)dydx,&t>0,\\ g^{\prime}(t)=-k\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)I(t,x)dydx,&t>0,\\ g(0)=-h_{0},\,\,h(0)=h_{0},&x\in\mathbb{R},\\ I(0,x)=I_{0}(x),&x\in(-h_{0},h_{0}),\end{array}\right.\] thereby, for any given constant \(M\), there exists \(k^{*}>0\) such that \(h_{\infty}-g_{\infty}>M\) provided that \(k>k^{*}\) by Lemma 3.9 in [16]. Then, \(h_{\infty}-g_{\infty}=+\infty\) by the arbitrariness of \(M\). 
\(\Box\) Noting that the comparison principle for problem (1.3) is not valid, we cannot obtain the monotonicity of the solution for (1.3) with \(k\) and thus cannot take \(k\) as a sharp criterion for the spreading-vanishing dichotomy as in [5]. However, recalling Remark 4.4 and \((ii)\) in Theorem 4.8, we have the following result: **Theorem 4.9**: _Suppose that \(0<a(0)<d\) and \(h_{0}<L^{*}\). For problem (1.3), there exists \(0<k_{*}\leq k^{*}\) such that vanishing occurs if \(0<k<k_{*}\) and spreading happens provided that \(k>k^{*}\)._ Finally, we will discuss the impact of the diffusion coefficient on the vanishing and spreading of infectious disease. Assume that \(\int_{-\infty}^{-2h_{0}}J(z)dz>0\) (or \(\int_{2h_{0}}^{+\infty}J(z)dz>0\)) holds. Using Theorem 3.4 with \(L_{1}=-h_{0}\) and \(L_{2}=h_{0}\), if \(\max_{x\in[-h_{0},\,h_{0}]}a(x)<0\), for any \(d>0\), we have \(\lambda_{p}(\mathcal{L}_{\{(-h_{0},\,h_{0}),\,d\}}+a(x))<0\), while if \(\max_{x\in[-h_{0},\,h_{0}]}a(x)>0\), there exists a \(d^{*}>0\) such that \[\lambda_{p}(\mathcal{L}_{\{(-h_{0},\,h_{0}),\,d\}}+a(x))\left\{\begin{array}{ ll}<0,&d>d^{*},\\ =0,&d=d^{*},\\ >0,&d<d^{*}.\end{array}\right.\] Inspired by the above analysis, combined with Theorem 4.2, we can obtain the following result. **Theorem 4.10**: _Suppose \(\int_{-\infty}^{-2h_{0}}J(z)dz>0\) (or \(\int_{2h_{0}}^{+\infty}J(z)dz>0\)) holds. Then,_ \((i)\) _if \(\max_{x\in[-h_{0},\,h_{0}]}a(x)>0\), there exists a \(d^{*}>0\) such that for \(d<d^{*}\), then spreading occurs; conversely, if \(d>d^{*}\), then vanishing occurs provided that \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\) is small enough;_ \((ii)\) _if \(\max_{x\in[-h_{0},\,h_{0}]}a(x)\leq 0\), for any \(d>0\), then vanishing always occurs as long as \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\) is adequately small._ ## 5 Discussion In this paper, we study a free boundary problem (1.3) with media coverage and hospital bed numbers, which describes a nonlocal diffusive SIS epidemic model. The free boundary describes the moving front of the infected individuals, and the nonlocal diffusion operator characterizes the long-distance spatial movement of individuals. For the SIS model with nonlocal diffusion and free boundaries (1.3), the existence and uniqueness of the global solution are given by using two fixed point theorems (see Theorem 2.2). Then, we define the principal eigenvalue of the integral operator, and analyze the impacts of media coverage and hospital bed number (Theorem 3.2), interval length (Theorem 3.3), and diffusion coefficient (Theorem 3.4) on the principal eigenvalue. In addition, sufficient conditions for disease spreading and vanishing (see Theorems 4.2, 4.3 and 4.5) are given. Finally, we discuss the impact of the principal eigenvalue on the spreading or vanishing of infectious diseases. If \(a(0)\geq d\), then \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))>0\) for any \(L_{1}<L_{2}\), and the disease is always spreading (see Theorem 4.7). If \(0<a(0)<d\), there exists an \(L^{*}>0\), then spreading always appears for \(h_{0}\geq L^{*}\) (see Theorem 4.8); and when \(h_{0}<L^{*}\), the impact of expanding capability \(k\) on the spreading or vanishing of disease is discussed. That is, there exists \(0<k_{*}\leq k^{*}\) such that vanishing occurs if \(0<k<k_{*}\), and spreading happens provided that \(k>k^{*}\) (see Theorem 4.9). 
If \(\max_{x\in[-h_{0},\,h_{0}]}a(x)>0\), there exists a \(d^{*}>0\) such that for \(d<d^{*}\), then the disease spreads; if \(d>d^{*}\) and \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\) is small enough, then the disease vanishes; if \(\max_{x\in[-h_{0},\,h_{0}]}a(x)\leq 0\), then \(d^{*}=0\), that is, then vanishing always appears provided that \(\|S_{0}(x)\|_{L^{\infty}(\mathbb{R})}+\|I_{0}(x)\|_{C([-h_{0},\,h_{0}])}\) is adequately small (see Theorem 4.10). Finally, we may conclude that the differences between nonlocal diffusion in (1.3) and local diffusion in (1.2) in our mathematical analysis are as follows: first, the existence and uniqueness of the global solution for (1.2) are obtained by straightening the boundary and the first-order fixed point theorem. However, owing to lack of compactness, the existence and uniqueness of global solutions for (1.3) are given by using two fixed point theorems. Second, for (1.2), the corresponding principal eigenvalue always exists. However, for the nonlocal diffusion problem, the principal eigenvalue may not exist. In this paper, for the nonlocal diffusion model (1.3), we first define the generalized principal eigenvalue \(\lambda_{p}(\mathcal{L}_{\{(L_{1},\,L_{2}),\,d\}}+a(x))\) of the integral operator, and then show that the generalized principal eigenvalue is the principal eigenvalue under condition (**H**). Third, unlike local diffusion, whose principal eigenvalue is clear, the nonlocal operator leads to more possibilities because of the choice of the kernel function. It is worth mentioning that model (1.3) incorporates media coverage and hospital bed numbers. Based on the monotonicity of the generalized principal eigenvalue on media coverage and hospital bed numbers, we study the influence of the principal eigenvalue on infectious diseases, which implies that large media coverage and hospital bed numbers are beneficial to the prevention and control of disease.
2305.03494
Pairs of Woven continuous frames in Hilbert spaces
In this present paper we introduce weaving Hilbert space frames in the continuous case, we give new approaches for manufacturing pairs of woven continuous frames and we obtain new properties in continuous weaving frame theory related to dual frames. Also, we provide some approaches for constructing weaving continuous frames by using small perturbations.
Hafida Massit, Mohamed Rossafi, Samir Kabbaj
2023-03-20T23:41:36Z
http://arxiv.org/abs/2305.03494v1
# Pairs of woven continuous frames in Hilbert spaces ###### Abstract. In this present paper we introduce weaving Hilbert space frames in the continuous case, we give new approaches for manufacturing pairs of woven continuous frames and we obtain new properties in continuous weaving frame theory related to dual frames. Also, we provide some approaches for constructing weaving continuous frames by using small perturbations. Key words and phrases:Continous frames, Weaving Hilbert space frames, Duality principle 2010 Mathematics Subject Classification: 42C15 ## 1. Introduction The concept of frames in Hilbert spaces has been introduced by Duffin and Schaffer [7] in 1952 to study some deep problems in nonharmonic Fourier series, after the fundamental paper [6] by Daubechies, Grossman and Meyer, frame theory began to be widely used, particularly in the more specialized context of wavelet frames and Gabor frames. Continuous frames defined by Ali, Antoine and Gazeau [1]. Gabrado and Han in [8] called these frames associated with measurable spaces. For more about frames see [10, 12, 13, 14, 15, 16]. Recently, Bemrose et al.[2] has introduced a new concept of weaving frames in separable Hilbert space. This is motivated by a problem regarding distributed signal processing where frames plays an important role. Weaving frames has potential applications in wireless sensor networks that require distributed processing under different frames. The fundamental properties of weaving frames were reviewed by Casazza and Lynch in [3]. Weaving frames were further studied by Casazza, Freeman and Lynch [4]. In this paper, we give new basic properties of weaving continuous frames related to dual frames to survey under which conditions a continuous frame with its dual constitute woven continuous frames, and we give some approaches for constructing concrete pairs of woven continuous frames. ## 2. preliminaries Throughout this paper, we suppose \(\mathcal{H}\) is a separable Hilbert space, \(\mathcal{H}_{m}\) an \(m-\) dimensional Hilbert space, \(I\) the identity operator on \(\mathcal{H}\), \((\mathfrak{A},\mu)\) be a measure space with positive measure \(\mu\). A family of vectors \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) in a separable Hilbert \(\mathcal{H}\) is called a Riesz basis if \(\overline{span}\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}=\mathcal{H}\) and there exist constants \(0<A_{F}\leq B_{F}<\infty\), such that \[A_{F}\int_{\mathfrak{A}}|\alpha_{\varsigma}|^{2}d\mu(\varsigma)\leq\|\int_{ \mathfrak{A}}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\|^{2}\leq B_{F} \int_{\mathfrak{A}}|\alpha_{\varsigma}|^{2}d\mu(\varsigma),\;\forall\{\alpha_ {\varsigma}\}_{\varsigma}\in L^{2}(\mathfrak{A}).\] The constants \(A_{F}\) and \(B_{F}\) are called Riesz basis bounds. A family of vectors \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) in a separable Hilbert \(\mathcal{H}\) is said to be a continuous frame if there exist constants \(0<A_{F}\leq B_{F}<\infty\), such that \[A_{F}\|f\|^{2}\leq\|\int_{\mathfrak{A}}|\langle f,F_{\varsigma}\rangle|^{2}d \mu(\varsigma)\leq B_{F}\|f\|^{2},\;\forall f\in\mathcal{H}, \tag{2.1}\] then the constants \(A_{F}\) and \(B_{F}\) are called frame bounds. The family \(\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) is said to be a Bessel sequence whenever in 2.1, the right hand side holds. In the case of \(A_{F}=B_{F}=1\), \(\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) called a Parseval frame. And if \(A_{F}=B_{F}\) it is called a tight frame. 
Given a frame \(F=\{F_{\varsigma}\}\), the frame operator is defined by \[S_{F}f=\int_{\mathfrak{A}}\langle f,F_{\varsigma}\rangle F_{\varsigma}d\mu( \varsigma).\] It is a bounded, invertible, and self-adjoint operator. Also, the synthesis operator \(T_{F}:L^{2}(\mathfrak{A},\mu)\to\mathcal{H}\) defined by \(T_{F}(f)=\int_{\mathfrak{A}}f(\varsigma)F(\varsigma)d\mu(\varsigma)\). The frame operator can be written as \(S_{F}=T_{F}T_{F}^{*}\) where \(T_{F}^{*}:\mathcal{H}\to L^{2}(\mathfrak{A},\mu)\), the adjoint of \(T_{F}\), given by \(T_{F}^{*}(f)(\varsigma)=\{\langle f,F(\varsigma)\rangle\}_{\varsigma\in \mathfrak{A}}\) is called the analysis operator. The family \(\{S_{F}^{-1}F\}_{\varsigma\in\mathfrak{A}}\) is also a frame for \(\mathcal{H}\), called the canonical dual frame. In general, a continuous frame \(\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\subset\mathcal{H}\) is called an alternate dual for \(\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) if \[f=\int_{\mathfrak{A}}\langle f,G_{\varsigma}\rangle F_{\varsigma}d\mu(\varsigma ),f\in\mathcal{H}. \tag{2.2}\] Every dual frame is of the form \(\{S_{F}^{-1}F_{\varsigma}+v_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\), with \(\{v_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) a Bessel sequence that satisfies \[\int_{\mathfrak{A}}\langle f,v_{\varsigma}\rangle F_{\varsigma}=0. \tag{2.3}\] **Definition 2.1**.: A family of continuous frames \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{A},1\leq\nu\leq N}\) in Hilbert space \(\mathcal{H}\) is said to be a continuous woven if there are universal constants \(A\) and \(B\) so that for every partition \(\{\mathfrak{B}_{\nu}\}_{1\leq\nu\leq N}\) of \(\mathfrak{A}\), the family \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{B}_{\nu},1\leq\nu\leq N}\) is a continuous frame for \(\mathcal{H}\) with bounds \(A\) and \(B\), respectively. The family \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{B}_{\nu},1\leq\nu\leq N}\) is called a continuous weaving. If for every partition \(\{\mathfrak{B}_{\nu}\}_{1\leq\nu\leq N}\) of \(\mathfrak{A}\), the family \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{B}_{\nu},1\leq\nu\leq N}\) is a continuous frame for \(\mathcal{H}\), then the family \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{A},1\leq\nu\leq N}\) is called weakly continuous woven. Casazza- Freeman, and Lynch proved in [4] that the weaker form of weaving is equivalent to weaving using the uniform boundedness principle. **Theorem 2.2**.: _[_4_]_ _Given two continuous frames \(\{F_{\varsigma}\}\) and \(\{G_{\varsigma}\}\) for \(\mathcal{H}\), the following are equivalent:_ 1. _The two continuous frames are continuous woven._ 2. _The two continuous frames are weakly continuous woven._ **Proposition 2.3**.: _[_2_]_ _If \(\{F_{\varsigma,\nu}\}_{\varsigma\in\mathfrak{A},1\leq\nu\leq N}\) is a family of Bessel sequences for \(\mathcal{H}\) with a Bessel bound \(B_{\nu}\) for all \(1\leq\nu\leq N\), then every weaving is a Bessel sequence with the Bessel bound \(\sum_{\nu=1}^{N}B_{\nu}\)._ **Proposition 2.4**.: _[_2_]_ _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a continuous frame and \(T\) an invertible operator satisfying \(\|I-T\|^{2}<\dfrac{A}{B}\). 
Then, \(F\) and \(TF\) are continuous woven with the universal lower bound \((\sqrt{A_{F}}-\sqrt{B_{F}}\|I-T\|)^{2}\)._ **Definition 2.5**.: _[_2_]_ _If \(U_{1}\) and \(U_{2}\) are subspaces of \(\mathcal{H}\), let_ \[d_{U_{1}}(U_{2})=\inf\{\|f-g\|:\;f\in U_{1},g\in S_{U_{2}}\}\] _and_ \[d_{U_{2}}(U_{1})=\inf\{\|f-g\|:\;f\in S_{U_{1}},g\in U_{2}\},\] _where \(S_{U_{i}}=S_{\mathcal{H}}\cap U_{i}\) and \(S_{\mathcal{H}}\) is the unit sphere in \(\mathcal{H}\). Then, the distance between \(U_{1}\) and \(U_{2}\) is defined as_ \[d(U_{1},U_{2})=\min\{d_{U_{1}}(U_{2}),d_{U_{2}}(U_{1})\}.\] **Theorem 2.6**.: _[_2_]_ _If \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) and \(G=\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) are two continuous Riesz bases for \(\mathcal{H}\), then the following are equivalent:_ 1. \(F\) _and_ \(G\) _are continuous woven._ 2. _For every_ \(K\subset\mathfrak{A}\)_,_ \(d(\overline{span}\{F_{\varsigma}\}_{\varsigma\in K},\overline{span}\{G_{ \varsigma}\}_{\varsigma\in K^{C}})>0.\)__ 3. _There is a constant_ \(t>0\) _so that for every_ \(K\subset\mathfrak{A}\)_,_ \[d_{F_{K},G_{K^{C}}}:=d(\overline{span}\{F_{\varsigma}\}_{\varsigma\in K}, \overline{span}\{G_{\varsigma}\}_{\varsigma\in K^{C}})\geq t.\] _Here, \(K^{C}\) denotes the complement of \(K\) in \(\mathfrak{A}\)._ ## 3. Main results We now examine some conditions under which a continuous frame and its (approximate) duals are woven. This leads us to construct some concrete pairs of woven frames. **Theorem 3.1**.: _Suppose that \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) is a continuous redundant frame so that_ \[\|I-S_{F}^{-1}\|^{2}<\frac{A_{F}}{B_{F}}. \tag{3.1}\] _Then, \(F\) has infinitely many dual frames \(\{G_{\varsigma}\}_{\varsigma}\) such that \(\{G_{\varsigma}\}_{\varsigma}\) and \(\{F_{\varsigma}\}_{\varsigma}\) are continuous woven._ Proof.: By Proposition 2.4, \(F\) and \(\{S_{F}^{-1}F_{\varsigma}\}\) are woven frames for \(\mathcal{H}\) with the universal lower bound \(\mathcal{R}:=(\sqrt{A_{F}}-\sqrt{B_{F}}\|I-S_{F}^{-1}\|)^{2}\). Now, let \(V=\{v_{\varsigma}\}_{\varsigma}\) be a Bessel sequence with bound \(B_{V}\) that satisfies \(\int_{\mathfrak{A}}\langle f,v_{\varsigma}\rangle F_{\varsigma}d\mu(\varsigma)=0\) for all \(f\in\mathcal{H}\), and let \(\varepsilon>0\) be so small that \[\varepsilon^{2}B_{V}+2\varepsilon\sqrt{B_{V}/A_{F}}<\mathcal{R}. \tag{3.2}\] Then, \(G_{\beta}:=\{S_{F}^{-1}F_{\varsigma}+\beta v_{\varsigma}\}_{\varsigma}\) is a dual frame of \(F\) for all \(0<\beta<\varepsilon\). To show that \(F\) and \(G_{\beta}\) constitute woven continuous frames for \(\mathcal{H}\), note that Proposition 2.3 provides a universal upper bound, so we only need to prove the existence of a universal lower bound. Suppose \(\mathfrak{B}\subset\mathfrak{A}\).
Then \[\int_{\mathfrak{B}}|\langle f,F_{\varsigma}\rangle|^{2}d\mu( \varsigma)+\int_{\mathfrak{B}^{C}}|\langle f,S_{F}^{-1}F_{\varsigma}+\beta v_{ \varsigma}\rangle|^{2}d\mu(\varsigma)\] \[= \int_{\mathfrak{B}}|\langle f,F_{\varsigma}\rangle|^{2}d\mu( \varsigma)+\int_{\mathfrak{B}^{C}}|\langle f,S_{F}^{-1}F_{\varsigma}\rangle+ \langle f,\beta v_{\varsigma}\rangle|^{2}d\mu(\varsigma)\] \[\geq\int_{\mathfrak{B}}|\langle f,F_{\varsigma}\rangle|^{2}d\mu( \varsigma)+\int_{\mathfrak{B}^{C}}|\;|\langle f,S_{F}^{-1}F_{\varsigma}\rangle| -|\langle f,\beta v_{\varsigma}\rangle|\;|^{2}d\mu(\varsigma)\] \[\geq\int_{\mathfrak{B}}|\langle f,F_{\varsigma}\rangle|^{2}d\mu( \varsigma)+\int_{\mathfrak{B}^{C}}|\langle f,S_{F}^{-1}F_{\varsigma}\rangle|^{2}d\mu(\varsigma)\] \[-\int_{\mathfrak{B}^{C}}|\langle f,\beta v_{\varsigma}\rangle|^{2 }d\mu(\varsigma)-2\int_{\mathfrak{B}^{C}}|\langle f,S_{F}^{-1}F_{\varsigma} \rangle|\;|\langle f,\beta v_{\varsigma}\rangle|d\mu(\varsigma)\] \[\geq(\mathcal{R}-\beta^{2}B_{V}-2\beta\sqrt{B_{V}/A_{F}})\|f\|^{2}.\] By (3.2), the constant \(\mathcal{R}-\beta^{2}B_{V}-2\beta\sqrt{B_{V}/A_{F}}\) is positive for all \(0<\beta<\varepsilon\), which gives the required universal lower bound. **Proposition 3.2**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a redundant continuous frame for \(\mathcal{H}\) and suppose there exists an operator \(T\in B(\mathcal{H})\) so that_ \[\|I-T\|<1\,,\quad\|I-T^{*}S_{F}^{-1}\|^{2}<\frac{A_{F}}{B_{F}}. \tag{3.3}\] _Then, \(F\) has infinitely many approximate dual frames \(\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) for which \(\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) and \(\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) are continuous woven._ Proof.: The sequence \(\{T^{*}S_{F}^{-1}F_{\varsigma}+v_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) is an approximate dual of \(F\), where \(V=\{v_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) is a Bessel sequence satisfying (2.3). Also, by Proposition 2.4, \(F\) and \(\{T^{*}S_{F}^{-1}F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) are continuous woven with the universal lower bound \((\sqrt{A_{F}}-\sqrt{B_{F}}\|I-T^{*}S_{F}^{-1}\|)^{2}\). Let \(\varepsilon>0\) be such that \[\varepsilon^{2}\|T_{V}\|^{2}+2\varepsilon\|T_{V}\|\ \|S_{F}^{-1}T\|\sqrt{B_{F}} <(\sqrt{A_{F}}-\sqrt{B_{F}}\|I-T^{*}S_{F}^{-1}\|)^{2}.\] Then \(\Phi_{\beta}=\{T^{*}S_{F}^{-1}F_{\varsigma}+\beta v_{\varsigma}\}_{\varsigma \in\mathfrak{A}}\) is an approximate dual frame of \(F\) for all \(0<\beta<\varepsilon\). Arguing as in the proof of Theorem 3.1, we obtain that \(F\) and \(\Phi_{\beta}\) are woven continuous frames. **Theorem 3.3**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a continuous frame for \(\mathcal{H}\). Then, the following assertions hold:_ 1. _If_ \(G=\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) _is a dual frame of_ \(F\) _and_ \(\{F_{\varsigma}\}_{\varsigma\in K}\cup\{G_{\varsigma}\}_{\varsigma\in K^{C}}\) _is a continuous frame sequence for every_ \(K\subset\mathfrak{A}\)_, then_ \(F\) _and_ \(G\) _are continuous woven._ 2. _If_ \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) _is a continuous Riesz basis for_ \(\mathcal{H}\)_, then_ \(F\) _and its canonical dual are continuous woven._ Proof.: Let \(f\in\mathcal{H}\) be such that \(f\bot\{F_{\varsigma}\}_{\varsigma\in K}\cup\{G_{\varsigma}\}_{\varsigma\in K ^{C}}\).
Then, we obtain \[\|f\|^{2}=\langle f,f\rangle =\langle f,\int_{\mathfrak{A}}\langle f,G_{\varsigma}\rangle F_{ \varsigma}d\mu(\varsigma)\rangle\] \[=\langle f,\int_{K}\langle f,G_{\varsigma}\rangle F_{\varsigma}d \mu(\varsigma)\rangle+\langle f,\int_{K^{C}}\langle f,G_{\varsigma}\rangle F_{ \varsigma}d\mu(\varsigma)\rangle\] \[=\int_{K}\overline{\langle f,G_{\varsigma}\rangle}\langle f,F_{\varsigma} \rangle d\mu(\varsigma)+\int_{K^{C}}\overline{\langle f,G_{\varsigma}\rangle}\langle f,F_ {\varsigma}\rangle d\mu(\varsigma)\] \[=0.\] Hence \(f=0\), and consequently \(\overline{span}(\{F_{\varsigma}\}_{\varsigma\in K}\cup\{G_{\varsigma}\}_{ \varsigma\in K^{C}})=\mathcal{H}\) for every \(K\subset\mathfrak{A}\). Since each weaving is a continuous frame sequence by assumption, it follows that \(F\) and \(G\) are weakly continuous woven, and hence continuous woven by Theorem 2.2. For (2), let \(K\subset\mathfrak{A}\), \(X=\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\in\overline{span}\{F_ {\varsigma}\}_{\varsigma\in K}\) and \(Y=\int_{K^{C}}\beta_{\varsigma}S_{F}^{-1}F_{ \varsigma}d\mu(\varsigma)\in\overline{span}\{S_{F}^{-1}F_{\varsigma}\}_{ \varsigma\in K^{C}}\) with \(\|X\|=1\) and \(\|Y\|=1\). Then, we have \[\|X-Y\|^{2} =\|\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)-\int_{K^{C} }\beta_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)\|^{2}\] \[=\|\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\|^{2}+\| \int_{K^{C}}\beta_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)\|^{2}-2Re \langle\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma),\int_{K^{C}} \beta_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)\rangle\] \[=\|\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\|^{2}+\| \int_{K^{C}}\beta_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)\|^{2}\geq 1,\] where the cross term vanishes by the biorthogonality of the continuous Riesz basis \(F\) and its canonical dual, since \(K\) and \(K^{C}\) are disjoint. Thus, \(F\) and its canonical dual are continuous woven by Theorem 2.6. **Example 3.4**.: Let \(\mathcal{H}=L^{2}(\mathbb{R})\) and \(\mathfrak{A}=(\mathbb{R}^{2},\mu)\), where \(\mu\) is the Lebesgue measure. Let \(\chi_{I}\) denote the characteristic function of a set \(I\). Let \(\phi_{1},\phi_{2}\) be non-zero elements of \(\mathcal{H}\) such that \(\phi_{2}=2\phi_{1}\), and suppose that the families \(\{I_{x}T_{y}\phi_{1}\}_{(x,y)\in\mathfrak{A}}\) and \(\{I_{x}T_{y}\phi_{2}\}_{(x,y)\in\mathfrak{A}}\) are continuous frames for \(\mathcal{H}\) with respect to \(\mu\) with frame bounds \(A_{i}\) and \(B_{i}\) for \(i\in\{1,2\}\). For any subset \(K\) of \(\mathfrak{A}\) and for \(f\in\mathcal{H}\), we define the function \(\psi:\mathfrak{A}\to\mathbb{C}\) by \[\psi(x,y)=\psi_{1}(x,y)\chi_{K}(x,y)+\psi_{2}(x,y)\chi_{K^{C}}(x,y),\] where \(\psi_{i}(x,y)=\langle f,I_{x}T_{y}\phi_{i}\rangle\), \(i\in\{1,2\}\). Writing \(\phi:=\phi_{1}\), the family \(\{I_{x}T_{y}\phi\}_{(x,y)\in K}\cup\{I_{x}T_{y}2\phi\}_{(x,y)\in K^{C}}\) is a continuous Bessel sequence with Bessel bound \(\sum_{i\in\{1,2\}}B_{i}\) by Proposition 2.3. We obtain \[\|\psi\|_{L^{2}(\mu)}^{2} =\int\int_{K}|\langle f,I_{x}T_{y}\phi\rangle|^{2}dxdy+\int\int_{ K^{C}}|\langle f,I_{x}T_{y}2\phi\rangle|^{2}dxdy\] \[\geq\int\int_{K}|\langle f,I_{x}T_{y}\phi\rangle|^{2}dxdy+\int \int_{K^{C}}|\langle f,I_{x}T_{y}\phi\rangle|^{2}dxdy\] \[=\int\int_{\mathfrak{A}}|\langle f,I_{x}T_{y}\phi\rangle|^{2}dxdy\] \[\geq A_{1}\|f\|^{2}.\] Hence \(\{I_{x}T_{y}\phi_{1}\}_{(x,y)\in\mathfrak{A}}\) and \(\{I_{x}T_{y}\phi_{2}\}_{(x,y)\in\mathfrak{A}}\) are woven with universal bounds \(A_{1}\) and \(\sum_{i\in\{1,2\}}B_{i}\). **Corollary 3.5**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a continuous frame for a finite dimensional Hilbert space \(\mathcal{H}\).
Then, \(F\) is continuous woven with all its duals._ Proof.: Suppose that \(G=\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) is an arbitrary dual continuous frame of \(F\); then the family \(\{F_{\varsigma}\}_{\varsigma\in K}\cup\{G_{\varsigma}\}_{\varsigma\in K^{C}}\) is a continuous frame sequence for every \(K\subset\mathfrak{A}\). Using Theorem 3.3, we conclude that \(F\) and \(G\) are continuous woven. In the next theorem we show that in infinite dimensional Hilbert spaces, a continuous frame whose redundant elements are small enough in norm is continuous woven with its canonical dual. **Theorem 3.6**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a continuous frame for \(\mathcal{H}\) such that the norm of its redundant elements is small enough. Then, \(F\) is continuous woven with its canonical dual._ Proof.: Without loss of generality, we can write \(F=\{F_{\varsigma}\}_{\varsigma\in K}\cup\{F_{\varsigma}\}_{\varsigma\in K^{C}}\), where \(K\subset\mathfrak{A}\) and \(\{F_{\varsigma}\}_{\varsigma\in K}\) is a continuous Riesz basis for \(\mathcal{H}\); the elements indexed by \(K^{C}\) are the redundant elements, for which we assume \[\int_{K^{C}}\|F_{\varsigma}\|^{2}d\mu(\varsigma)<\sqrt{\frac{A_{F}}{B_{F}}}.\] By Theorem 3.3, \(\{F_{\varsigma}\}_{\varsigma\in K}\) and its canonical dual are continuous woven; hence \(F\) and \(S_{F}^{-1}F\) are continuous woven with the universal lower bound \(A_{F}\). Then, \(F\) is continuous woven with its canonical dual. **Theorem 3.7**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma}\) and \(G=\{G_{\varsigma}\}_{\varsigma}\) be two continuous frames for \(\mathcal{H}\). The following assertions hold:_ 1. _If_ \(S_{F}^{-1}\geq I\) _and_ \(S_{F}S_{F_{K}}=S_{F_{K}}S_{F}\) _for all_ \(K\subset\mathfrak{A}\)_, then_ \(\{F_{\varsigma}\}_{\varsigma}\) _and_ \(\{S_{F}^{-1}F_{\varsigma}\}_{\varsigma}\) _are continuous woven._ 2. _If_ \(F\) _and_ \(G\) _are two woven continuous Riesz bases and_ \(T_{1}\) _and_ \(T_{2}\) _are invertible operators so that_ \[d_{FK,GK^{C}}>\max\{\|T_{1}-T_{2}\|\;\|T_{1}^{-1}\|,\|T_{1}-T_{2}\|\;\|T_{2}^{- 1}\|\}\] _where \(d_{FK,GK^{C}}\) is defined as in Theorem 2.6, then \(T_{1}F\) and \(T_{2}G\) are continuous woven._ Proof.: For (1), consider the weaving \(\mathcal{F}_{K}=\{F_{\varsigma}\}_{\varsigma\in K}\cup\{S_{F}^{-1}F_{\varsigma}\}_{ \varsigma\in K^{C}}\). Then \(\mathcal{F}_{K}\) is a Bessel sequence for all \(K\subset\mathfrak{A}\), and \[S_{\mathcal{F}_{K}}f =\int_{K}\langle f,F_{\varsigma}\rangle F_{\varsigma}d\mu( \varsigma)+\int_{K^{C}}\langle f,S_{F}^{-1}F_{\varsigma}\rangle S_{F}^{-1}F_{ \varsigma}d\mu(\varsigma)\] \[=S_{F_{K}}f+S_{F}^{-1}S_{F_{K^{C}}}S_{F}^{-1}f\] \[=S_{F}f-S_{F_{K^{C}}}f+S_{F}^{-1}S_{F_{K^{C}}}S_{F}^{-1}f\] \[=S_{F}f+(S_{F}^{-1}-I)S_{F_{K^{C}}}(I+S_{F}^{-1})f,\;\;\;\forall f \in\mathcal{H},\] where the last equality uses the commutativity assumption. Since \((S_{F}^{-1}-I)S_{F_{K^{C}}}(I+S_{F}^{-1})\) is a positive operator, we obtain \[S_{\mathcal{F}_{K}}\geq S_{F}.\] Then, \(T_{\mathcal{F}_{K}}^{*}\) is injective and \(\mathcal{F}_{K}\) is a continuous frame for all \(K\). Hence, (1) is obtained.
For (2), let \(f=\int_{K}\alpha_{\varsigma}T_{1}F_{\varsigma}d\mu(\varsigma)\) and \(g=\int_{K^{C}}\beta_{\varsigma}T_{2}G_{\varsigma}d\mu(\varsigma)\) with \(\|g\|=1\). Then \[\|f-g\| =\|\int_{K}\alpha_{\varsigma}T_{1}F_{\varsigma}d\mu(\varsigma)- \int_{K^{C}}\beta_{\varsigma}T_{2}G_{\varsigma}d\mu(\varsigma)\|\] \[=\|\int_{K}\alpha_{\varsigma}T_{1}F_{\varsigma}d\mu(\varsigma)- \int_{K^{C}}\beta_{\varsigma}T_{1}G_{\varsigma}d\mu(\varsigma)+\int_{K^{C}} \beta_{\varsigma}T_{1}G_{\varsigma}d\mu(\varsigma)-\int_{K^{C}}\beta_{\varsigma }T_{2}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|T_{1}(\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma) -\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma))\|-\|(T_{1}-T_{2}) \int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|T_{1}^{-1}\|^{-1}\|\int_{K^{C}}\beta_{\varsigma}G_{\varsigma }d\mu(\varsigma)\|\ \|\frac{\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)}{\|\int_{K^{C }}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|}-\frac{\int_{K^{C}}\beta_{ \varsigma}G_{\varsigma}d\mu(\varsigma)}{\|\int_{K^{C}}\beta_{\varsigma}G_{ \varsigma}d\mu(\varsigma)\|}\|\] \[-\|T_{1}-T_{2}\|\ \|\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu( \varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|T_{1}^{-1}\|^{-1}-\|T_{1}-T_{2}\|)\ \|\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|T_{1}^{-1}\|^{-1}-\|T_{1}-T_{2}\|)\|T_{2}\|^{-1}>0.\] If \(\|f\|=1\), then we obtain \[\|f-g\| =\|\int_{K}\alpha_{\varsigma}T_{1}F_{\varsigma}d\mu(\varsigma)- \int_{K}\alpha_{\varsigma}T_{2}F_{\varsigma}d\mu(\varsigma)+\int_{K}\alpha_{ \varsigma}T_{2}F_{\varsigma}d\mu(\varsigma)-\int_{K^{C}}\beta_{\varsigma}T_{2} G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|T_{2}(\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma) -\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma))\|-\|(T_{1}-T_{2}) \int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|T_{2}^{-1}\|^{-1}-\|T_{1}-T_{2}\|)\|T_{1}\|^{- 1}>0.\] By considering \[d_{1}=(d_{FK,GK^{C}}\|T_{1}^{-1}\|^{-1}-\|T_{1}-T_{2}\|)\|T_{2}\|^{-1}\] and \[d_{2}=(d_{FK,GK^{C}}\|T_{2}^{-1}\|^{-1}-\|T_{1}-T_{2}\|)\|T_{1}\|^{-1},\] we obtain \(d_{T_{1}FK,T_{2}GK^{C}}\geq\min\{d_{1},d_{2}\}>0\). Thus, the result is obtained by Theorem 2.6. A consequence of the above theorem is that, under a suitable distance condition, the canonical duals of two woven continuous Riesz bases are continuous woven.
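Before stating the corollary, here is a small numerical sketch (ours, in the finite counting-measure setting) of the weaving mechanism used above: for an invertible \(T\) close to the identity, every weaving of \(F\) and \(TF\) is a frame with lower bound at least \((\sqrt{A_{F}}-\sqrt{B_{F}}\,\|I-T\|)^{2}\), as guaranteed by Proposition 2.4.

```python
import itertools
import numpy as np

# Finite-dimensional sketch (counting measure, so integrals are sums): if
# ||I - T||^2 < A_F/B_F, then F and TF are woven with universal lower bound
# (sqrt(A_F) - sqrt(B_F)*||I - T||)^2. The frame and T below are our choices.
Q = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
F = np.vstack([np.eye(3), Q])             # rows are the frame vectors
A_F, B_F = np.linalg.eigvalsh(F.T @ F)[[0, -1]]   # S_F = 2I, so A_F = B_F = 2

T = np.eye(3) + 0.1 * np.diag([1.0, -1.0, 0.5])   # invertible, close to I
dT = np.linalg.norm(np.eye(3) - T, 2)             # spectral norm ||I - T||
assert dT**2 < A_F / B_F

bound = (np.sqrt(A_F) - np.sqrt(B_F) * dT) ** 2
worst = np.inf
for mask in itertools.product([False, True], repeat=len(F)):
    # one weaving: keep F_i on part of the index set, replace by T F_i elsewhere
    W = np.array([T @ f if m else f for f, m in zip(F, mask)])
    worst = min(worst, np.linalg.eigvalsh(W.T @ W)[0])
print(f"worst weaving lower bound {worst:.3f} >= guaranteed {bound:.3f}")
```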
**Corollary 3.8**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma}\) and \(G=\{G_{\varsigma}\}_{\varsigma}\) be two Riesz bases for \(\mathcal{H}\), so that for every \(K\subset\mathfrak{A}\)_ \[d_{FK,GK^{C}}>\max\{\|S_{F}^{-1}-S_{G}^{-1}\|\ \|S_{F}\|,\|S_{F}^{-1}-S_{G}^{-1}\|\ \|S_{G}\|\}.\] _Then, \(S_{F}^{-1}F\) and \(S_{G}^{-1}G\) are also continuous woven._ Proof.: By Theorem 3.7, we consider \(f=\int_{K}\alpha_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)\) and \(g=\int_{K^{C}}\beta_{\varsigma}S_{G}^{-1}G_{\varsigma}d\mu(\varsigma)\) with \(\|g\|=1\). Then \[\|f-g\| =\|\int_{K}\alpha_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)- \int_{K^{C}}\beta_{\varsigma}S_{G}^{-1}G_{\varsigma}d\mu(\varsigma)\|\] \[=\|\int_{K}\alpha_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma)- \int_{K^{C}}\beta_{\varsigma}S_{F}^{-1}G_{\varsigma}d\mu(\varsigma)+\int_{K^{C} }\beta_{\varsigma}S_{F}^{-1}G_{\varsigma}d\mu(\varsigma)-\int_{K^{C}}\beta_{ \varsigma}S_{G}^{-1}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|S_{F}^{-1}(\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu( \varsigma)-\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma))\|-\|(S_ {F}^{-1}-S_{G}^{-1})\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|S_{F}\|^{-1}\|\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu (\varsigma)\|\ \|\frac{\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)}{\|\int_{K^{ C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|}-\frac{\int_{K^{C}}\beta_{ \varsigma}G_{\varsigma}d\mu(\varsigma)}{\|\int_{K^{C}}\beta_{\varsigma}G_{ \varsigma}d\mu(\varsigma)\|}\|\] \[-\|S_{F}^{-1}-S_{G}^{-1}\|\ \|\int_{K^{C}}\beta_{\varsigma}G_{ \varsigma}d\mu(\varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|S_{F}\|^{-1}-\|S_{F}^{-1}-S_{G}^{-1}\|)\ \|\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|S_{F}\|^{-1}-\|S_{F}^{-1}-S_{G}^{-1}\|)\|S_{G }^{-1}\|^{-1}>0.\] If \(\|f\|=1\), then we obtain \[\|f-g\| =\|\int_{K}\alpha_{\varsigma}S_{F}^{-1}F_{\varsigma}d\mu(\varsigma )-\int_{K}\alpha_{\varsigma}S_{G}^{-1}F_{\varsigma}d\mu(\varsigma)+\int_{K} \alpha_{\varsigma}S_{G}^{-1}F_{\varsigma}d\mu(\varsigma)-\int_{K^{C}}\beta_{ \varsigma}S_{G}^{-1}G_{\varsigma}d\mu(\varsigma)\|\] \[\geq\|S_{G}^{-1}(\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu( \varsigma)-\int_{K^{C}}\beta_{\varsigma}G_{\varsigma}d\mu(\varsigma))\|-\|(S_ {F}^{-1}-S_{G}^{-1})\int_{K}\alpha_{\varsigma}F_{\varsigma}d\mu(\varsigma)\|\] \[\geq(d_{FK,GK^{C}}\|S_{G}\|^{-1}-\|S_{F}^{-1}-S_{G}^{-1}\|)\|S_{F }^{-1}\|^{-1}>0.\] By considering \[d_{1} =(d_{FK,GK^{C}}\|S_{F}\|^{-1}-\|S_{F}^{-1}-S_{G}^{-1}\|)\|S_{G}^{-1}\|^{-1},\] \[d_{2} =(d_{FK,GK^{C}}\|S_{G}\|^{-1}-\|S_{F}^{-1}-S_{G}^{-1}\|)\|S_{F}^{-1}\|^{-1},\] we obtain \(d_{S_{F}^{-1}FK,S_{G}^{-1}GK^{C}}\geq\min\{d_{1},d_{2}\}>0\). Thus, the result is obtained. In the following, we obtain some invertible operators \(T\) for which \(F\) and \(TF\) are woven continuous frames, and we give some conditions under which continuous frames and their perturbations constitute woven continuous frames. Let \(e=\{e_{i}\}_{i=1}^{n}\) and \(h=\{h_{i}\}_{i=1}^{n}\) be orthonormal bases for \(\mathcal{H}_{n}\). Also, let \(a=\{a_{i}\}_{i=1}^{n}\) and \(b=\{b_{i}\}_{i=1}^{n}\) be sequences of positive constants.
An operator \(T:\mathcal{H}_{n}\rightarrow\mathcal{H}_{n}\) is called admissible for \((e,h,a,b)\) if there exists an orthonormal basis \(\{\lambda_{i}\}_{i=1}^{n}\) for \(\mathcal{H}_{n}\) satisfying \[T^{*}e_{i}=\sum_{j=1}^{n}\sqrt{\frac{a_{j}}{b_{j}}}\langle e_{i},h_{j}\rangle h_{j},\quad i=1,2,...,n.\] If \(T\) is such an operator and \(F\) is a continuous frame for \(\mathcal{H}_{n}\), then \(TF\) is a continuous frame for \(\mathcal{H}_{n}\) with the frame operator \(S_{TF}=TS_{F}T^{*}\). By considering \(\mathcal{F}_{K}=\{F_{j}\}_{j\in K}\cup\{TF_{j}\}_{j\in K^{C}}\), we obtain \[S_{\mathcal{F}_{K}}=S_{F_{K}}+S_{TF_{K^{C}}}\ \text{for}\ K\subset\{1,2,...,n\},\] where \(S_{F_{K}}\) and \(S_{TF_{K^{C}}}\) are the continuous frame operators of \(\{F_{j}\}_{j\in K}\) and \(\{TF_{j}\}_{j\in K^{C}}\), respectively. Then, \(S_{\mathcal{F}_{K}}\), the continuous frame operator of \(\mathcal{F}_{K}\), is a positive and invertible operator on \(\mathcal{H}_{n}\), and so \(F\) and \(TF\) are continuous woven. **Theorem 3.9**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) and \(G=\{G_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be two continuous frames such that for all sequences of scalars \(\{\alpha_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\), we have_ \[\|\int_{\mathfrak{A}}\alpha_{\varsigma}(F_{\varsigma}-G_{\varsigma})d\mu( \varsigma)\|\leq a\|\int_{\mathfrak{A}}\alpha_{\varsigma}F_{\varsigma}d\mu( \varsigma)\|+b\|\int_{\mathfrak{A}}\alpha_{\varsigma}G_{\varsigma}d\mu( \varsigma)\|+c(\int_{\mathfrak{A}}|\alpha_{\varsigma}|^{2}d\mu(\varsigma))^{ \frac{1}{2}}\] _for some positive numbers \(a,b,c\) such that_ \[a\sqrt{B_{F}}+b\sqrt{B_{G}}+c<\sqrt{A_{F}}.\] _Then, \(F\) and \(G\) are continuous woven._ Proof.: Consider \(T_{FK^{C}}\) and \(T_{GK^{C}}\) as the synthesis operators of the Bessel sequences \(\{F_{\varsigma}\}_{\varsigma\in K^{C}}\) and \(\{G_{\varsigma}\}_{\varsigma\in K^{C}}\), respectively. Then, for every \(K\subset\mathfrak{A}\) and \(f\in\mathcal{H}\), we have \[(\int_{K}|\langle f,F_{\varsigma}\rangle|^{2}d\mu(\varsigma)+ \int_{K^{C}}|\langle f,G_{\varsigma}\rangle|^{2}d\mu(\varsigma))^{1/2}\] \[=(\int_{K}|\langle f,F_{\varsigma}\rangle|^{2}d\mu(\varsigma)+ \int_{K^{C}}|\langle f,F_{\varsigma}\rangle-\langle f,F_{\varsigma}-G_{ \varsigma}\rangle|^{2}d\mu(\varsigma))^{1/2}\] \[\geq(\int_{\mathfrak{A}}|\langle f,F_{\varsigma}\rangle|^{2}d\mu(\varsigma) )^{1/2}-(\int_{K^{C}}|\langle f,F_{\varsigma}-G_{\varsigma}\rangle|^{2}d\mu( \varsigma))^{1/2}\] \[\geq\sqrt{A_{F}}\|f\|-\|T_{FK^{C}}^{*}f-T_{GK^{C}}^{*}f\|\] \[\geq(\sqrt{A_{F}}-\|T_{FK^{C}}-T_{GK^{C}}\|)\|f\|\] \[\geq(\sqrt{A_{F}}-a\|T_{F}\|-b\|T_{G}\|-c)\|f\|\] \[\geq(\sqrt{A_{F}}-a\sqrt{B_{F}}-b\sqrt{B_{G}}-c)\|f\|.\] Since \(\sqrt{A_{F}}-a\sqrt{B_{F}}-b\sqrt{B_{G}}-c>0\), the lower bound is obtained. Clearly, \(\{F_{\varsigma}\}_{\varsigma\in K}\cup\{G_{\varsigma}\}_{\varsigma\in K^{C}}\) is Bessel with an upper bound \(B_{F}+B_{G}\). Applying Theorem 3.9, we obtain the following results. **Corollary 3.10**.: _Let \(F=\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a continuous frame for \(\mathcal{H}\) and \(0\neq f\in\mathcal{H}\). Also, let \(\{a_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) be a sequence of scalars so that_ \[\int_{\mathfrak{A}}|a_{\varsigma}|^{2}d\mu(\varsigma)<b\frac{A_{F}}{\|f\|^{2}},\] _for some \(b<1\). Then, \(\{F_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) and \(\{F_{\varsigma}+a_{\varsigma}f\}_{\varsigma\in\mathfrak{A}}\) are continuous woven._ Proof.: We have that \(\{F_{\varsigma}+a_{\varsigma}f\}_{\varsigma\in\mathfrak{A}}\) is a Bessel sequence with the upper bound \((\sqrt{B_{F}}+\|\{a_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\|\;\|f\|)^{2}\).
And for any sequence \(\{\alpha_{\varsigma}\}_{\varsigma\in\mathfrak{A}}\) of scalars, \[\|\int_{\mathfrak{A}}\alpha_{\varsigma}(F_{\varsigma}+a_{\varsigma }f-F_{\varsigma})d\mu(\varsigma)\| =\|\int_{\mathfrak{A}}\alpha_{\varsigma}a_{\varsigma}fd\mu( \varsigma)\|\] \[\leq\int_{\mathfrak{A}}|\alpha_{\varsigma}|\;|a_{\varsigma}|\;\|f \|d\mu(\varsigma)\] \[\leq(\int_{\mathfrak{A}}|\alpha_{\varsigma}|^{2}d\mu(\varsigma) )^{1/2}(\int_{\mathfrak{A}}|a_{\varsigma}|^{2}d\mu(\varsigma))^{1/2}\|f\|\] \[<\sqrt{bA_{F}}(\int_{\mathfrak{A}}|\alpha_{\varsigma}|^{2}d\mu( \varsigma))^{1/2}.\] The result follows by Theorem 3.9, applied with the first two constants equal to zero and the third constant \(\sqrt{bA_{F}}<\sqrt{A_{F}}\). ## Declarations ### Availability of data and materials Not applicable. ### Competing interests The authors declare that they have no competing interests. ### Funding The authors declare that no funding was received for this article. ### Authors' contributions The authors equally conceived of the study, participated in its design and coordination, drafted the manuscript, and read and approved the final manuscript.
2307.05844
Self-consistent interaction of linear gravitational and electromagnetic waves in non-magnetized plasma
This paper explores the hybridization of linear metric perturbations with linear electromagnetic (EM) perturbations in non-magnetized plasma for a general background metric. The local wave properties are derived from first principles for inhomogeneous plasma, without assuming any symmetries of the background metric. First, we derive the effective (``oscillation-center'') Hamiltonian that governs the average dynamics of plasma particles in a prescribed quasimonochromatic wave that involves metric perturbations and EM fields simultaneously. Then, using this Hamiltonian, we derive the backreaction of plasma particles on the wave itself and obtain gauge-invariant equations that describe the resulting self-consistent gravito-electromagnetic (GEM) waves in a plasma. The transverse tensor modes of gravitational waves are found to have no interaction with the plasma and the EM modes in the geometrical-optics limit. However, for ``longitudinal" GEM modes with large values of the refraction index, the interplay between gravitational and EM interactions in plasma can have a strong effect. In particular, the dispersion relation of the Jeans mode is significantly affected by electrostatic interactions. As a spin-off, our calculation also provides an alternative resolution of the so-called Jeans swindle.
Deepen Garg, I. Y. Dodin
2023-07-11T23:35:21Z
http://arxiv.org/abs/2307.05844v2
# Self-consistent interaction of linear gravitational and electromagnetic waves in non-magnetized plasma ###### Abstract This paper explores the hybridization of linear gravitational waves with linear electromagnetic (EM) waves in non-magnetized plasma. The local wave properties are derived from first principles for inhomogeneous plasma, without assuming any symmetries of the background metric. First, we derive the effective ("oscillation-center") Hamiltonian that governs the average dynamics of plasma particles in a prescribed quasimonochromatic wave that involves spacetime-metric perturbations and EM fields simultaneously. Then, using this Hamiltonian, we derive the backreaction of plasma particles on the wave itself and obtain gauge-invariant equations that describe the resulting self-consistent gravito-electromagnetic (GEM) waves in a plasma. In a sufficiently dense plasma, _transverse_ GEM modes consist of modes similar to the familiar transverse EM waves in plasma and gravitational waves in vacuum, respectively. Furthermore, the shift of the gravitational-wave frequency due to plasma (as compared to the well-studied GWs in vacuum) is generally of the same order as diffraction caused by plasma's contribution to the curvature of the background spacetime; therefore, it is beyond the accuracy of the geometrical-optics (GO), or ray-optics [18], approximation, rendering the GO approximation invalid for studying such gravitational-EM wave interactions. However, for _longitudinal_ GEM modes with large values of the refraction index, the interplay between gravitational and EM interactions in plasma can have a strong effect. In particular, the dispersion relation of the Jeans mode is significantly affected by electrostatic interactions. As a spin-off, our calculation also provides an alternative resolution of the so-called Jeans swindle. ## I Introduction Detection of correlated emission of gravitational waves (GWs) and electromagnetic (EM) radiation [1; 2; 3] has ushered in a new era of multi-messenger astronomy. The idea behind this venture is to use observations of gravitational and EM signals synergistically to learn more about sources of these signals (which can range from compact object mergers [1; 2; 3] to the early Universe [4; 5; 6]) than each signal type allows individually. However, utilizing this synergy requires that one understands the interaction between the multiple messengers, i.e., GWs and EM radiation. While this interaction has been studied in the past to some extent [7; 8; 9; 10; 11; 12; 13; 14; 15; 16], it was done under the simplifying assumption that GWs in plasma mostly inherit the vacuum-GW properties. For example, the transverse tensor polarization of the vacuum GWs is assumed a priori and the backreaction of plasma on GWs is considered only to a limited degree or not at all. Thus, a self-consistent theory of GW propagation in plasma remains to be developed. Here, we present such a theory for the first time, specifically, for non-magnetized plasma. We adopt a variational approach, which makes the calculations tractable without assuming any GW properties a priori (except linearity and the short-wavelength limit). Our formulation is similar to that in our Ref. [17], where we studied GWs in neutral gases, but now we introduce the EM field as another degree of freedom. First, we derive the effective ("oscillation-center") Hamiltonian that governs the average dynamics of plasma particles in a prescribed quasimonochromatic wave that involves spacetime-metric perturbations and EM fields simultaneously.
Then, using this Hamiltonian, we derive the backreaction of plasma on the wave itself and obtain gauge-invariant equations that describe the resulting self-consistent gravito-electromagnetic (GEM) waves in a plasma. We find that, in a sufficiently dense plasma, _transverse_ GEM modes consist of modes similar to the familiar transverse EM waves in plasma and gravitational waves in vacuum, respectively. The frequency shift of these waves due to plasma (as compared to the well-studied GWs in vacuum) is generally of the same order as diffraction caused by plasma's contribution to the curvature of the background spacetime; therefore, it is beyond the accuracy of the geometrical-optics (GO), or ray-optics [18], approximation, rendering the GO approximation invalid for studying such gravitational-EM wave interactions. However, for _longitudinal_ GEM modes with large values of the refraction index, the interplay between gravitational and EM interactions in plasma can have a strong effect. In particular, the dispersion relation of the Jeans mode is significantly affected by electrostatic interactions. As a spin-off, our calculation also provides an alternative resolution of the so-called Jeans swindle [19; 20; 21], by approaching the Jeans instability rigorously from the standpoint of general relativity rather than Newtonian gravity. Our paper is organized as follows. In Sec. II, we introduce the necessary basic concepts and notation. In Sec. III, we derive the oscillation-center Hamiltonian for a charged particle in a prescribed quasimonochromatic wave that involves metric perturbation and EM four-potential simultaneously. In Sec. IV, we use this result to derive gauge-invariant linear wave equations for self-consistent oscillations of gravitational and EM fields. In Sec. V, we explore solutions of these equations in the limit of large refraction index, where GO applies. In particular, we discuss how electrostatic interactions affect the Jeans instability in plasma, and we also discuss the Jeans swindle. In Sec. VI, we study the interaction between the transverse gravitational tensor modes and the transverse EM modes. In Sec. VII, we summarize the main results of our work. ## II Preliminaries ### Notation Let us consider plasma in the presence of an EM field characterized by a four-potential \(\mathsf{A}_{\alpha}\) and metric \(\mathsf{g}_{\alpha\beta}\) on a four-dimensional spacetime with coordinates \((x^{0},x^{1},x^{2},x^{3})\equiv x\) with signature \((-+++)\). Dynamics of this system is governed by the least-action principle \[\delta S=0,\qquad S=S_{\mathrm{m}}+S_{\mathrm{EM}}+S_{\mathrm{EH}}, \tag{1}\] where \(S_{\mathrm{m}}\) is the matter action, \[S_{\mathrm{EM}}=-\frac{\varepsilon_{0}}{4}\int\mathrm{d}^{4}x\,\sqrt{- \mathsf{g}}\,\mathsf{F}^{\alpha\beta}\mathsf{F}_{\alpha\beta} \tag{2}\] is Maxwell's action, \(\mathsf{F}_{\alpha\beta}\doteq\nabla_{\alpha}\mathsf{A}_{\beta}-\nabla_{ \beta}\mathsf{A}_{\alpha}\) (the symbol \(\doteq\) denotes definitions), \(\mathsf{g}\doteq\det\mathsf{g}_{\alpha\beta}\), \[S_{\mathrm{EH}}=\frac{1}{2\kappa}\int\,\mathrm{d}^{4}x\,\sqrt{-\mathsf{g}}\, \mathsf{R} \tag{3}\] is the Einstein-Hilbert action, \(\mathsf{R}\) is the Ricci scalar, and \(\kappa\doteq 8\pi G_{\mathrm{N}}/c^{4}\). We assume the same sign convention as in Ref. [22]. We also assume geometrized Heaviside-Lorentz units, in which \[c=8\pi G_{\mathrm{N}}=\varepsilon_{0}=1.
\tag{4}\] Here, \(c\) is the speed of light, \(G_{\mathrm{N}}\) is the Newtonian constant of gravitation, and \(\varepsilon_{0}\) is the vacuum permittivity. We also assume that \(\mathsf{A}_{\alpha}\) and \(\mathsf{g}_{\alpha\beta}\) can be decomposed (in the way to be specified shortly) as follows: \[\mathsf{A}_{\alpha}=A_{\alpha}+a_{\alpha},\qquad\mathsf{g}_{\alpha\beta}=g_{ \alpha\beta}+h_{\alpha\beta}. \tag{5}\] Here, \(A_{\alpha}\) and \(g_{\alpha\beta}\) are order-one background fields with a characteristic scale \(\ell_{g}\), while \(a_{\alpha}\) and \(h_{\alpha\beta}\) are small perturbations: \[a_{\alpha}=\mathcal{O}(\mathsf{o}),\qquad h_{\alpha\beta}=\mathcal{O}( \mathbb{h}). \tag{6}\] The characteristic amplitude of the metric perturbation \(\mathbb{h}\ll 1\) is a dimensionless quantity. The characteristic amplitude of the EM four-potential \(\mathsf{o}\) is a dimensional quantity that scales linearly with \(\mathbb{h}\) in self-consistent GEM waves. We also assume that the characteristic spacetime scale \(\ell_{h}\) of these waves satisfies \[\epsilon\doteq\ell_{h}/\ell_{g}\ll 1.\] (7a) The existence of a small "GO parameter" \[\epsilon\] (which cannot always be taken for granted; see Sec. VI) allows us to introduce an intermediate scale \[\ell_{a}\] such that \[\ell_{h}\ll\ell_{a}\ll\ell_{g},\] (7b) and define the local average \[\langle\ldots\rangle\] over this scale.1 Then, the splitting (5) is specified by requiring that \[\langle a_{\alpha}\rangle=0,\qquad\langle h_{\alpha\beta}\rangle=0,\] (8) and, accordingly, \[A_{\alpha}=\langle\mathsf{A}_{\alpha}\rangle\,,\qquad g_{\alpha\beta}=\langle \mathsf{g}_{\alpha\beta}\rangle\,. \tag{9}\] In this paper, we will also assume that Footnote 1: Various averaging schemes can be used for this [23; 24; 25] and produce equivalent results [26; 27; 28; 29; 30] under the limit of scale separation (7b). For details about one possible implementation of the averaging, see Ref. [31], and a more general approach is presented in Ref. [32]. \[A_{\alpha}=0. \tag{10}\] In particular, this means that the plasma is assumed to be non-magnetized. This assumption is adopted only to streamline the demonstration of our general approach to deriving GEM modes. Generalization to nonzero \(\langle A_{\alpha}\rangle\) is conceptually straightforward and is left to future work. ### Approximate action Following the standard approach [33; 34], we also assume that the plasma responds adiabatically to gravitational and EM fields, meaning that all perturbations to plasma parameters can be unambiguously expressed through \(h_{\alpha\beta}\) and \(a_{\alpha}\) (and parameters of the unperturbed system). Then, using \[h_{\alpha\beta}=\mathcal{O}(\mathbb{h}^{1}),\qquad a_{\alpha}=\mathcal{O}( \mathbb{h}^{1}), \tag{11}\] one can represent the total action \(S\) [Eq. (1)] as a power series in \(\mathbb{h}\): \[S=\sum_{n}S^{(n)},\qquad S^{(n)}=\mathcal{O}(\mathbb{h}^{n}). \tag{12}\] To the extent that terms of the third and higher orders in \(\mathbb{h}\) are negligible [17], this yields \[S\simeq S^{(0)}[g]+S^{(2)}[g,h,a], \tag{13}\] where the square brackets denote functional arguments (whose indices are omitted for brevity) and \[S^{(2)}=S^{(2)}_{\mathrm{EM}}+S^{(2)}_{\mathrm{EH}}+S^{(2)}_{\mathrm{m}}. \tag{14}\] Note that due to the scale separation (7b), the integrands in \(S^{(n)}\) can be replaced with their averaged values. For the same reason [and the fact that the integrand in \(S^{(1)}\) has zero average due to Eq.
(8)], the action \(S^{(1)}\) does not contribute to Eq. (13). As seen easily, the second-order EM action is given by \[S^{(2)}_{\rm EM}=-\frac{1}{4}\int\mathrm{d}^{4}x\,\sqrt{-g}\,( \nabla^{\alpha}a^{\beta}-\nabla^{\beta}a^{\alpha})(\nabla_{\alpha}a_{\beta}- \nabla_{\beta}a_{\alpha}), \tag{15}\] where \(g\doteq\det g_{\alpha\beta}\) and \(\nabla\) is the covariant derivative associated with the background metric \(g_{\alpha\beta}\). (From now on, the indices of \(a_{\alpha}\) and \(h_{\alpha\beta}\) are also manipulated using the background metric, as usual.) Also, as shown in Ref. [17] and references cited therein, one has \[S^{(2)}_{\rm EH}=\int\mathrm{d}^{4}x\,\left(\mathcal{L}^{(2)}_{G }+\mathcal{L}^{(2)}_{\rm vac}\right), \tag{16}\] \[S^{(2)}_{\rm m}=\int\mathrm{d}^{4}x\,\mathcal{L}^{(2)}_{\rm m}, \tag{17}\] where \[\mathcal{L}^{(2)}_{G}\doteq\frac{\sqrt{-g}}{4}\bigg{(}-\frac{1}{2 }\,Rh^{\alpha\beta}h_{\alpha\beta}-R_{\alpha\beta}h^{\alpha\beta}h\] \[\qquad\qquad\qquad\qquad+\frac{1}{4}\,Rh^{2}+2R_{\alpha\beta}h^{ \alpha\rho}h_{\rho}{}^{\beta}\bigg{)}, \tag{18}\] \[\mathcal{L}^{(2)}_{\rm vac}\doteq\frac{\sqrt{-g}}{4}\bigg{(}- \frac{1}{2}\,\nabla^{\rho}h^{\alpha\beta}\nabla_{\rho}h_{\alpha\beta}+\frac{1 }{2}\,\nabla^{\rho}h\nabla_{\rho}h\] \[\qquad\qquad\qquad-\nabla_{\alpha}h\nabla_{\beta}h^{\alpha\beta} +\nabla^{\rho}h^{\alpha\beta}\nabla_{\alpha}h_{\beta\rho}\bigg{)}, \tag{19}\] \(R_{\alpha\beta}\) is the Ricci tensor associated with the background metric, and \(\mathcal{L}^{(2)}_{\rm m}\) is the second-order Lagrangian density of the matter. Under the assumption of scale separation (7b), one can as well replace \(\mathcal{L}^{(2)}_{\rm m}\) with its spatial average, \(\langle\mathcal{L}^{(2)}_{\rm m}\rangle\). Below, we discuss how to calculate the latter in detail. ## III Matter action ### Fluid model As a preliminary step to considering actual plasma, let us calculate the average Lagrangian density of a single relativistic fluid in prescribed fields, assuming that the fluid consists of particles with masses \(m\) and charges \(e\). As usual, spin effects are considered negligible. Then, following Ref. [35] (see also Refs. [36; 37]), the corresponding Lagrangian density can be obtained as the semiclassical limit of the Klein-Gordon Lagrangian density \[\mathcal{L}_{\rm m}=\frac{\sqrt{-\mathsf{g}}}{2m}\Big{[}-\mathsf{ g}^{\alpha\beta}\left(\partial_{\alpha}\psi^{*}+\mathrm{i}e\mathsf{A}_{\alpha} \psi^{*}\right)(\partial_{\beta}\psi-\mathrm{i}e\mathsf{A}_{\beta}\psi)\\ -m^{2}|\psi|^{2}\Big{]}, \tag{20}\] where \(\psi\) is a quantum mean field normalized such that \(\mathcal{I}\doteq|\psi|^{2}\) is the local number density. (For the correspondence between quantum and classical variational principles, see Ref. [38].) Let us express this wavefunction in the Madelung form, \(\psi=\sqrt{\mathcal{I}}\,\mathrm{e}^{\mathrm{i}\vartheta}\), where \(\vartheta\) is a real phase. In the semiclassical limit, in which \[p_{\alpha}\doteq\nabla_{\alpha}\vartheta\gg\nabla_{\alpha}\ln\mathcal{I}, \tag{21}\] Eq. (20) can be simplified as follows: \[\mathcal{L}_{\rm m}=-\sqrt{-\mathsf{g}}\,\mathcal{I}(x)H(x,\nabla\vartheta). \tag{22}\] Here, the function \(H(x,p)\) is given by \[H(x,p)=\frac{1}{2m}\,[\mathsf{g}^{\alpha\beta}(p_{\alpha}-e\mathsf{A}_{\alpha} )(p_{\beta}-e\mathsf{A}_{\beta})+m^{2}] \tag{23}\] and can be recognized as a Hamiltonian of the particle dynamics in spacetime. Let us expand \(H\) in powers of \(\mathbb{h}\) and, as before, keep terms only to \(\mathcal{O}(\mathbb{h}^{2})\).
This gives \(H\simeq H^{(0)}+H^{(1)}+H^{(2)}\), where \(H^{(n)}=\mathcal{O}(\mathbb{h}^{n})\). Using Eq. (10) and \(\mathsf{g}^{\alpha\beta}=g^{\alpha\beta}-h^{\alpha\beta}+h^{\alpha}{}_{\gamma}h ^{\gamma\beta}+\mathcal{O}(\mathbb{h}^{3})\), one readily finds that \[H^{(0)}=\frac{1}{2m}\left(g^{\alpha\beta}p_{\alpha}p_{\beta}+m^{ 2}\right), \tag{24a}\] \[H^{(1)}=-\frac{1}{2m}\,h^{\alpha\beta}p_{\alpha}p_{\beta}-\frac{e }{m}\,p^{\alpha}a_{\alpha},\] (24b) \[H^{(2)}=\frac{1}{2m}\,h^{\alpha}{}_{\gamma}h^{\gamma\beta}p_{ \alpha}p_{\beta}+\frac{e}{m}\,h^{\alpha\beta}p_{\alpha}a_{\beta}+\frac{e^{2}}{ 2m}\,g^{\alpha\beta}a_{\alpha}a_{\beta}. \tag{24c}\] Next, let us assume that the EM field and the metric perturbation are quasimonochromatic, i.e., \[h_{\alpha\beta}=\mathrm{Re}\,(\mathrm{e}^{\mathrm{i}\theta}\mathfrak{h}_{ \alpha\beta}),\quad a_{\alpha}=\mathrm{Re}\,(\mathrm{e}^{\mathrm{i}\theta} \mathfrak{a}_{\alpha}), \tag{25}\] where \(\theta\) is a rapid phase and \(\mathfrak{h}_{\alpha\beta}\) and \(\mathfrak{a}_{\alpha}\) are slow envelopes, with the local wavevector defined by \[k_{\alpha}\doteq\partial_{\alpha}\theta=\nabla_{\alpha}\theta\sim\ell_{h}^{-1}. \tag{26}\] Then, like in Ref. [35], it can be shown that \[\langle\mathcal{L}_{\rm m}\rangle=-\sqrt{-g}\,\bar{\mathcal{I}}\mathcal{H}(x,\nabla \bar{\vartheta}), \tag{27}\] where \(\sqrt{-g}\,\bar{\mathcal{I}}\doteq\langle\sqrt{-\mathsf{g}}\,\mathcal{I}\rangle\), \(\bar{\vartheta}\doteq\langle\vartheta\rangle\), and \(\mathcal{H}=H^{(0)}+\mathcal{H}^{(2)}\), with \[\mathcal{H}^{(2)}=\langle H^{(2)}\rangle-\frac{mk_{\mu}}{2}\frac{\partial}{ \partial p_{\mu}}\bigg{(}\frac{\langle(H^{(1)})^{2}\rangle}{k_{\lambda}p^{ \lambda}}\bigg{)}. \tag{28}\] (Here, we assume nonresonant particles, i.e., particles with \(k_{\lambda}p^{\lambda}\) sufficiently far from zero, so that \(\mathcal{H}^{(2)}\ll H^{(0)}\). Resonant particles can be rigorously accommodated within more comprehensive approaches, such as those in Refs. [32; 39], or simply as in homogeneous-plasma wave theory [40]. The corresponding modifications of the theory are obvious; see the end of Sec. III.) One can understand \(\mathcal{H}\) as the effective, or "oscillation-center" (OC), Hamiltonian that governs the average dynamics of particles in spacetime [32; 35; 36; 41]. (The term OC denotes a fictitious particle whose trajectory coincides with the average trajectory of the actual particle.) The corresponding Euler-Lagrange equations can be found by considering the variation of the action with respect to \(\bar{\mathcal{I}}\) and \(\bar{\vartheta}\), yielding \[\mathcal{H}(x,\nabla\bar{\vartheta})=0, \tag{29a}\] \[\frac{\partial}{\partial x^{\alpha}}\left[\sqrt{-g}\,\bar{\mathcal{I}}(x)\,\frac{\partial\mathcal{H}(x,\nabla\bar{\vartheta})}{\partial p_{ \alpha}}\right]=0, \tag{29b}\] where Eq. (29a) determines the energy-momentum relation (on-shell condition) for an OC, and Eq. (29b) represents the continuity equation for OCs.
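As a sanity check of the inverse-metric expansion \(\mathsf{g}^{\alpha\beta}=g^{\alpha\beta}-h^{\alpha\beta}+h^{\alpha}{}_{\gamma}h^{\gamma\beta}+\mathcal{O}(\mathbb{h}^{3})\) used above, the following short numerical sketch (ours, not from the paper) verifies that the residual is indeed third order in the perturbation amplitude for a flat background.

```python
import numpy as np

# Verify g^{ab} = eta^{ab} - h^{ab} + h^a_c h^{cb} + O(h^3) numerically,
# with indices raised by the flat background metric eta.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
h_dn = 1e-3 * (M + M.T) / 2                  # small symmetric h_{ab}

eta_inv = np.linalg.inv(eta)                 # equals eta for Minkowski
h_up = eta_inv @ h_dn @ eta_inv              # h^{ab}
exact = np.linalg.inv(eta + h_dn)            # full inverse metric g^{ab}
approx = eta_inv - h_up + h_up @ eta @ h_up  # expansion to second order

print(np.max(np.abs(exact - approx)))        # residual ~ 1e-9, i.e., O(h^3)
```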
A direct calculation shows that \[\mathcal{H}^{(2)}(x,p) =\frac{\left<h_{\alpha\beta}h_{\gamma\delta}\right>}{2m}\,g^{ \beta\gamma}p^{\alpha}p^{\delta}\] \[\quad+\frac{e}{m}\left<a_{\alpha}h_{\beta\gamma}\right>g^{\alpha \gamma}p^{\beta}\] \[\quad+\frac{e^{2}}{2m}\left<a_{\alpha}a_{\beta}\right>g^{\alpha\beta}\] \[\quad-\left<h_{\alpha\beta}h_{\gamma\delta}\right>\frac{k_{\mu}} {8m}\frac{\partial}{\partial p_{\mu}}\bigg{(}\frac{p^{\alpha}p^{\beta}p^{ \gamma}p^{\delta}}{k_{\lambda}p^{\lambda}}\bigg{)}\] \[\quad-e\left<a_{\alpha}h_{\beta\gamma}\right>\frac{k_{\mu}}{2m} \frac{\partial}{\partial p_{\mu}}\bigg{(}\frac{p^{\alpha}p^{\beta}p^{\gamma}} {k_{\lambda}p^{\lambda}}\bigg{)}\] \[\quad-e^{2}\left<a_{\alpha}a_{\beta}\right>\frac{k_{\mu}}{2m} \frac{\partial}{\partial p_{\mu}}\bigg{(}\frac{p^{\alpha}p^{\beta}}{k_{\lambda }p^{\lambda}}\bigg{)}. \tag{30}\] Then, the part of \(\left<\mathcal{L}_{\text{m}}\right>\) that is of the second order in the wave amplitude can be expressed as follows: \[\left<\mathcal{L}_{\text{m}}^{(2)}\right>=-\sqrt{-g}N\Phi, \tag{31}\] where \(N\doteq\bar{\mathcal{I}}p^{0}/m\) and \(\Phi\doteq m\mathcal{H}^{(2)}/p^{0}\). The function \(N\) can be understood as the OC number density2 but, within linear-wave theory, does not need to be distinguished from the unperturbed number density of the fluid. The function \(\Phi\) is understood as the second-order part of OC's _spatial_-dynamics Hamiltonian (as opposed to the spacetime-dynamics Hamiltonian), or, loosely, the "ponderomotive potential" [35]. Using Eq. (30), it can also be written as Footnote 2: To the leading order, Eq. (29b) can be viewed as the continuity equation for \(N\), where we ignored \(\mathcal{O}(\mathbb{h}^{2})\) terms and replaced \(\partial\mathcal{H}/\partial p_{\alpha}\) with \(p^{\alpha}/m\) (24a). \[\Phi =\frac{1}{2p^{0}}\left<h_{\alpha\beta}h_{\gamma\delta}\right>\ \left[g^{\beta\gamma}p^{\alpha}p^{\delta}-\frac{k_{\mu}}{4}\frac{\partial}{ \partial p_{\mu}}\left(\frac{p^{\alpha}p^{\beta}p^{\gamma}p^{\delta}}{k_{ \lambda}p^{\lambda}}\right)\right]\] \[\quad+\frac{e}{p^{0}}\left<a_{\alpha}h_{\beta\gamma}\right>\left[g^{\alpha\gamma}p^{\beta}-\frac{k_{\mu}}{2}\frac{\partial}{\partial p_{\mu}}\left(\frac{p^{\alpha}p^{\beta}p^{\gamma}}{k_{\lambda}p^{\lambda}}\right)\right]\] \[\quad+\frac{e^{2}}{2p^{0}}\left<a_{\alpha}a_{\beta}\right>\left[g^{\alpha\beta}-k_{\mu}\frac{\partial}{\partial p_{\mu}}\bigg{(} \frac{p^{\alpha}p^{\beta}}{k_{\lambda}p^{\lambda}}\bigg{)}\right], \tag{32}\] a form that will be particularly convenient below. ### General case The general case, when plasma consists of multiple species with general distributions of momenta, can be considered as the case of multiple fluids that contribute to \(\left<\mathcal{L}_{\text{m}}^{(2)}\right>\) independently. Suppose particles of type \(s\), with masses \(m_{s}\) and charges \(e_{s}\), are characterized by a distribution function \(f_{s}\). (A single particle corresponds to a delta-shaped \(f_{s}\).) As usual in plasma theory, we assume that this function is normalized such that \[\int\mathrm{d}\mathbf{p}\,f_{s}(x,\mathbf{p})=N_{s}(x) \tag{33}\] (here, \(x\equiv(t,\mathbf{x})\) is the four-dimensional spacetime coordinate, \(t\) is time, \(\mathbf{x}\) is the three-dimensional spatial coordinate, and \(\mathbf{p}\) is the three-dimensional spatial momentum), or equivalently, \[\int\frac{\mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})=\frac{\rho_{s}(x)}{m_{s}^{ 2}}, \tag{34}\] where \(\rho_{s}(x)\) is the local proper mass density. Then, \[\left<\mathcal{L}_{\text{m}}^{(2)}\right> =-\sqrt{-g}\sum_{s}N_{s}\Phi_{s}\] \[=-\sqrt{-g}\sum_{s}\int\mathrm{d}\mathbf{p}\,\Phi_{s}f_{s}(x,\mathbf{p}). \tag{35}\] Now, using Eq.
(32), one obtains \[S_{\text{m}}^{(2)}= -\frac{1}{2}\sum_{s}\int\mathrm{d}^{4}x\,\sqrt{-g}\int\frac{ \mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})\] \[\times\bigg{\{}h_{\alpha\beta}h_{\gamma\delta}\left[g^{\beta\gamma }p^{\alpha}p^{\delta}-\frac{k_{\mu}}{4}\frac{\partial}{\partial p_{\mu}} \left(\frac{p^{\alpha}p^{\beta}p^{\gamma}p^{\delta}}{k_{\lambda}p^{\lambda}} \right)\right]\] \[\quad+2e_{s}a_{\alpha}h_{\beta\gamma}\left[g^{\alpha\gamma}p^{ \beta}-\frac{k_{\mu}}{2}\frac{\partial}{\partial p_{\mu}}\left(\frac{p^{ \alpha}p^{\beta}p^{\gamma}}{k_{\lambda}p^{\lambda}}\right)\right]\] \[\quad+e_{s}^{2}a_{\alpha}a_{\beta}\left[g^{\alpha\beta}-k_{\mu} \frac{\partial}{\partial p_{\mu}}\bigg{(}\frac{p^{\alpha}p^{\beta}}{k_{\lambda }p^{\lambda}}\bigg{)}\right]\bigg{\}}, \tag{36}\] where the space averaging on the right-hand side is dropped as it has no impact on the integral within the GO limit. Also note that the first term in the third line above (\(\propto g^{\alpha\gamma}p^{\beta}\)) provides zero contribution to \(S_{\text{m}}^{(2)}\), because the assumed absence of background EM fields [Eq. (10)] implies neutrality of the background plasma and the absence of background currents, i.e., \[\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})\,e_{s}\,p^{\alpha}=0. \tag{37}\] ### Dispersion functions Let us introduce the following inner product for any pair of fields \(u_{1}\) and \(u_{2}\) on the background space: \[\left<u_{1},u_{2}\right>=\int\mathrm{d}^{4}x\,\sqrt{-g}\,u_{1}(x)u_{2}(x). \tag{38}\] Using this notation, the matter action (36) can also be expressed as follows: \[S_{\mathrm{m}}^{(2)} =\frac{1}{2}\Big{(}\left<h^{\alpha\beta},D_{\alpha\beta\gamma\delta}^{ \mathrm{mG}}h^{\gamma\delta}\right>\] \[\quad+2\left<a^{\alpha},D_{\alpha\beta\gamma}^{\mathrm{mGEM}}h^{ \beta\gamma}\right>+\left<a^{\alpha},D_{\alpha\beta}^{\mathrm{mEM}}a^{\beta} \right>\Big{)}, \tag{39}\] where \[D_{\alpha\beta\gamma\delta}^{\mathrm{mG}}\doteq\sum_{s}\int\frac{ \mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})\bigg{[}\frac{k_{\mu}}{4}\frac{\partial }{\partial p_{\mu}}\left(\frac{\mathcal{T}_{\alpha\beta}\mathcal{T}_{\gamma \delta}}{\Omega}\right)\\ -Q_{\alpha\beta\gamma\delta}\bigg{]}, \tag{40a}\] \[D_{\alpha\beta\gamma}^{\mathrm{mGEM}}\doteq\sum_{s}\int\frac{ \mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})e_{s}\bigg{[}\frac{k_{\mu}}{2}\frac{ \partial}{\partial p_{\mu}}\left(\frac{\mathcal{T}_{\alpha\beta}p_{\gamma}}{ \Omega}\right)\\ -g_{\alpha(\gamma}p_{\beta)}\bigg{]},\] (40b) \[D_{\alpha\beta}^{\mathrm{mEM}}\doteq\sum_{s}\int\frac{\mathrm{d} \mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})e_{s}^{2}\bigg{[}k_{\mu}\frac{\partial}{\partial p _{\mu}}\bigg{(}\frac{\mathcal{T}_{\alpha\beta}}{\Omega}\bigg{)}\\ -g_{\alpha\beta}\bigg{]}, \tag{40c}\] where we introduced \[Q_{\alpha\beta\gamma\delta}\doteq\frac{1}{4}\left(g_{\beta \gamma}\mathcal{T}_{\alpha\delta}+g_{\alpha\delta}\mathcal{T}_{\beta\gamma}+g _{\alpha\gamma}\mathcal{T}_{\beta\delta}+g_{\beta\delta}\mathcal{T}_{\alpha \gamma}\right), \tag{41}\] \[\Omega_{\alpha\beta}\doteq p_{(\alpha}k_{\beta)},\quad\Omega \doteq p_{\alpha}k^{\alpha},\] (42) \[\mathcal{T}_{\alpha\beta}\doteq p_{\alpha}p_{\beta}. \tag{43}\] (Note that \(\mathcal{X}_{\alpha\beta\gamma\delta}\) introduced in [35, Eq. (120)] is the same as \(D_{\alpha\beta\gamma\delta}^{\mathrm{mG}}\) in Eq. (40a), as can be seen by comparing Eq. (40a) with [35, Eq. (B1)].) The functions (40) will be called dispersion functions.3 In particular, notice that the last term in the square bracket in Eq.
(40b) vanishes because of Eq. (37); however, it is retained here to accentuate parallels between the three expressions (40). Finally, following the same steps as in [35, Appendix B], one also obtains the following alternative representations of the dispersion functions to be used below. Footnote 3: In a more general sense, these are the Weyl symbols of the corresponding dispersion operators [42]. However, in the GO limit considered here, these subtleties can be ignored. \[D_{\alpha\beta\gamma\delta}^{\mathrm{mG}}\doteq\sum_{s}\int \frac{\mathrm{d}\mathbf{p}}{4(p^{0})^{2}}\bigg{(}\frac{\mathbf{k}\cdot\partial_{\mathbf{p }}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}\mathcal{T}_{ \gamma\delta}\\ +f_{s}J_{\alpha\beta\gamma\delta}^{\mathrm{mG}}\bigg{)}, \tag{44a}\] \[D_{\alpha\beta\gamma}^{\mathrm{mGEM}}\doteq\sum_{s}\int\frac{ \mathrm{d}\mathbf{p}}{2(p^{0})^{2}}\,e_{s}\bigg{(}\frac{\mathbf{k}\cdot\partial_{\mathbf{p }}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}p_{\gamma}\\ +f_{s}J_{\alpha\beta\gamma}^{\mathrm{mGEM}}\bigg{)},\] (44b) \[D_{\alpha\beta}^{\mathrm{mEM}}\doteq\sum_{s}\int\frac{\mathrm{d} \mathbf{p}}{(p^{0})^{2}}\,e_{s}^{2}\bigg{(}\frac{\mathbf{k}\cdot\partial_{\mathbf{p}}f_{s} }{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}\\ +f_{s}J_{\alpha\beta}^{\mathrm{mEM}}\bigg{)}, \tag{44c}\] where \[J_{\alpha\beta\gamma\delta}^{\mathrm{mG}}\doteq\frac{\partial( \mathcal{T}_{\alpha\beta}\mathcal{T}_{\gamma\delta})}{\partial p_{0}}-\frac{g ^{00}}{p^{0}}\,\mathcal{T}_{\alpha\beta}\mathcal{T}_{\gamma\delta}-4p^{0}Q_{ \alpha\beta\gamma\delta}, \tag{45a}\] \[J_{\alpha\beta\gamma}^{\mathrm{mGEM}}\doteq\frac{\partial( \mathcal{T}_{\alpha\beta}p_{\gamma})}{\partial p_{0}}-\frac{g^{00}}{p^{0}}\, \mathcal{T}_{\alpha\beta}p_{\gamma}-2p^{0}g_{\alpha(\gamma}p_{\beta)},\] (45b) \[J_{\alpha\beta}^{\mathrm{mEM}}\doteq\frac{\partial\mathcal{T}_{ \alpha\beta}}{\partial p_{0}}-\frac{g^{00}}{p^{0}}\,\mathcal{T}_{\alpha\beta}-p ^{0}g_{\alpha\beta}. \tag{45c}\] As a reminder, the above calculation assumes that no plasma particles are resonant with the wave; i.e., \(f_{s}(x,\mathbf{p})=0\) at those \(\mathbf{p}\) that satisfy \(\Omega=0\) at given \(x\). However, resonant particles can be added into consideration in the same way as is commonly done in the theory of plasma dispersion [39; 40]. (For the most comprehensive treatment, see Ref. [32].) Then, Eqs. (44) remain valid for \(\mathrm{Im}\,\omega>0\), and analytic continuation of the corresponding expressions for the dispersion functions should be used otherwise. These analytic continuations can be obtained by integrating over the Landau contour as opposed to the real momentum space [40]. Also, as a side remark, note that within the model considered here, EM and gravitational perturbations couple only via plasma and vanish in the limit when the plasma density is negligible. This is because the direct contribution of the photons to the spacetime curvature is assumed negligible compared to that of massive plasma particles. This means that the coupling considered here is different from the known direct photon-graviton conversion in the presence of magnetic fields [43; 44]. ## IV Equation for Gravito-Electromagnetic Waves ### Assumptions In what follows, we use normal coordinates, in which the first-order derivatives of the background metric vanish.
Then, \[g_{\alpha\beta}=\eta_{\alpha\beta}+\frac{1}{2}\left(\partial_{\sigma}\partial_{ \rho}g_{\alpha\beta}\right)x^{\rho}x^{\sigma}+\mathcal{O}(\ell_{g}^{-3}), \tag{46}\] and the metric's second-order derivatives can be expressed through the Riemann tensor \(R_{\alpha\beta\gamma\delta}\) as [45] \[\partial_{\sigma}\partial_{\rho}g_{\alpha\beta}=-\frac{1}{3}\left(R_{\alpha \rho\beta\sigma}+R_{\alpha\sigma\beta\rho}\right). \tag{47}\] As discussed in Ref. [17], the interaction of GWs with matter (in our case, plasma) is significant only when the Weyl tensor is not the dominant contributor to the curvature and \(R_{\alpha\beta}\), which is \(\mathcal{O}(\rho)\) by Einstein field equations, is of the same order as \(R_{\alpha\beta\gamma\delta}\), which is \(\mathcal{O}(\ell_{g}^{-2})\). Thus, below we assume that \[\mathbb{h}\ll\epsilon\sim\ell_{h}/\ell_{g}\sim\ell_{h}\sqrt{\rho}\ll 1, \tag{48}\] where \(\rho\) is the total mass density. Note that in typical plasmas, the ions are the dominant contributors to the mass density. Hence \[\rho\simeq\rho_{i}, \tag{49}\] where the index \(i\) denotes typical ions in the plasma with mass \(m_{i}\gg m_{e}\). (For simplicity, we assume that, in addition to electrons, the plasma contains ions of only one type.) Our results are also applicable at smaller densities, but then wave's gravitational coupling with matter is beyond the accuracy of our approximation and thus must be neglected. Another small parameter that we will use is \(m_{s}/e_{s}\). In typical plasmas, the values of \(m_{s}/e_{s}\) for various species are very small compared to unity. For example, electrons (denoted with index \(e\)) have \(m_{e}/e_{e}\sim 10^{-21}\), and protons (denoted with index \(p\)) have \(m_{p}/e_{p}\sim 10^{-18}\). We will also assume \[\frac{m_{s}^{2}}{e_{s}^{2}}\ll\epsilon \tag{50}\] for all species. As with Eq. (48), our results are also applicable at smaller densities, but then wave's gravitational coupling with matter must be neglected. ### Euler-Lagrange equations for the EM field The first set of GEM equations is obtained by considering the variation of the action (14) with respect to the EM four-potential: \[0=\frac{\delta S^{(2)}}{\delta a_{\alpha}}=\frac{\delta S^{(2)}_{\rm EM}}{ \delta a_{\alpha}}+\frac{\delta S^{(2)}_{\rm m}}{\delta a_{\alpha}}. \tag{51}\] The first term on the right-hand side is well known [46, Sec. 30]: \[\frac{g_{\alpha\beta}}{\sqrt{-g}}\frac{\delta S^{(2)}_{\rm EM}}{\delta a_{ \beta}}=-{a_{\beta,\alpha}}^{\beta}+{a_{\alpha,\beta}}^{\beta}. \tag{52}\] Using Eq. (25), this can also be expressed as \[\frac{g_{\alpha\beta}}{\sqrt{-g}}\frac{\delta S^{(2)}_{\rm EM}}{ \delta a_{\beta}}=k_{\alpha}k^{\beta}a_{\beta}-g^{\beta\gamma}\mathfrak{R}^{ \rm EM}_{\beta\alpha\gamma}\\ -k^{2}a_{\alpha}+g^{\beta\gamma}\mathfrak{R}^{\rm EM}_{\alpha\beta \gamma}, \tag{53}\] where we have introduced \[\mathfrak{R}^{\rm EM}_{\alpha\beta\gamma}\doteq\mathrm{e}^{\mathrm{i}\theta}( \mathfrak{a}_{\alpha,\beta\gamma}+\mathrm{i}k_{\gamma}\mathfrak{a}_{\alpha, \beta}+\mathrm{i}k_{\beta}\mathfrak{a}_{\alpha,\gamma}+\mathrm{i}k_{\gamma, \beta}\mathfrak{a}_{\alpha}). \tag{54}\] Note that \(\mathfrak{R}^{\rm EM}_{\alpha\beta\gamma}\sim\epsilon\), while \[D^{\rm mEM}_{\alpha\beta}\sim e^{2}\rho_{e}/m_{e}^{2}\sim e^{2}\epsilon^{2}/ m_{e}m_{i}, \tag{55}\] where \(e=|e_{e}|=-e_{e}\) is the elementary charge, and we used \(\rho_{e}/\rho_{i}\sim m_{e}/m_{i}\) due to the background-plasma neutrality.
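For orientation, the smallness assumption (50) is extremely well satisfied in practice; the following back-of-the-envelope sketch (ours, using SI constants) recovers the order of magnitude \(m_{e}/e_{e}\sim 10^{-21}\) quoted above in the adopted units with \(c=8\pi G_{\mathrm{N}}=\varepsilon_{0}=1\).

```python
import math

# In units with c = epsilon_0 = 1 and 8*pi*G_N = 1, the dimensionless
# mass-to-charge ratio is sqrt(8*pi*G_N) * m / (e / sqrt(epsilon_0)) in SI.
G_N  = 6.674e-11     # m^3 kg^-1 s^-2
eps0 = 8.854e-12     # F/m
m_e  = 9.109e-31     # kg
e    = 1.602e-19     # C

ratio = math.sqrt(8 * math.pi * G_N) * m_e / (e / math.sqrt(eps0))
print(f"|m_e/e_e| ~ {ratio:.1e}")   # ~ 2e-21, consistent with the text
```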
Then, under the assumption (50), \(\mathfrak{R}^{\rm EM}_{\alpha\beta\gamma}\) can be ignored compared with \(D^{\rm mEM}_{\alpha\beta}\). This leads to \[\frac{g_{\alpha\beta}}{\sqrt{-g}}\frac{\delta S^{(2)}_{\rm EM}}{\delta a_{ \beta}}\simeq k_{\alpha}k^{\beta}a_{\beta}-k^{2}a_{\alpha}. \tag{56}\] In conjunction with Eqs. (40) or Eqs. (44) for \(S^{(2)}_{\rm m}\), Eq. (56) can be used to rewrite Eq. (51) as \[k_{\alpha}k^{\beta}a_{\beta}-k^{2}a_{\alpha}+D^{\rm mEM}_{\alpha\beta}a^{ \beta}+D^{\rm mGEM}_{\alpha\beta\gamma}h^{\beta\gamma}=0. \tag{57}\] Equation (57) determines the GO dispersion relation and the local polarization of GEM waves. (For this, it must be combined with the equation for \(h^{\beta\gamma}\), which is derived in Sec. IV.3.) In the absence of gravitational interactions, Eq. (57) coincides with the well-known equation for dispersive waves in relativistic nonmagnetized plasma, which is usually expressed through the corresponding dielectric tensor; see, e.g., Eq. (9.42) in Ref. [32]. Note that, although approximate, Eq. (57) honors symmetries of the exact equations that describe the whole system. Indeed, as one can check by direct calculation [with Eq. (37) taken into account], the dispersion functions satisfy \[k^{\alpha}D^{\rm mEM}_{\alpha\beta}=0, \tag{58a}\] \[k^{\alpha}D^{\rm mGEM}_{\alpha\beta\gamma}=0,\] (58b) \[k^{\beta}D^{\rm mGEM}_{\alpha\beta\gamma}=0. \tag{58c}\] From here, one can see that Eq. (57) is invariant with respect to the EM gauge transformations \[a_{\alpha}\to a_{\alpha}+k_{\alpha}\varphi, \tag{59}\] where \(\varphi\) is any scalar function. Equation (58c) also ensures invariance of Eq. (57) with respect to the gauge transformation of the metric perturbation, \[h_{\alpha\beta}\to h^{\prime}_{\alpha\beta}=h_{\alpha\beta}-\pounds_{\xi}g_{ \alpha\beta}, \tag{60}\] where \(\pounds_{\xi}g_{\alpha\beta}=2\Lambda_{(\alpha}k_{\beta)}\mathrm{e}^{\mathrm{ i}\theta}\) is the Lie derivative of the background metric with respect to an arbitrary vector wave field \(\xi^{\alpha}=\mathrm{Re}\left(-\mathrm{i}\Lambda^{\alpha}\mathrm{e}^{\mathrm{ i}\theta}\right)=\mathcal{O}(\mathbb{h})\) in the GO limit. ### Euler-Lagrange equations for the metric perturbation The other set of GEM equations can be obtained by considering the variation of the action (14) with respect to the metric perturbation: \[\frac{\delta S^{(2)}}{\delta h^{\alpha\beta}}=0.
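Before proceeding, note that in the cold-plasma limit without gravitational coupling, Eq. (57) reproduces the textbook dispersion branches of a non-magnetized plasma; the following sketch (ours, quoting standard results rather than deriving them in the notation above) evaluates them in units with \(c=1\).

```python
import numpy as np

# Textbook cold non-magnetized plasma (for orientation only): the transverse
# EM branch obeys w^2 = w_p^2 + k^2, and the longitudinal (Langmuir) branch
# reduces to w = w_p, in units with c = 1.
w_p = 1.0                              # plasma frequency
for k in np.linspace(0.0, 3.0, 4):
    w_T = np.sqrt(w_p**2 + k**2)       # transverse branch
    print(f"k = {k:.1f}:  w_T = {w_T:.3f},  w_L = {w_p:.3f}")
```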
\tag{61}\] These equations can be expressed as \[\left(\widehat{D}^{\rm vac}_{\alpha\beta\gamma\delta}+\widehat{\mathcal{G}}_{ \alpha\beta\gamma\delta}+D^{\rm mG}_{\alpha\beta\gamma\delta}\right)h^{\gamma \delta}+a^{\mu}D^{\rm mGEM}_{\mu\alpha\beta}=0, \tag{62}\] with the operators \(\widehat{D}^{\rm vac}_{\alpha\beta\gamma\delta}\) and \(\widehat{\mathcal{G}}_{\alpha\beta\gamma\delta}\) defined as [17] \[\widehat{D}^{\rm vac}_{\alpha\beta\gamma\delta}h^{\gamma\delta} \doteq\frac{1}{4}\big{(}\partial^{\rho}\partial_{\rho}h_{\alpha \beta}-g_{\alpha\beta}g^{\rho\sigma}\partial^{\lambda}\partial_{\lambda}h_{ \rho\sigma}+g^{\rho\sigma}\partial_{\alpha}\partial_{\beta}h_{\rho\sigma}\\ +g_{\alpha\beta}\partial^{\rho}\partial^{\sigma}h_{\rho\sigma}- \partial^{\rho}\partial_{\alpha}h_{\beta\rho}-\partial^{\rho}\partial_{\beta} h_{\alpha\rho}\big{)}, \tag{63}\] \[\widehat{\mathcal{G}}_{\alpha\beta\gamma\delta}h^{\gamma\delta} \doteq\frac{1}{4}\big{(}-Gh_{\alpha\beta}-G_{\alpha\beta}h+2G_{\alpha\rho}h_ {\beta}{}^{\rho}\\ +2G_{\rho\beta}h^{\rho}{}_{\alpha}-2g_{\alpha\beta}R_{\rho\sigma }h^{\rho\sigma}+2R_{\rho\alpha\sigma\beta}h^{\rho\sigma}\big{)}. \tag{64}\] For quasimonochromatic waves [Eq. (25)], Eq. (62) can also be written as \[4D^{(0)}_{\alpha\beta\gamma\delta}h^{\gamma\delta}+M_{\alpha\beta}+4a^{\mu}D^ {\rm mGEM}_{\mu\alpha\beta}=0. \tag{65}\] Here, \(D^{(0)}_{\alpha\beta\gamma\delta}\) is defined as \[D^{(0)}_{\alpha\beta\gamma\delta}\doteq\frac{1}{4}\big{(}-k^{2} g_{\alpha\gamma}g_{\beta\delta}+g_{\alpha\beta}g_{\gamma\delta}k^{2}-k_{ \alpha}k_{\beta}g_{\gamma\delta}\\ -g_{\alpha\beta}k_{\gamma}k_{\delta}+k_{\alpha}k_{\gamma}g_{\beta \delta}+k_{\beta}k_{\delta}g_{\alpha\gamma}\big{)}, \tag{66}\] \(M_{\alpha\beta}\) describes wave's gravitational coupling to the matter, \[M_{\alpha\beta}\doteq\mathfrak{R}_{\alpha\beta\rho}{}^{\rho}-g_ {\alpha\beta}g^{\rho\sigma}\mathfrak{R}_{\rho\sigma\lambda}{}^{\lambda}+g^{ \rho\sigma}\mathfrak{R}_{\rho\sigma\alpha\beta}\\ +g_{\alpha\beta}\mathfrak{R}_{\rho\sigma}{}^{\rho\sigma}- \mathfrak{R}_{\rho\beta\alpha}{}^{\rho}-\mathfrak{R}_{\rho\alpha\beta}{}^{ \rho}+2h^{\rho\sigma}C_{\rho\alpha\sigma\beta}\\ -g_{\alpha\beta}G_{\rho\sigma}h^{\rho\sigma}+h_{\beta}{}^{\rho}G _{\alpha\rho}+h^{\rho}{}_{\alpha}G_{\rho\beta}\\ -(h_{\alpha\beta}-hg_{\alpha\beta})G/3+4D^{\rm mG}_{\alpha\beta \gamma\delta}h^{\gamma\delta}, \tag{67}\] and \(\mathfrak{R}_{\alpha\beta\mu\nu}\doteq\mathrm{e}^{\mathrm{i}\theta}(\partial_{\nu} \partial_{\mu}\mathfrak{h}_{\alpha\beta}+2\mathrm{i}k_{(\mu}\partial_{\nu)} \mathfrak{h}_{\alpha\beta}+\mathrm{i}\mathfrak{h}_{ \alpha\beta}\partial_{\mu}k_{\nu})\). Note that \(D^{\rm mG}_{\alpha\beta\gamma\delta}\) and \(D^{\rm mGEM}_{\alpha\beta\gamma}\) do not have any particular symmetries in the general case. Hence, the symmetry considerations that are commonly used to study waves in vacuum are not necessarily applicable in the presence of plasma. This means that, to find the wave polarization, in the general case one actually has to solve Eq. (65). In particular, this requires calculating \(M_{\alpha\beta}\), which can also be written as follows: \[M_{\alpha\beta}=4D^{\rm mG}_{\alpha\beta\gamma\delta}h^{\gamma\delta}+\mathcal{ O}(\epsilon\mathbb{h}). \tag{68}\] The term \(\mathcal{O}(\epsilon\mathbb{h})\), which contains the derivatives of the amplitudes, is not necessarily small compared with \(D^{\rm mG}_{\alpha\beta\gamma\delta}h^{\gamma\delta}\), which is \(\mathcal{O}(\rho\mathbb{h})\) in the general case (48).
Hence, one must either give up the GO approximation and consider the amplitude derivatives along with \(D^{\rm mG}_{\alpha\beta\gamma\delta}h^{\gamma\delta}\), or neglect the gravitational coupling to the matter, \(M_{\alpha\beta}\), altogether. However, as already noted in Ref. [17], \(M_{\alpha\beta}\) can be retained in special cases where additional large parameters are present. In particular, one such case is the quasistatic limit, where waves have a large refraction index. This limit is discussed in detail in Sec. V. Like Eq. (57), equation (65) is invariant with respect to EM gauge transformations (59), as one can verify by a straightforward calculation using Eq. (58b). To address invariance with respect to gravitational gauge transformations (60), one can use the corresponding argument from Ref. [17] and adapt it to the case when EM interactions are present. Specifically, let us decompose Eq. (65) into the longitudinal part (cf. Eq. (4.20) from Ref. [17]) \[k^{\alpha}M_{\alpha\beta}+4k^{\alpha}a^{\mu}D^{\rm mGEM}_{\mu\alpha\beta}=0 \tag{69}\] and the transverse part (cf. Eq. (4.21) from Ref. [17]) \[\Pi^{\gamma\rho}\Pi^{\delta\sigma}\big{(}k^{2}h_{\rho\sigma}- \bar{M}_{\rho\sigma}\\ -4a^{\mu}D^{\rm mGEM}_{\mu\rho\sigma}+2g_{\rho\sigma}a^{\mu}D^{ \rm mGEM}_{\mu\alpha\beta}g^{\alpha\beta}\big{)}=0, \tag{70}\] where \(\Pi^{\alpha\beta}\doteq g^{\alpha\beta}-k^{\alpha}k^{\beta}/k^{2}\) is a projection tensor,4 and the overbar represents the trace-reverse of the corresponding rank-2 tensor. According to Eq. (70), one has Footnote 4: Here, we assume \(k^{2}\neq 0\), which is a valid assumption in the presence of plasma with nonzero density. The vacuum case can be considered within this approach as the limit in which the plasma density is nonzero but vanishingly small. \[k^{2}h_{\rho\sigma}-\bar{M}_{\rho\sigma}-4a^{\mu}D^{\rm mGEM}_{\mu \rho\sigma}+2g_{\rho\sigma}a^{\mu}D^{\rm mGEM}_{\mu\alpha\beta}g^{\alpha\beta}\\ =\lambda_{\rho}k_{\sigma}+k_{\rho}\lambda_{\sigma}, \tag{71}\] where \(\lambda_{\alpha}\) is some vector field. Comparing the trace-reverse of Eq. (65), \[-k^{2}h_{\alpha\beta}+k_{\alpha}k_{\gamma}\bar{h}_{\beta}^{\gamma}+ k_{\beta}k_{\gamma}\bar{h}_{\alpha}^{\gamma}+\bar{M}_{\alpha\beta}\\ +4a^{\mu}D^{\rm mGEM}_{\mu\alpha\beta}-2g_{\alpha\beta}a^{\mu}D^{ \rm mGEM}_{\mu\gamma\delta}g^{\gamma\delta}=0 \tag{72}\] one immediately finds from Eq. (71) that \[\lambda_{\alpha}=k^{\beta}\bar{h}_{\alpha\beta}, \tag{73}\] which are the degrees of freedom always afforded by the metric gauge invariance [47, Sec. 8.3]. Thus, up to gauge freedom, the transverse part encodes all the physical information required to determine the solution for the metric perturbation. Then, Eq. (69) can be used as a test of whether the model for the dispersion function \(D^{\rm mG}_{\alpha\beta\gamma\delta}\) preserves gauge invariance. Using Eq. (58c), this equation can be further simplified down to \[k^{\alpha}M_{\alpha\beta}=0, \tag{74}\] which coincides with the corresponding equation for neutral matter in Ref. [17]. As shown in Ref. [17], Eq. (74) is also equivalent to \[M_{\rho\sigma}[2\Lambda_{(\alpha}k_{\beta)}\mathrm{e}^{\mathrm{i}\theta}]=0, \tag{75}\] where the square brackets denote the argument on which \(M_{\rho\sigma}[h_{\alpha\beta}]\) is evaluated, and \(\Lambda_{\alpha}\) is an arbitrary small field. ## V Quasistatic limit ### Basic equations As mentioned in Sec. 
IV.3, gravitational coupling with matter can be retained within the GO approximation if there is a large parameter that allows one to ignore the amplitude derivatives in the \(\mathcal{O}(\epsilon\hbar)\) term in Eq. (68) with respect to the matter coupling term given by \(D^{\mathrm{mG}}_{\alpha\beta\gamma\delta}h^{\gamma\delta}\). A possible candidate for such a parameter is the refraction index \(N\). In what follows, we assume the parametrization \(k^{\alpha}=(\omega,0,0,\mathsf{k})\), so \(\omega\) is the wave frequency and \(\mathsf{k}\) is the spatial wavenumber; then \(N\doteq\mathsf{k}/\omega\). The dispersion functions (44) scale as \(\mathcal{O}(\epsilon^{2}N^{2})\), as can be seen by expanding the derivatives in Eqs. (40). Thus, for a large enough \(N\gg 1\), the GO limit can be employed as follows. In this limit, the dispersion functions can be approximated as \[D^{\mathrm{mG}}_{\alpha\beta\gamma\delta} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})\,\frac{k_{\mu}}{4}\frac{\partial}{\partial p_{\mu}}\left(\frac{\mathcal{T}_{\alpha\beta}\mathcal{T}_{\gamma\delta}}{\Omega}\right), \tag{76a}\] \[D^{\mathrm{mGEM}}_{\alpha\beta\gamma} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})e_{s}\,\frac{k_{\mu}}{2}\frac{\partial}{\partial p_{\mu}}\left(\frac{\mathcal{T}_{\alpha\beta}p_{\gamma}}{\Omega}\right), \tag{76b}\] \[D^{\mathrm{mEM}}_{\alpha\beta} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{p^{0}}\,f_{s}(x,\mathbf{p})e_{s}^{2}k_{\mu}\,\frac{\partial}{\partial p_{\mu}}\bigg{(}\frac{\mathcal{T}_{\alpha\beta}}{\Omega}\bigg{)}, \tag{76c}\] where Eqs. (40) are used, the subdominant terms \(\mathcal{O}(N^{0})\) are ignored, and the remaining dominant terms are \(\mathcal{O}(N^{2})\).5 Furthermore, Eqs. (67) and (68) can be written as \[M_{\alpha\beta}=4D^{\mathrm{mG}}_{\alpha\beta\gamma\delta}h^{\gamma\delta}. \tag{77}\] Footnote 5: The smaller terms \(\mathcal{O}(N^{1})\) are also retained in general. Then, it is readily seen that Eqs. (58) and (74) are satisfied identically up to terms \(\mathcal{O}(N^{0}\mathsf{k})\) that are negligible within the assumed approximation. This makes Eqs. (76) a satisfactory model that retains gauge invariance with respect to both the EM gauge and the coordinate gauge. The modes that satisfy this approximation can be called _gravito-electrostatic_ by analogy with electrostatic waves such as the Langmuir mode and the gravitostatic waves [17] such as the Jeans mode. Similarly, Eqs. (44) for the dispersion functions can be approximated as \[D^{\mathrm{mG}}_{\alpha\beta\gamma\delta} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{4(p^{0})^{2}}\frac{\mathbf{k}\cdot\partial_{\mathbf{p}}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}\mathcal{T}_{\gamma\delta}, \tag{78a}\] \[D^{\mathrm{mGEM}}_{\alpha\beta\gamma} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{2(p^{0})^{2}}\,e_{s}\frac{\mathbf{k}\cdot\partial_{\mathbf{p}}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}p_{\gamma}, \tag{78b}\] \[D^{\mathrm{mEM}}_{\alpha\beta} =\sum_{s}\int\frac{\mathrm{d}\mathbf{p}}{(p^{0})^{2}}\,e_{s}^{2}\frac{\mathbf{k}\cdot\partial_{\mathbf{p}}f_{s}}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathcal{T}_{\alpha\beta}. \tag{78c}\] These are equivalent to Eqs. (76) up to subdominant terms \(\mathcal{O}(N^{0})\), as can be seen by comparing Eqs. (40) and (44).
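(A note on the ordering used repeatedly below: with the parametrization \(k^{\alpha}=(\omega,0,0,\mathsf{k})\) and assuming the \((-,+,+,+)\) metric signature that we read off the formulas above, one has \[k^{2}=k^{\alpha}k_{\alpha}=-\omega^{2}+\mathsf{k}^{2}=\omega^{2}\left(N^{2}-1\right)\simeq\mathsf{k}^{2}\quad\text{for}\quad N\gg 1,\] which is the replacement \(k^{2}\simeq\mathsf{k}^{2}\) invoked in the limits considered next.)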
### Newtonian limit Now, let us consider the Newtonian limit,6 in which \(N\to\infty\) and the background energy-momentum tensor is nonrelativistic, Footnote 6: Some effects of Newtonian gravity on plasma dynamics have been addressed in the literature; see, for example, [48; 49; 50; 51]. \[\frac{\mathcal{T}_{\alpha\beta}}{(p^{0})^{2}}\simeq\delta^{0}_{\alpha}\delta^{ 0}_{\beta}. \tag{79}\] (This approximation does not exclude all thermal effects; see below.) Pursuing an approach similar to the one used in Ref. [17], let us change the normalization of the distribution functions \(f_{s}(x,\mathbf{p})\,\mathrm{d}\mathbf{p}\to N_{s}(x)\,f_{s}(\mathbf{v})\,\mathrm{d}\mathbf{v}\), as common in nonrelativistic plasma theory [40], with \(N_{s}=\rho_{s}/m_{s}\). Then, Eqs. (78) can be written as \[D^{\mathrm{mG}}_{\alpha\beta\gamma\delta} =\frac{1}{4}\sum_{s}\mathcal{X}_{s}\delta^{0}_{\alpha}\delta^{0}_ {\beta}\delta^{0}_{\gamma}\delta^{0}_{\delta}, \tag{80a}\] \[D^{\mathrm{mGEM}}_{\alpha\beta\gamma} =-\frac{1}{2}\sum_{s}\frac{e_{s}}{m_{s}}\,\mathcal{X}_{s}\delta^{0 }_{\alpha}\delta^{0}_{\beta}\delta^{0}_{\gamma},\] (80b) \[D^{\mathrm{mEM}}_{\alpha\beta} =\sum_{s}\frac{e_{s}^{2}}{m_{s}^{2}}\,\mathcal{X}_{s}\delta^{0}_ {\alpha}\delta^{0}_{\beta}, \tag{80c}\] where we introduced \[\mathcal{X}_{s}\doteq\rho_{s}(x)\int_{\mathcal{L}}\frac{\mathbf{k}\cdot\partial_{ \mathbf{v}}f_{s}(\mathbf{v})}{\omega-\mathbf{k}\cdot\mathbf{v}}\,\mathrm{d}\mathbf{v}. \tag{81}\] Then, Eq. (77) can be used to write \[M_{\alpha\beta}=\sum_{s}\mathcal{X}_{s}\,\delta^{0}_{\alpha}\delta^{0}_{\beta}h ^{00},\] (82a) whence \[\bar{M}_{\alpha\beta} =\sum_{s}\mathcal{X}_{s}h^{00}\left(\delta^{0}_{\alpha}\delta^{0}_ {\beta}+\frac{1}{2}\,\eta_{\alpha\beta}\right)\] \[=\sum_{s}\frac{1}{2}\,\mathcal{X}_{s}h^{00}I_{\alpha\beta}, \tag{82b}\] where \(I_{\alpha\beta}\) is a unit matrix. Using the EM and gravitational gauge invariance, let us adopt the Lorenz gauge both for the EM potential and the metric perturbation, i.e., \[k^{\alpha}a_{\alpha}=0,\qquad k^{\beta}\bar{h}_{\alpha\beta}=0. \tag{83}\] Then, Eqs. (57) and (72) can be written as follows: \[-k^{2}a_{\alpha}+\sum_{s}\left(\frac{e_{s}^{2}}{m_{s}^{2}}\, \mathcal{X}_{s}\ a^{0}-\frac{e_{s}}{2m_{s}}\,\mathcal{X}_{s}h^{00}\right) \delta_{\alpha}^{0}=0, \tag{84a}\] \[-k^{2}h_{\alpha\beta}+\sum_{s}\left(\frac{1}{2}\,\mathcal{X}_{s}h ^{00}-\frac{e_{s}}{m_{s}}\,\mathcal{X}_{s}\ a^{0}\right)I_{\alpha\beta}=0. \tag{84b}\] It is seen from here that gravito-electrostatic modes have the longitudinal polarization similar to that found for the Langmuir mode and the Jeans mode, i.e., \[h_{\alpha\beta}=I_{\alpha\beta}\mathfrak{h},\qquad a_{\alpha}=\delta_{\alpha} ^{0}\mathfrak{a}, \tag{85}\] where \(\mathfrak{h}\) and \(\mathfrak{a}\) denote the scalar amplitudes of the respective wave fields. Substituting this into Eq. (84) readily yields \[k^{2}\mathfrak{a}+\sum_{s}\frac{e_{s}}{m_{s}}\left(\frac{1}{2} \,\mathcal{X}_{s}\mathfrak{h}+\frac{e_{s}}{m_{s}}\,\mathcal{X}_{s}\mathfrak{a }\right)=0, \tag{86}\] \[-k^{2}\mathfrak{h}+\sum_{s}\left(\frac{1}{2}\,\mathcal{X}_{s} \mathfrak{h}+\frac{e_{s}}{m_{s}}\,\mathcal{X}_{s}\mathfrak{a}\right)=0, \tag{87}\] so the wave polarization is found to be \[\frac{\mathfrak{a}}{\mathfrak{h}}=\frac{k^{2}-W_{0}/2}{W_{1}}, \tag{88}\] where we have introduced the notation \[W_{n}\doteq\sum_{s}\left(\frac{e_{s}}{m_{s}}\right)^{n}\mathcal{X}_{s}. 
\tag{89}\] The corresponding dispersion relation is given by \[k^{2}=\frac{W_{0}}{4}-\frac{W_{2}}{2}\pm\sqrt{\left(\frac{W_{0}}{4}+\frac{W_{2}}{2}\right)^{2}-\frac{W_{1}^{2}}{2}}. \tag{90}\] Below, we explore this result in several limits. ### Cold plasma limit In the cold-plasma limit, where all particle velocities satisfy \(v\ll\omega/\mathsf{k}\) and can be neglected completely, one has \[\int_{\mathcal{L}}\mathrm{d}\boldsymbol{v}\,\frac{\boldsymbol{k}\cdot\partial_{\boldsymbol{v}}f(\boldsymbol{v})}{\omega-\boldsymbol{k}\cdot\boldsymbol{v}} \simeq\int\mathrm{d}\boldsymbol{v}\left(1+\frac{\boldsymbol{k}\cdot\boldsymbol{v}}{\omega}\right)\frac{\boldsymbol{k}}{\omega}\cdot\partial_{\boldsymbol{v}}f(\boldsymbol{v})\] \[=\frac{1}{\omega^{2}}\int\mathrm{d}\boldsymbol{v}\,\left(\boldsymbol{k}\cdot\boldsymbol{v}\right)\boldsymbol{k}\cdot\partial_{\boldsymbol{v}}f(\boldsymbol{v})\] \[=-\frac{\mathsf{k}^{2}}{\omega^{2}}\int\mathrm{d}\boldsymbol{v}\,f(\boldsymbol{v})\] \[=-\frac{\mathsf{k}^{2}}{\omega^{2}}. \tag{91}\] This readily yields \[\mathcal{X}_{s}=-\rho_{s}\,\frac{\mathsf{k}^{2}}{\omega^{2}}. \tag{92}\] Due to the assumed background neutrality \(\sum_{s}\left(e_{s}\rho_{s}/m_{s}\right)\!=\!0\) [Eq. (37)], this gives \(W_{1}=0\). Remember also that \(k^{2}\simeq\mathsf{k}^{2}\), since \(N\gg 1\) is assumed. Then, the square root in Eq. (90) reduces to \(W_{0}/4+W_{2}/2\), so Eq. (90) predicts the following two modes. One of them satisfies \[k^{2}=-W_{2}, \tag{93}\] whence \(\omega^{2}=\omega_{\mathrm{p}}^{2}\), where \(\omega_{\mathrm{p}}\) is the plasma frequency: \[\omega_{\mathrm{p}}^{2}\doteq\sum_{s}\frac{e_{s}^{2}}{m_{s}^{2}}\,\rho_{s}. \tag{94}\] The corresponding polarization is \((\mathfrak{h},\mathfrak{a})=(0,1)\times\mathrm{const}\). This is the familiar Langmuir mode [40]. The other mode predicted by Eq. (90) satisfies \[k^{2}=\frac{W_{0}}{2}. \tag{95}\] This leads to \(\omega^{2}=-\omega_{\mathrm{J}}^{2}\), where \(\omega_{\mathrm{J}}\) is the Jeans frequency: \[\omega_{\mathrm{J}}^{2}\doteq\frac{1}{2}\sum_{s}\rho_{s}. \tag{96}\] The corresponding polarization is \((\mathfrak{h},\mathfrak{a})=(1,0)\times\mathrm{const}\). This is the familiar Jeans mode [52; 53; 54]. Notice that \[\omega_{\mathrm{p}}^{2}\simeq e^{2}\rho_{\mathrm{e}}/m_{e}^{2},\qquad\omega_{\mathrm{J}}^{2}\simeq\rho_{\mathrm{i}}/2, \tag{97}\] so \(\omega_{\mathrm{J}}^{2}/\omega_{\mathrm{p}}^{2}\ll 1\) by Eq. (50). As a side remark, note that our derivation of the Jeans mode also presents an alternative resolution of the "Jeans swindle", the problem that the traditional derivation of the Jeans mode arbitrarily ignores the background gravitational field, which is necessarily present in the presence of matter [19; 21]. The proposed resolutions to the Jeans swindle involve assuming an infinite homogeneous background [21] or cancelling out the background effect by the Hubble expansion [20]. In contrast, our derivation arrives at the local Jeans dispersion relation using formal asymptotic analysis of the linear GWs even for a Minkowski background with an inhomogeneous mass distribution. Also note that the absence of coupling between the electrostatic and gravitational modes in the cold-plasma limit is a general statement that is valid also beyond the quasistatic approximation. This is seen from the fact that this interaction is determined by the "cross term" \(\langle a^{\alpha},D_{\alpha\beta\gamma}^{\mathrm{mGEM}}h^{\beta\gamma}\rangle\) in the action (39). As easily seen from Eq.
(40b), the dispersion function \(D_{\alpha\beta\gamma}^{\mathrm{mGEM}}\) vanishes in the cold limit due to neutrality of the background plasma. Thus, in the cold limit, gravitational modes and EM modes cannot interact (within the model assumed in this paper) irrespective of the value of \(k^{\alpha}\). ### Warm plasma In a warm plasma, when the characteristic speeds of particles are small compared to \(\omega/\mathsf{k}\) but not entirely negligible, the coupling term \(W_{1}\) is small but nonzero and can be treated perturbatively. In this case, the square root in Eq. (90) can be Taylor-expanded in \(W_{1}\). This results in small modifications of the Jeans mode and the Langmuir mode. For the modified Jeans mode, one obtains \[k^{2} =-\frac{\mathsf{k}^{2}}{\omega^{2}}\,\omega_{\rm J}^{2}-\frac{W_{1}^{2}}{2W_{2}}, \tag{98a}\] \[\frac{\mathfrak{a}}{\mathfrak{h}} =-\frac{W_{1}}{2W_{2}}, \tag{98b}\] where we used Eq. (96), and the modified Langmuir mode satisfies \[k^{2} =\frac{\mathsf{k}^{2}}{\omega^{2}}\,\omega_{\rm p}^{2}+\frac{W_{1}^{2}}{2W_{2}}, \tag{99a}\] \[\frac{\mathfrak{a}}{\mathfrak{h}} =-\frac{W_{2}}{W_{1}}+\frac{W_{1}}{2W_{2}}, \tag{99b}\] where we used Eq. (94) and \(W_{0}\ll W_{2}\). As a specific example, let us consider a warm Maxwellian plasma consisting of electrons with temperature \(T_{e}=m_{e}v_{Te}^{2}\) and ions with comparable (or smaller) temperature \(T_{i}=m_{i}v_{Ti}^{2}\). Due to the smallness of the ratio \(m_{e}/m_{i}\) and the assumed condition \(v_{s}\ll\omega/\mathsf{k}\) for all \(s\), one has \(v_{Ti}\ll v_{Te}\ll\omega/\mathsf{k}\). This renders the ion temperature negligible, while the electron temperature can be treated as a small but nonvanishing correction to the case considered in Sec. V.3. Then, one obtains [40] \[\mathcal{X}_{e} \simeq-\rho_{e}\,\frac{\mathsf{k}^{2}}{\omega^{2}}\left(1+\frac{3\mathsf{k}^{2}v_{Te}^{2}}{\omega^{2}}\right), \tag{100}\] \[\mathcal{X}_{i} \simeq-\rho_{i}\,\frac{\mathsf{k}^{2}}{\omega^{2}}, \tag{101}\] whence \[W_{1} \simeq 3\,eN_{e}\,\frac{\mathsf{k}^{4}v_{Te}^{2}}{\omega^{4}}. \tag{102}\] Using this in Eqs. (98a) and (99a) immediately leads to the dispersion relations \(\omega^{2}=-\omega_{\rm J}^{2}+\Delta^{2}\) for the modified Jeans mode and \(\omega^{2}=\omega_{\rm p}^{2}-\Delta^{2}\) for the modified Langmuir mode, respectively, with \[\frac{\Delta^{2}}{\omega_{\rm J}^{2}} \simeq\frac{9m_{e}}{m_{i}}\,\frac{\mathsf{k}^{4}v_{Te}^{4}}{\omega^{4}},\quad\frac{\Delta^{2}}{\omega_{\rm p}^{2}}\simeq\frac{9m_{e}^{2}}{2e^{2}}\,\frac{\mathsf{k}^{4}v_{Te}^{4}}{\omega^{4}}, \tag{103}\] where we have retained only the leading-order terms and neglected corrections \(\mathcal{O}(\mathsf{k}^{2}v_{s}^{2}/\omega^{2})\). The relative strength of the correction for the Jeans mode is very small, which is in line with the cold plasma limit of no interaction at all. Similarly, the relative strength of the modification of the Langmuir mode due to gravitational effects is further suppressed by an additional factor of \(m_{i}m_{e}/e^{2}\ll 1\), due to the gravitational forces being weaker compared to the EM forces. ### Plasma with cold ions and hot electrons A more significant interaction between the Jeans mode and plasma electrostatic modes can be found at low frequencies that satisfy \[v_{Ti}\ll\omega/\mathsf{k}\ll v_{Te}. \tag{104}\] In this regime, Eq. (81) can be approximated as [40, Sec. 8-13] \[\mathcal{X}_{e}=\frac{\rho_{e}}{v_{Te}^{2}},\qquad\mathcal{X}_{i} =-\frac{\rho_{i}\mathsf{k}^{2}}{\omega^{2}}.
\tag{105}\] In the absence of gravitational interactions, such a plasma supports ion acoustic oscillations \[\omega^{2}\simeq\frac{k^{2}c_{\rm s}^{2}}{1+k^{2}\lambda_{De}^{2}} \to k^{2}c_{\rm s}^{2}. \tag{106}\] Here, \(\lambda_{De}\doteq v_{Te}/\omega_{\rm p}\) is the electron Debye length, \(c_{\rm s}\doteq(ZT_{e}/m_{i})^{1/2}\) is the ion sound speed (assuming \(e_{i}=Ze\)), and the second part of Eq. (106) corresponds to the limit \(k^{2}\lambda_{De}^{2}\ll 1\), where the dispersion relation becomes particularly simple (sound-like). Although electric charge \(e\) does not enter Eq. (106) explicitly, the electrostatic interactions are important in that they tie cold ions, which provide inertia, to hot electrons, which carry pressure. Now, let us reinstate gravitational interactions, assuming \(\mathsf{k}^{2}c_{\rm s}^{2}\sim\omega_{\rm J}^{2}\). In this case, the square root in the dispersion relation (90) can still be Taylor-expanded in \(W_{1}\) as per Eqs. (98) and (99), since \[\frac{W_{1}}{W_{2}} =\left(-1-\frac{\mathsf{k}^{2}v_{Te}^{2}}{\omega^{2}}\right)\left(\frac{e}{m_{e}}-\frac{Ze\mathsf{k}^{2}v_{Te}^{2}}{m_{i}\omega^{2}}\right)^{-1}\] \[\simeq\frac{m_{i}}{e}\left(Z-\frac{m_{i}/m_{e}}{\mathsf{k}^{2}v_{Te}^{2}/\omega^{2}}\right)^{-1} \tag{107}\] is small, as can be verified a posteriori [using Eq. (109)]. Then, \[\frac{W_{1}^{2}}{W_{2}} \simeq-\rho_{e}\,\frac{\mathsf{k}^{2}}{\omega^{2}}\left(-\frac{\omega^{2}}{\mathsf{k}^{2}v_{Te}^{2}}+\frac{Zm_{e}}{m_{i}}\right)^{-1}\] \[=\rho_{i}\,\frac{\mathsf{k}^{2}}{\omega^{2}}\left(\frac{\mathsf{k}^{2}c_{\rm s}^{2}}{\omega^{2}-\mathsf{k}^{2}c_{\rm s}^{2}}\right), \tag{108}\] so Eq. (98a) leads to the following dispersion relation: \[\omega^{2}=-\omega_{\rm J}^{2}+\mathsf{k}^{2}c_{\rm s}^{2}, \tag{109}\] where we have again approximated \(\omega_{\rm J}^{2}\simeq\rho_{i}/2\). One can understand this as the Jeans mode hybridized with the ion-acoustic branch. (For comparison: in a neutral gas, the ion sound speed in the Jeans mode's dispersion relation is replaced by the particle thermal speed [17].) One can similarly calculate the correction to the ion acoustic waves that results from their hybridization with the Jeans mode. It is easy to see that this correction is of order \(\sim m_{e}m_{i}/e^{2}\). Thus, the effect of gravitational interactions on electrostatic waves is negligible, which is again due to the relative weakness of the gravitational forces compared with the EM forces. ## VI Transverse waves Now let us consider transverse waves. To the extent that gravitational effects can be neglected, transverse EM waves in nonrelativistic plasma satisfy the well-known GO dispersion relation \[k^{2}=-\omega_{\rm p}^{2}, \tag{110}\] and GWs satisfy the GO dispersion relation \[k^{2}=0, \tag{111}\] because corrections due to the interaction with the matter are beyond the GO approximation [17]. Let us consider the interaction of these waves perturbatively. Assuming the Lorenz gauge (83) again, Eqs. (57) and (72) can be written as follows: \[-k^{2}a_{\alpha}+D_{\alpha\beta}^{\rm mEM}a^{\beta}+D_{\alpha\beta\gamma}^{\rm mGEM}h^{\beta\gamma}=0, \tag{112}\] \[-k^{2}h_{\alpha\beta}+\bar{M}_{\alpha\beta}+4a^{\mu}D_{\mu\alpha\beta}^{\rm mGEM}-2g_{\alpha\beta}a^{\mu}D_{\mu\gamma\delta}^{\rm mGEM}g^{\gamma\delta}=0. \tag{113}\] Let us first consider the mode whose frequency is closest to that given by the EM dispersion relation (110). For this mode, the last term in Eq.
(112) is a small perturbation and \(h_{\alpha\beta}\) can be found perturbatively from Eq. (113) by substituting \(k^{2}\simeq-\omega_{\rm p}^{2}\) there, i.e., adopting \[\omega_{\rm p}^{2}h_{\alpha\beta}+\bar{M}_{\alpha\beta}+4a^{\mu}D_{\mu\alpha\beta}^{\rm mGEM}-2g_{\alpha\beta}a^{\mu}D_{\mu\gamma\delta}^{\rm mGEM}g^{\gamma\delta}=0. \tag{114}\] Since \(N\sim 1\) for such waves, one has \(M_{\alpha\beta}\sim{\cal O}(\epsilon)\), so \(M_{\alpha\beta}\) can be ignored. Let us also simplify our notation by introducing \(D_{\alpha\beta\gamma}\doteq D_{\alpha\beta\gamma}^{\rm mGEM}\). Then, one can write that \[h_{\alpha\beta}=-\frac{1}{\omega_{\rm p}^{2}}\left(4a^{\mu}D_{\mu\alpha\beta}-2g_{\alpha\beta}a^{\mu}D_{\mu\gamma\delta}g^{\gamma\delta}\right). \tag{115}\] By substituting this into Eq. (112), one obtains \[-k^{2}a_{\alpha}+{\cal D}_{\alpha\mu}^{\rm mEM}a^{\mu}=0, \tag{116}\] where \({\cal D}_{\alpha\mu}^{\rm mEM}\) can be understood as the EM dispersion function dressed by gravitational interactions: \[{\cal D}_{\alpha\mu}^{\rm mEM}\doteq D_{\alpha\mu}^{\rm mEM}-2\omega_{\rm p}^{-2}D_{\alpha\beta\gamma}\left(2D_{\mu}{}^{\beta\gamma}-g^{\beta\gamma}D_{\mu\delta}{}^{\delta}\right). \tag{117}\] Since \(D_{\alpha\beta\gamma}\) vanishes in the cold-plasma limit, the difference between \({\cal D}_{\alpha\mu}^{\rm mEM}\) and \(D_{\alpha\mu}^{\rm mEM}\) is entirely due to finite particle speeds (in the frame specified in Sec. IV.1). This difference can be estimated as follows. Assuming nonrelativistic speeds, one has \(p_{\alpha}\simeq-m\delta_{\alpha}^{0}\), so \(a^{\mu}D_{\mu\alpha\beta}\simeq a^{0}D_{0\alpha\beta}\). Then, like in Sec. V.4, one finds that \[D_{0\alpha\beta}\sim\frac{\mathsf{k}^{2}v_{Te}^{2}}{\omega^{2}}\,Ne, \tag{118}\] and \(\mathsf{k}v_{Te}\ll\omega\) in nonrelativistic plasma, because \(\omega/\mathsf{k}\gtrsim 1\). Also note that \(D_{\alpha\beta}^{\rm mEM}\sim\omega_{\rm p}^{2}\) and \(\omega_{\rm p}^{2}\sim Ne^{2}/m_{e}\). Then, the second term on the right-hand side of Eq. (117) scales relative to the first term as \[\left(\frac{\mathsf{k}v_{Te}}{\omega}\right)^{4}\frac{N^{2}e^{2}}{\omega_{\rm p}^{4}}\lesssim\frac{m_{e}^{2}}{e^{2}}. \tag{119}\] In other words, the frequency shift of an EM wave in plasma due to gravitational effects is of order \({\cal O}(m_{e}^{2}/e^{2})\), which is extremely small. Note that this conclusion holds even in the limit of vanishingly small plasma density, where GWs and EM waves have the same dispersion relation, \(k^{2}=0\). This is due to the fact that both the deviation from the linear resonance between these waves and their coupling (\(\sim D^{2}/\omega_{\rm p}^{2}\)) are equally proportional to \(N\), so these waves remain nonresonant even at \(N\to 0\). A similar procedure can be used to obtain the correction to the dispersion relation for the GWs with \(a_{\alpha}\) treated as a perturbation. Substituting \(k^{2}=0\) in Eq. (112) yields \[D_{\alpha\beta}^{\rm mEM}a^{\beta}=-D_{\alpha\beta\gamma}h^{\beta\gamma}, \tag{120}\] \[D_{\alpha\beta}^{\rm mEM}=-\omega_{\rm p}^{2}g_{\alpha\beta}+\sum_{s}e_{s}^{2}\int\frac{{\rm d}{\mathbf{p}}}{p^{0}}\,f_{s}(x,{\mathbf{p}})\,\frac{k_{\alpha}p_{\beta}}{k^{\mu}p_{\mu}}, \tag{121}\] where Eq. (40c) is used to obtain the expression for \(D_{\alpha\beta}^{\rm mEM}\). Let us denote the inverse of \(D_{\alpha\beta}^{\rm mEM}\) as \(D_{-1}^{\alpha\beta}\). Then, \[a^{\rho}=-D_{-1}^{\rho\alpha}D_{\alpha\beta\gamma}h^{\beta\gamma}. \tag{122}\] By substituting this into Eq.
(113), one obtains \[-k^{2}h_{\alpha\beta}+\bar{M}_{\alpha\beta}-4D_{-1}^{\mu\rho}D_{\rho\sigma\tau}h^{\sigma\tau}D_{\mu\alpha\beta}\\ +2g_{\alpha\beta}D_{-1}^{\mu\rho}D_{\rho\sigma\tau}h^{\sigma\tau}D_{\mu\gamma}{}^{\gamma}=0. \tag{123}\] The last two terms in Eq. (123) represent the correction caused by GW coupling with the EM field. Like in the case with EM waves, let us use that \(D_{\alpha\beta}^{\rm mEM}\sim\omega_{\rm p}^{2}\) and \(D_{\alpha\beta\gamma}\ll Ne\). Then, said correction is much less than \(\rho_{e}\), which is negligible compared with \(M_{\alpha\beta}\sim\rho_{i}\) [see Eq. (92), with \(\mathsf{k}^{2}\simeq\omega^{2}\)]. This means that EM effects in nonrelativistic nonmagnetized plasma have a much smaller (at least by the factor \(m_{e}/m_{i}\ll 1\)) effect on GWs than gravitational interactions with the plasma. Furthermore, as we have pointed out earlier, even the effect of the latter is negligible within the GO approximation. ## VII Conclusions In summary, here we explore the hybridization of linear gravitational waves with linear EM waves in nonmagnetized plasma. First, we derive the effective ("oscillation-center") Hamiltonian that governs the average dynamics of plasma particles in a prescribed quasimonochromatic wave that involves spacetime-metric perturbations and EM fields simultaneously. Then, using this Hamiltonian, we derive the backreaction of plasma particles on the wave itself and obtain gauge-invariant equations that describe the resulting self-consistent gravito-electromagnetic (GEM) waves in a plasma. In a sufficiently dense plasma, _transverse_ GEM modes consist of modes similar to the familiar transverse EM waves in plasma and gravitational waves in vacuum, respectively. Furthermore, the shift of the gravitational-wave frequency due to plasma is generally of the same order as diffraction caused by plasma's curving the background spacetime; therefore, it is beyond the accuracy of the geometrical-optics approximation. However, for _longitudinal_ GEM modes with large values of the refraction index, the interplay between gravitational and EM interactions in plasma can have a strong effect. In particular, the dispersion relation of the Jeans mode is significantly affected by electrostatic interactions. The approach used in this work can also be readily extended to magnetized plasma, an endeavour that we leave to future work. This material is based upon work supported by the National Science Foundation under grant No. PHY 1903130.
2307.14791
Automatic Parallelization of Software Network Functions
Software network functions (NFs) trade off flexibility and ease of deployment for an increased challenge of performance. The traditional way to increase NF performance is by distributing traffic to multiple CPU cores, but this poses a significant challenge: how to parallelize an NF without breaking its semantics? We propose Maestro, a tool that analyzes a sequential implementation of an NF and automatically generates an enhanced parallel version that carefully configures the NIC's Receive Side Scaling mechanism to distribute traffic across cores, while preserving semantics. When possible, Maestro orchestrates a shared-nothing architecture, with each core operating independently without shared memory coordination, maximizing performance. Otherwise, Maestro choreographs a fine-grained read-write locking mechanism that optimizes operation for typical Internet traffic. We parallelized 8 software NFs and show that they generally scale up linearly until bottlenecked by PCIe when using small packets or by 100Gbps line-rate with typical Internet traffic. Maestro further outperforms modern hardware-based transactional memory mechanisms, even for challenging parallel-unfriendly workloads.
Francisco Pereira, Fernando M. V. Ramos, Luis Pedrosa
2023-07-27T11:42:55Z
http://arxiv.org/abs/2307.14791v2
# Automatic Parallelization of Software Network Functions ###### Abstract Software network functions (NFs) trade off flexibility and ease of deployment for an increased challenge of performance. The traditional way to increase NF performance is by distributing traffic to multiple CPU cores, but this poses a significant challenge: _how to parallelize an NF without breaking its semantics?_ We propose Maestro, a tool that analyzes a sequential implementation of an NF and automatically generates an enhanced parallel version that carefully configures the NIC's Receive Side Scaling mechanism to distribute traffic across cores, while preserving semantics. When possible, Maestro orchestrates a shared-nothing architecture, with each core operating independently without shared memory coordination, maximizing performance. Otherwise, Maestro choreographs a fine-grained read-write locking mechanism that optimizes operation for typical Internet traffic. We parallelized 8 software NFs and show that they generally scale up linearly until bottlenecked by PCIe when using small packets or by 100 Gbps line-rate with typical Internet traffic. Maestro further outperforms modern hardware-based transactional memory mechanisms, even for challenging parallel-unfriendly workloads. INESC-ID, Instituto Superior Tecnico, University of Lisbon ## 1 Introduction With the transition of Network Functions (or NFs) from custom, fixed-function devices to software running on commodity hardware came a well-known performance challenge. As line-rates kept increasing, the networking community kept proposing new tools, techniques, and architectural enhancements to overcome individual bottlenecks. User-mode frameworks, like DPDK [39], bypass the kernel, avoiding costly context switches; DDIO [37] places incoming packets directly in the CPU cache as they arrive; and NICs implement Receive Side Scaling (RSS) [53] to consistently distribute traffic across multiple CPU cores using a configurable hash function. Despite this wealth of tools, the challenge of developing performant software at these time scales is considerable, typically requiring parallelization [29] and, with it, a deep knowledge of low-level architectural details such as cache-friendly allocation, cache-coherence-aware coordination, and a deep understanding of the RSS hashing mechanism. Although parallelization is paramount to achieving high performance, ensuring equivalence between parallel and sequential implementations is hard [21, 34, 47, 59, 62]. Thus, we argue that _developers need not shoulder the burden of fine-grained parallelization themselves_. Much like how developers typically do not write entire code-bases in assembly language, allowing a compiler to analyze their code, extract its functionality, and build an assembly implementation that is equivalent in semantics, we argue that the fine-scaled parallelization of NFs should follow a similar approach. Developers should implement sequential versions of their NFs, benefiting from the inherent simplicity of testing, debugging, and updating such systems, and when deploying to production they can "compile" the NF to obtain its parallelized version. There are two key insights supporting the solution for this challenge. Due to the increasingly pervasive use of NF frameworks amenable to symbolic execution [1, 36, 43, 49, 12, 16, 3, 70, 63], the first key insight is that this technique can be used to not only analyze the NF and infer how it maintains state, but also automatically generate modified versions of it.
The second key insight is that by knowing how the NF maintains its state, we can configure the RSS mechanism to send packets accessing the same state to the same core, aiming to minimize inter-core coordination in a parallel implementation, thus maximizing performance. With these key insights in mind, we propose **Maestro**, a tool that automatically analyzes a software NF and generates a new implementation that distributes the workload across multiple cores while preserving the semantics of the sequential implementation. This analysis builds a comprehensive symbolic model of how the NF stores and accesses state, and how that state is structured around flows. Flows (also called _flowspace_[47] and _scope_[21] in prior work) describe related packets--identified through packet header fields--that the NF logically tracks as an isolated unit. A firewall, for example, often tracks TCP/UDP flows, identified by the packet 5-tuple (source and destination IPs and ports and the IP protocol number), whereas a traffic monitor may identify flows by destination IP alone. As NFs typically store state on a per-flow basis [47, 64], Maestro learns how flows are defined in the NF by extracting the constraints that define how packets access state. We then use a solver to find an RSS configuration that distributes traffic across multiple CPU cores, in such a way as to minimize costly inter-core coordination. Our tool then automatically generates a new implementation of the NF that parallelizes its operation accordingly. When possible, Maestro generates an implementation based on a _shared-nothing architecture_, wherein RSS is configured to forward packets of the same flow to the same CPU core, completely eliminating any inter-core coordination. When the NF is not compatible with such a model, Maestro can still generate a parallel implementation where cores share state but accesses to that state are coordinated by a read-write locking mechanism that, while not as performant as a shared-nothing architecture, can still perform well under typical (zipfian) Internet traffic. Maestro draws inspiration from prior work in NF analysis [47] and verification [70, 71], as well as the wisdom of a wide body of research on NF performance [24, 29, 43, 57]. We also use the lessons learned by many before us that address the challenges of _manually_ parallelizing NFs, including NUMA considerations [28], configuring RSS for symmetric flow handling [68], and rebalancing load with skew [7]. We evaluate the performance of Maestro by parallelizing 8 DPDK NFs. Our experimental evaluation shows that NFs that can be parallelized using the shared-nothing architecture scale linearly with the number of cores used until bottlenecked by PCIe when using small packets or by 100 Gbps line-rate with typical Internet traffic [11]. The remaining NFs that require read-write locks to maintain their semantics vary their performance with the workload. High-churn traffic--where most packets establish a new flow--requires more writing to shared state, degrading performance. Fortunately, the majority of packets in typical Internet traffic belong to a minority of flows [11], requiring less state writing and allowing more concurrency. Under this read-heavy traffic, Maestro's lock-based parallel NFs perform comparably to a shared-nothing model. Notably, when Maestro had to resort to locking, equivalent versions of the NFs that use hardware transactional memory [51] (TM) to preserve semantics were unable to outperform our optimized locks, as we show in §6.3.
We also show that NFs automatically parallelized by Maestro rival the performance of ones manually parallelized using VPP [6]. In §2, we describe the inherent challenge of parallelizing NFs, to better motivate our work. We subsequently present the main contributions of our work, describing the Maestro architecture in §3 and several key optimizations in §4. In §5 we discuss Maestro's inherent limitations. In §6, we evaluate Maestro and the performance of the parallel NFs it generates. Finally, we describe related work in §7 and conclude with final thoughts in §8. ## 2 Why Parallelization is Hard Ideally, one would parallelize an NF by spinning up individual instances per core, each running independently, and using the NIC to evenly distribute traffic among them. NFs, however, typically store state that persists across packets. Sharing this state among cores requires coordinating access to it, but minimizing this coordination is crucial to achieving high performance. Parallel implementations that require no state sharing among their instances (and therefore no synchronization) are called _shared-nothing_. Building a shared-nothing implementation of a stateful NF requires carefully configuring the NIC to distribute traffic to each core in a way that aligns with how state is structured in the NF. With such a mechanism, state is _sharded_ across cores and packets accessing the same state always find themselves on the same core. The NIC can perform this traffic distribution in hardware using the Receive-Side Scaling (RSS) mechanism [53]. This mechanism hashes packet headers using a user-defined set of fields and a hash key. The computed hash is subsequently used to direct traffic to different queues which can deliver the packets to different cores. To send, for example, packets of the same TCP flow to the same core, one would configure RSS to hash the source and destination IP addresses, the TCP/UDP ports, and the IP protocol number (_i.e._ the 5-tuple), ensuring that any two packets with the same 5-tuple will have the same hash and will end up on the same core. This leads us to the traditional method for building parallel shared-nothing NFs: first, developers shard state in the NF, building a full understanding of how state is accessed under all circumstances. They then use this sharding solution to construct an RSS configuration that distributes traffic accordingly. This approach, however, poses three big challenges: **1. Finding the right sharding solutions is hard.** Though some NFs simply shard on the 5-tuple, many others require a more careful approach. One common use case involves symmetrical access to state based on the 5-tuple so that incoming traffic--that has the source and destination swapped--accesses the same state as outgoing traffic [68]. Other NFs require a more coarse-grained partitioning: some policers and traffic monitors only use the destination addresses to index state, connection limiters may only use source addresses, and network address translators (NATs) will typically shard on the WAN's server address and port (as all the other addresses and ports are translated). Simply sharding on the 5-tuple here would require expensive coordination (_e.g._ locks), as cores are unable to act independently. Arriving at sharding solutions is harder than generically using locks each time state is accessed. The developer needs intricate knowledge of the NF's semantics and internals, particularly around how state is kept and manipulated.
This thought process must not only take place upon initial implementation, but also as the NF code evolves over time. Augmenting a firewall with a connection limiter feature renders the previously configured 5-tuple sharding obsolete, requiring a complete rethink of how it should be sharded. **2. Finding the right RSS configuration is hard.** Even if we take the sharding solution for granted, configuring RSS accordingly is difficult. For trivial cases, this is just a matter of selecting the right fields to hash, but more complex scenarios can require carefully crafting the RSS key. Such an approach was used in [68] to handle symmetrical TCP/UDP flows, but manually tracking the sharding constraints and finding internal symmetries in the hash key that pair with those constraints quickly becomes unmanageable. For NFs with other sharding requirements, the problem becomes even harder. Not all sets of fields are supported by NICs [40, 41], requiring a specific RSS key that cancels out some bits to circumvent this limitation. One might even require symmetry between different interfaces (when incoming and outgoing traffic use different NICs), which requires a separate but interrelated configuration and key for each NIC. More complex NFs can shard state in ways that do not neatly fit into any common case, requiring a custom formulation which, as before, may need to be completely rethought from scratch should the NF change over time. Some cases are outright infeasible, due to inherent NIC limitations, at which point a well-placed warning could help guide developers towards better solutions. **3. Writing performant parallel code is hard.** Even if a developer correctly shards the NF and properly configures RSS to achieve a valid shared-nothing solution, they can still be leaving performance on the table. Though shared-nothing goes a long way towards ensuring good performance, many more minute details play a further role in parallel code. Packet buffers and state must now be cache-aligned to avoid false cache-line sharing. Memory allocation must be NUMA-aware to avoid slower remote accesses across the QPI bus. Even exogenous factors like traffic skew must now be considered [7] to fully realize the potential of a parallel implementation. Getting any of these issues wrong can stand in the way of performance, correctness, or both, but all of them are ultimately amenable to automation. Our tool--Maestro--tackles the first challenge by analyzing how the NF keeps its state and finding the constraints that packets that need to be sent to the same core must satisfy. It further tackles the second challenge by formulating an SMT problem and using a solver to find the right RSS keys that satisfy the sharding requirements. Finally, Maestro addresses the third challenge by automatically generating a parallel implementation that is semantically equivalent to its sequential counterpart. The generated code fully handles NIC initialization and RSS configuration, cache-alignment, load-balancing, and NUMA considerations. Even when a shared-nothing approach is not possible, Maestro can still help by generating an optimized lock-based parallel implementation that uses carefully crafted read-write locks to minimize inter-core coordination with typical Internet power-law traffic. ## 3 Maestro Architecture Maestro uses symbolic analysis to extract information on how the NF maintains state, and with it infer possible dependencies between parallel instances.
This analysis is crucial to achieve synchronization-free parallelization that shards state by carefully splitting traffic among cores. How this careful orchestration of packets can be used to avoid synchronization among parallel instances is better explained via an example. ### Parallelizing a firewall Consider a firewall NF connecting a LAN and a WAN that only forwards packets from the WAN that correspond to flows started in the LAN. To keep track of ongoing flows, it stores flow information in a map. Packets from the WAN look up flow information symmetrically relative to packets from the LAN, naturally swapping source and destination fields. Note that not all packets need access to all entries in the map: only the ones belonging to the packet's flow. As such, in a parallel execution, making sure that _packets of the same flow are sent to the same core_, conjoined with the fact that packets on the same core are processed sequentially, allows us to parallelize this firewall without any synchronization between its instances--a _shared-nothing_ architecture. This orchestration of packets from the same flow to the same core requires a specific RSS configuration. Not only must we send LAN packets of the same flow to the same core, but also their (symmetric) WAN responses. A configuration partially fulfilling these requirements was already found by Woo and Park [68]1. By adapting their configuration to the firewall's needs, we ensure that every packet that needs access to the same memory region is sent to the same core. Footnote 1: Woo and Park’s solution considers only a single RSS configuration, whereas our firewall deals with two ports (LAN and WAN), each requiring independent configurations. Although their findings are transposable to this scenario, it still requires expertise from the developers. ### Generalizing NF parallelization The above parallelization process is well tailored for our firewall, but different NFs keep state in different ways, and thus require different sharding solutions. Moreover, when access to specific state precludes flow-sharding, synchronization is necessary to maintain semantics. Maestro deals with this parallelization process automatically by using the architecture shown in Figure 1. Maestro starts by analyzing the NF using Exhaustive Symbolic Execution (ESE) [17, 43, 70] to retrieve a sound and complete model of its behavior. Then, it hands the model over to a three-stage pipeline: (1) the Constraints Generator, which uses this model to analyze how the NF keeps its state and arrive at a sharding solution; then (2) the RSS Configuration Generator, which uses a solver to find an RSS configuration that steers packets following the sharding rules found by the previous stage to the same core; and finally (3) the Code Generator, which generates a parallel implementation of the original NF that configures the RSS accordingly and adds additional synchronization mechanisms if needed.
Figure 1: Maestro’s architecture.
### Extracting the NF's model Maestro uses ESE to extract the complete NF's model. This allows us to not only analyze how the NF maintains its state, but also generate modified versions of its implementation. The extracted model is an execution tree containing all the possible code execution paths a packet can trigger. Each node on this graph is either conditional (representing a branch condition), a stateful operation (representing a call to a stateful data structure, _e.g._ a map or a vector), or a packet operation (_e.g._, forwarding, dropping, etc.).
Both the packet and stateful data are traced as symbols, and every node contains a list of constraints on these symbols that can be given to a solver to query their possible values under any code path. ### Finding the sharding solution The NF model is passed to the Constraints Generator, which is tasked with finding a sharding solution that allows shared-nothing parallelization. The idea is to find the constraints that hold true between packets that access the same state, _i.e._ packets that must be processed on the same core. This is intrinsically tied to how the NF maintains state. For example, in a map, two operations access the same state only if they use the same _key_. By symbolically tracking how such keys are derived from packets, we reason about the constraints on packets that access common state. **Building a stateful report.** The Constraints Generator starts by analyzing the NF's model and builds a stateful report (SR) of all the performed stateful operations. Each SR entry specifies the operation's name (_e.g._ map_put), object instance, and other relevant arguments (_e.g._ the key used), and all the possible constraints on both the received packet and other stateful data when the operation was performed (_e.g._ map_put was called when a UDP packet arrived from interface 0). **Filtering entries.** After building the SR, the Constraints Generator removes all entries related to read-only objects (_e.g._ routing tables that are filled on start-up and never updated). Such read-only accesses to shared state do not require coordination among cores and need not be reasoned about. Should all accesses be read-only, the SR will be left empty and Maestro asks the Code Generator to generate a parallel implementation that uses RSS with the sole purpose of load-balancing traffic among cores (we explain the RSS mechanism in §3.5). **Analyzing the entries.** The use of any data structure can potentially preclude a shared-nothing approach, and therefore we need to infer the conditions under which it is safe to perform stateful operations concurrently for each of them (or if no such conditions exist). We present the analysis for one of the most predominant data structures: the map [70, 2, 47]. The map stores data indexed by a key. This data can be accessed via the function map_get, and modified with map_put. Two map calls access the same memory region if and only if they are given the same key. For a shared-nothing approach, packets that trigger map calls to the same instance using the same key need to be steered to the same core. This alone is, however, insufficient: we need to not only take into consideration any RSS limitations, but also reason about the use of multiple different map instances (or other data structures), each independently tied to the previous requirement. With this in mind, we designed a set of rules to guide Maestro towards finding correct shared-nothing sharding solutions: **R1**: _Key equality._ The most obvious case is when two packets access the same map instance using the same key. In this case, the Constraints Generator builds the constraint from the formulas for the keys (1 in Figure 2). **R2**: _Subsumption._ If a map instance is accessed using a subset of the packet fields used to access a second instance, then the subset takes precedence over its larger counterpart. That is, the coarser-grained requirement wins over the finer-grained one.
This is exemplified in scenario 1 in Figure 2: sending packets with the same source address to the same core will also guarantee that packets with the same 5-tuple are sent to the same core. More generally, we can always use a subset of the required packet fields. As we will see later, this rule can act further in concert with others to resolve incompatibilities. **R3**: _Disjoint dependencies._ Accesses using disjoint sets of packet fields are problematic. For example, an NF that keeps a pair of independent counters, one for source addresses and other for destination addresses, requires packets with the same source address _or_ the same destination address to be sent to the same core. Due to limitations in the RSS mechanism, this is not possible, and so Maestro warns the user and provides the fundamental reason why the shared-nothing approach cannot be applied (1 in Figure 2). **R4**: _Incompatible dependencies._ RSS uses packet fields to steer packets to cores. This means that using keys containing (1) incompatible RSS packet fields or (2) no packet fields at all will completely block our attempt at correctly steering packets to cores. This is the case, for example, of NFs which index data with constant keys (as exemplified in case 1 of Figure 2). Again, in this case, Maestro provides feedback to the user as to why the shared-nothing approach is unfeasible. **R5**: _Interchangeable constraints._ We define a pair of constraints as _interchangeable_ if they trigger the same NF behavior. This allows us to completely replace constraints matching rules R3 or R4 with others that, if interchangeable, do not prohibit shared-nothing parallelization. Example 1 of Figure 2 showcases this scenario: although the NF stores source addresses using an RSS-incompatible dependency (source MAC), the Constraints Generator finds that the NF's behavior is _exactly the same_ whether we shard on the MAC address or the destination IP address. In this case, these constraints are interchangeable, which allows Maestro to shard on either of them. Because the first constraint uses an incompatible RSS field (a MAC address), the Constraints Generator opts for using the second one (the destination IP) for sharding.
Figure 2: Example outputs of the Constraints Generator.
These rules allow Maestro to correctly find sharding solutions for a wide range of NFs (as we show in §6). Note that only R1 is specific to data structures that use a key to index state (_e.g._ maps, vectors, sketches). R2, R3, R4, and R5 are otherwise data structure agnostic, and Maestro applies them to all entries, regardless of their specific data structure. Though much of this analysis focuses on maps, it can serve as a building block for others (_e.g._ vectors, sketches, token buckets). Moreover, we need only reason about these details _once_ per data-structure (or, at most, each time a breaking change is made). Once data-structure developers encode such properties into Maestro, NF developers can freely use these stateful data structures to build their NFs. Even when Maestro fails to find a shared-nothing solution, it still provides the developer the fundamental reason why (_e.g._ constant keys or non-packet dependencies). When met with this result, the developer is faced with a decision: either use this feedback to tweak the NF implementation so that it becomes amenable to shared-nothing parallelism, or request a lock-based implementation from Maestro.
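To make R1 concrete, the sketch below shows the kind of keyed map interface this analysis assumes; the names are illustrative rather than Maestro's actual API. Two calls on the same instance alias exactly when their keys are equal, so the code that derives the key from the packet is all the Constraints Generator needs to inspect.

```c
#include <stdint.h>

struct map; /* opaque; under shared-nothing, one instance per core */

/* R1 (key equality): two calls on the same instance touch the same
 * state iff their keys compare equal. */
int  map_get(struct map *m, const void *key, int *value_out);
void map_put(struct map *m, const void *key, int value);

/* Firewall-style key derived from packet fields: any two packets that
 * produce equal flow_keys must be steered to the same core. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};
```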
**Generating the constraints.** The next step in the Maestro pipeline is to generate the actual constraints, _i.e._, the conditions that, if satisfied by a pair of packets, dictate that they must be sent to the same core. Towards this end, Maestro iterates over each pair of report entries of the same state instances, creating SMT formulas stating that both keys must be equal, and joining them all together with logical _ANDs_. Finally, we note that RSS must be independently configured on each interface. As such, the constraints generated by Maestro are interface-specific, reasoning about pairs of packets which may arrive from separate interfaces. Case 1 from Figure 2 exemplifies this. It requires LAN packets to be sent to the same core as packets from the WAN if the source address of the former equals the destination address of the latter. Figure 3 shows the constraints found by the Constraints Generator when analyzing our firewall example. It finds that LAN packets with the same addresses and ports must be sent to the same core, and similarly for WAN packets. It also finds that WAN and LAN packets must be sent to the same core if they have the same, but swapped, sources and destinations.
Figure 3: From the firewall’s SR to its sharding constraints.
### Finding the right RSS configuration The previous stage tackled the challenge of finding a shared-nothing sharding solution, producing constraints between packets that when true require the packets to be processed on the same core. We now focus on materializing this sharding solution by automatically finding RSS configurations that satisfy these constraints. RSS is a hardware mechanism in the NIC that steers packets to core-specific queues. Once configured with an RSS key and a set of packet fields, it extracts from incoming packets the values of those fields and feeds them to a Toeplitz-based hash function [54] (depicted in Figure 4). The hash is used to index an indirection table containing queue identifiers, and the packet is inserted in the corresponding queue. Two packets with the same hash will be sent to the same core.
Figure 4: Toeplitz-based hash function.
Given the configurability of the RSS hashing function, we use this to ensure that packets that need to be processed on the same core will have the same hash. For simple constraints we can arrive at a satisfying RSS configuration solely by correctly choosing the packet field set (_e.g._, hashing only source and destination IPs and ports when requiring TCP packets with the same 5-tuple to be sent to the same core). However, what if (1) the NF requires a subset of packet fields that can only be used as a group in the RSS mechanism (_e.g._, a traffic monitor that shards solely the destination IP), (2) it requires complex constraints between packets (_e.g._, a Hierarchical Heavy Hitter sharding on multiple subnets of the source IP and/or source ports), or (3) there are constraints between packets arriving in different interfaces (which is the case for many NFs requiring both LAN and WAN interfaces, as in NATs, Firewalls, Connection Limiters, _etc._)? To address these scenarios in a generalized way, we built RS3, a C library capable of taking constraints as inputs and outputting RSS configurations that satisfy them. It uses the Z3 solver [22] to find suitable configurations by encoding the problem in a logical format. Maestro uses RS3 to generate RSS configurations that satisfy the constraints given by the Constraints Generator module.
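To fix ideas, the symmetric LAN/WAN constraint of Figure 3, which is the kind of input RS3 receives, can be rendered as an executable predicate over two packets (the struct and field names below are ours, chosen for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* The RSS-hashable fields of one packet. */
struct pkt_fields {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* C_LAN,WAN(d, d'): true iff the WAN packet is the swapped (response)
 * counterpart of the LAN packet's flow, in which case both packets
 * must be steered to the same core. */
static bool must_share_core(const struct pkt_fields *d_lan,
                            const struct pkt_fields *d_wan)
{
    return d_lan->src_ip == d_wan->dst_ip &&
           d_lan->dst_ip == d_wan->src_ip &&
           d_lan->src_port == d_wan->dst_port &&
           d_lan->dst_port == d_wan->src_port;
}
```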
**Building the statement.** The query given to the solver needs to encode the following problem: _given set of constraints, find RSS keys that generate the same hash for every pair of packets that satisfy them_. To build this statement, we need to encode both the hash function and the constraints into an SMT format. Let \(k\) be a 52-byte2 RSS key, \(d\) and \(d^{\prime}\) hash inputs for each of the packets (whose sizes depend on the extracted packet fields, _e.g._ 12 bytes for source and destination IPs and ports), and \(h(k,d)\) the 32-bit hash. Also, let \(|k|\geq|d|+|h|\), \(H(k,k^{\prime},d,d^{\prime})\) be true iff \(h(k,d)=h(k^{\prime},d^{\prime})\), and \(C(d,d^{\prime})\) be the constraint between \(d\) and \(d^{\prime}\) provided by the Constraints Generator. Footnote 2: Value for the Intel E810 100G NIC [40], but trivially adjustable in RS3. **Hash function.** As shown in Figure 4, \(H(k,k^{\prime},d,d^{\prime})\) can be represented as: \[\bigwedge_{b=0}^{|h|-1}\left\{\bigoplus_{x=0}^{|d|-1}(d[x]\wedge k[x+b])=\bigoplus_{y=0}^{|d^{\prime}|-1}(d^{\prime}[y]\wedge k^{\prime}[y+b])\right\} \tag{1}\] **Base statement.** Initially, let us encode the following query: _find a single key \(k\) such that, given any two hash inputs \(d\) and \(d^{\prime}\) that obey the constraints \(C\), their corresponding hashes will always be equal._ That is: \[\forall_{d,d^{\prime}}\cdot k\neq 0\wedge\left[(C(d,d^{\prime})\wedge d\neq d^{\prime})\to H(k,k,d,d^{\prime})\right] \tag{2}\] Having the key be 0 would always output 0-valued hashes, steering all packets to a single core, so we prevent the key from taking that value. **Compatibility with multiple keys.** Each interface can have its RSS mechanism individually configured. With that in mind, let \(C_{ij}(d,d^{\prime})\) be the constraint between a pair of packets coming from ports \(i\) and \(j\), configured with the keys \(k_{i}\) and \(k_{j}\) respectively. Note that \(C_{ij}=C_{ji}\), therefore it is enough to consider, for example, all the constraints \(C_{ij:\{j\leq i\}}\). For Equation (2) to be multi-key aware, we simply conjunct the constraints across all \(i\) and \(j\), allowing the solver to manage each key combination problem as a specific statement that must be true. That is, for \(n\) ports: \[\forall_{d,d^{\prime}}\cdot\bigwedge_{i=1}^{n}\bigwedge_{j=1}^{i}\left[(C_{ij}(d,d^{\prime})\wedge d\neq d^{\prime})\to H(k_{i},k_{j},d,d^{\prime})\right] \tag{3}\] **Compatibility with varying sets of RSS packet fields.** Just as different ports may need distinct RSS keys, we may also need to configure RSS to use different sets of packet fields depending on the interface. One way to address this would be to consider hash inputs \(d_{0},...,d_{n-1}\) for \(n\) interfaces. This, however, greatly increases the complexity of the query3. Another way to look at it would be to extend the hash inputs to include the union of both field-sets and to deal with any unused bits. To make the statement in Equation (3) consider constraints between packets arriving at different ports with different RSS packet field options, we again add more clauses to our large conjunction, now considering all relevant RSS field sets, all while extracting for each one the required least significant bits of \(d\) and \(d^{\prime}\) accordingly. Footnote 3: For \(n\) interfaces, and thus considering \(d_{0},d_{0}^{\prime},...,d_{n-1},d_{n-1}^{\prime}\), with 96-bit hash inputs we would have to deal with \(2\times 96\times n\) free bits.
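For reference, the hash \(h(k,d)\) encoded in Equation (1) is the standard Toeplitz construction of Figure 4, which can be written directly in C as below (a minimal sketch; it assumes the key is at least \(|d|+4\) bytes long and uses the MSB-first bit order of the RSS specification):

```c
#include <stddef.h>
#include <stdint.h>

/* Toeplitz hash over n bytes of input, per Equation (1) / Figure 4.
 * Assumes the key is at least n + 4 bytes long. */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *d, size_t n)
{
    /* 32-bit window over the key, initially its first 32 bits. */
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                      ((uint32_t)key[2] << 8) | (uint32_t)key[3];
    uint32_t hash = 0;
    size_t next = 32; /* index of the next key bit to shift in */

    for (size_t i = 0; i < n; i++) {
        for (int b = 7; b >= 0; b--) {
            if (d[i] & (1u << b))
                hash ^= window; /* input bit set: mix in current window */
            window <<= 1;       /* slide the key window by one bit */
            window |= (key[next / 8] >> (7 - next % 8)) & 1u;
            next++;
        }
    }
    return hash;
}
```

Two packets whose extracted fields hash to the same 32-bit value index the same indirection-table entry and therefore reach the same queue, which is exactly the property the solver is asked to enforce.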
When given the constraints of our firewall, RS3 outputs two RSS keys, one for each NIC interface. The symmetry between the keys resembles the findings in [68], but generalized to two interfaces, rather than just one.

### Code Generator

This stage takes the generated RSS configuration, as well as the NF's model, and outputs a parallel implementation of the original NF. Because the model is a sound and complete representation of the original NF, it can be used to generate an implementation identical in functionality to the original one. More importantly, it can be modified to employ shared-nothing parallelism by (1) configuring RSS, (2) allocating state independently on each core, (3) making sure that each stateful call uses the data structure instances of that particular core, and (4) launching the NF on multiple cores.

**Parallel implementation with locking mechanisms.** When Maestro rules out a shared-nothing solution, it can fall back to generating parallel implementations that use locking mechanisms. In this scenario, it configures RSS with both a random key and all the available RSS-compatible packet fields, as now all cores share the same state. Maestro also needs to carefully coordinate access to shared data using read/write locks. As such, we distinguish read-packets from write-packets: the former trigger only stateful read operations, and the latter trigger at least one write. To efficiently handle this scenario, we created a custom, highly optimized read/write lock implementation that entirely avoids cache-line sharing when acquiring read locks (a code sketch of this lock appears in SS4, after the discussion of rejuvenation). We do this with a series of per-core, cache-aligned, atomic spin-locks that indicate whether the core has permission to proceed. Acquiring a read lock requires just locking the current core's lock. To perform a write, however, one must lock all core-specific locks (in order, to avoid deadlocks). With this in place, we speculatively process all packets as read-only until they attempt to perform a write operation, at which point we stop processing, release the local lock, acquire all core-specific locks, and restart processing the packet from the beginning. The performance toll is minimized when an NF is subjected to read-heavy workloads (see SS6.3), as read-only packets need only acquire a core-specific cache-aligned lock, and have no need to atomically write to any shared variable, or write to shared data. As all write-packets start out as read-packets before backtracking, starvation is not an issue.

Figure 4: Toeplitz-based hash function.

## 4 Implementation challenges

**Finding good RSS keys.** The first set of keys found by the solver is often not ideal. If, for example, the solver finds a key with all but the first bit set to zero, the hash, though semantically valid, will only ever be 0x0 or 0x80000000. This leads to packets being sent to only two cores. The solution employed by RS3 involves setting as many key bits as possible to 1, so long as the keys still satisfy the given statement. This is known as a Partial MAXSAT problem [18]. We give the solver a statement that its corresponding solutions should always satisfy--Equation (3), the hard constraints--and also a set of clauses that they should try to satisfy--the soft constraints. The soft constraints correspond to a chain of logical _ANDs_ setting each key bit to 1. There is no need to maximize the number of satisfied soft constraints.
Most of the time, a randomly selected set of bits with the value 1 is enough to avoid corner-case problems like the one mentioned above. As such, Maestro uses a slightly modified version of the diagnosis-based approach introduced by Fu and Malik [32]. It begins by seeding the key with random bits. Then, if the combined hard and soft constraints are not satisfiable, we get the UNSAT core from the solver and randomly discard a subset of these soft constraints, repeating as necessary until either a key is found or no further soft constraints are left, indicating that no such key exists. Due to the randomized nature of this algorithm, we use multiple parallel solvers to independently find keys until one is found with an acceptable workload distribution.

**NUMA considerations.** In a NUMA environment, each possible combination of NIC, memory, and CPU pinning influences throughput. Our machines (see SS6) have 100 Gbps NICs with 2 interfaces, so both interfaces sit on the same NUMA node. Under these circumstances, pinning the packet buffers to the same NUMA node as the NIC is optimal [28]. Another important consideration is that the dominant contention factor in parallel packet processing applications is the cache, specifically for Intel Data Direct I/O (DDIO) resources [52, 24]. Using DDIO, the packets coming from the NIC are directly placed in the last level cache (LLC) of the NUMA node. Contention happens when the number of concurrent packets exceeds the available reserved space for I/O in the LLC, at which point packets evict each other and performance suffers. Maestro allocates packet buffers close to the NIC, but keeps state local to each core's NUMA node. Deciding where to run each thread is, however, a deployment challenge, not an implementation one, and therefore out of scope for Maestro. Nevertheless, our experience has taught us a simple rule of thumb: if the LLC is large enough to hold all packet buffers at line-rate, then we should pin both the CPU and memory to the same NUMA node as the NIC. If, however, the LLC is too small, resulting in contention--as occurs with older processors--then it's better to distribute cores evenly across NUMA nodes, thus increasing the total available LLC. Though we have seen scenarios where using multiple NUMA nodes was best, in our testbed the LLC proved sufficiently large to justify using a single NUMA node, and all our experiments in this paper follow this guideline.

**Traffic skew.** The expression "mice and elephants" is typically used to describe packet flow distributions on the Internet [50, 11, 35]. These follow a zipfian distribution, where a large fraction of packets relate to but a few flows, and the remaining ones share a small slice of traffic. While traffic with a uniform distribution leads to packets being uniformly distributed to cores, traffic following a zipfian distribution can overload a subset of cores, causing _skew_. This performance difference is shown in Figure 5, which demonstrates how the parallel firewall throughput varies with the traffic distribution. The zipfian traffic was generated with parameters from [57], which were found by analyzing a real-world traffic sample from a university network in [11]. This generated traffic has 50k packets and 1k flows, 48 of which are responsible for 80% of the traffic. RSS was configured with five different random keys and the error bars represent the min/max performance.
Performance is influenced by both the RSS key and the indirection table, as more hash collisions cause more packets to be sent to the same core. Under uniform traffic, the indirection table's entries are expected to be equally accessed, and thus uniformly filling it leads to evenly spreading packets across cores. With zipfian traffic, however, the higher density of certain flows leads to more accesses to some entries, overloading some cores. Note that when using a single core we see better performance under zipfian traffic due to an increased cache hit-rate when accessing state [57], though the effect is less prominent when more cores are used. RSS++ [7] fixes the distribution problem imposed by zipfian traffic by dynamically adjusting the indirection table according to the traffic. It balances the indirection table by swapping entries associated with overloaded cores for ones associated with underloaded ones. We incorporate this balancing mechanism in Maestro.

**State sharding.** When applying shared-nothing parallelization, Maestro not only allocates each data structure instance on each core, but further adjusts each data structure's capacity, keeping the total amount of memory used across all cores approximately constant by reducing the per-core amount. This raises an interesting question about the semantics of filling up state in a shared-nothing parallel version of an NF, which slightly differs from the sequential or lock-based parallel versions. As each core now has a reduced capacity, it is possible to exhaust the capacity of one core despite there being spare room in others. Ultimately, when a core becomes "full", it will behave in the same way locally as the sequential NF would globally (_e.g._ by dropping packets from new flows). As the RSS++ mechanism redistributes flows across cores to counteract traffic skew, this also affects state distribution, making it harder to exhaust any one core. This state sharding has the desirable side-effect of optimizing the NF's cache utilization. If each core has a smaller working set, more of it will fit in the local L1+L2 data caches. This provides an extra performance advantage to the shared-nothing approach on top of that of parallelization on its own.

**Lock-based rejuvenation.** When following a read/write lock-based parallelization approach, flow rejuvenation can be a challenge. As simply reading state requires updating the flow entry aging data, a naive implementation would require a write lock for all packets, with dire consequences for performance. Maestro circumvents this issue by implementing an optimized rejuvenation algorithm that, in the common case, operates locally on each core. We first modify the data structures to hold multiple cache-aligned copies of the entry aging data, one per core. Each core then manages state aging locally for each entry, allowing the age of the entries to deviate from core to core as packets from the same flow arrive at different cores at different times. When eventually one core believes it should expire an entry, only then does it acquire a write lock. At this point, the core inspects the aging data for that entry on all cores. If the flow indeed expired on all cores, it is cleared out globally. If, however, another core is found where the entry has not yet expired, the local timestamp is re-synced with the newest one. Ultimately, if packets from the same flow regularly hit all cores, no write-locks are ever needed.
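To ground the two mechanisms above--the per-core read/write lock described in the Code Generator stage and the rejuvenation scheme just discussed--here is a minimal, self-contained C11 sketch. It is our illustration of the stated design, not Maestro's actual implementation; the core count, structure names, and timestamp representation are assumptions.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_CORES 16
#define CACHE_LINE 64

/* One spin-lock per core, each on its own cache line so read-lock
 * acquisitions never bounce a shared line between cores. */
struct core_lock { _Alignas(CACHE_LINE) atomic_flag busy; };

static struct core_lock locks[MAX_CORES];

static void locks_init(int ncores) {
    for (int c = 0; c < ncores; c++)
        atomic_flag_clear(&locks[c].busy);
}

/* Readers touch only their own core's flag. */
static void read_lock(int core) {
    while (atomic_flag_test_and_set_explicit(&locks[core].busy,
                                             memory_order_acquire))
        ; /* spin */
}
static void read_unlock(int core) {
    atomic_flag_clear_explicit(&locks[core].busy, memory_order_release);
}

/* Writers take every flag, always in core order to avoid deadlock. */
static void write_lock(int ncores) {
    for (int c = 0; c < ncores; c++)
        while (atomic_flag_test_and_set_explicit(&locks[c].busy,
                                                 memory_order_acquire))
            ; /* spin */
}
static void write_unlock(int ncores) {
    for (int c = 0; c < ncores; c++)
        atomic_flag_clear_explicit(&locks[c].busy, memory_order_release);
}

/* Per-core aging data for one flow entry: the fast path writes only the
 * local, cache-aligned slot (under the read lock already held). */
struct flow_age {
    struct { _Alignas(CACHE_LINE) uint64_t last_seen; } slot[MAX_CORES];
};

static void rejuvenate(struct flow_age *a, int core, uint64_t now) {
    a->slot[core].last_seen = now;
}

/* Called under write_lock() when `core` believes the entry expired.
 * Returns true if it expired on all cores (clear it globally); otherwise
 * re-syncs the local timestamp with the newest one seen. */
static bool try_expire(struct flow_age *a, int ncores, int core,
                       uint64_t now, uint64_t timeout) {
    uint64_t newest = 0;
    for (int c = 0; c < ncores; c++)
        if (a->slot[c].last_seen > newest)
            newest = a->slot[c].last_seen;
    if (now - newest > timeout)
        return true;
    a->slot[core].last_seen = newest;
    return false;
}
```

Note how, exactly as described above, a flow whose packets keep hitting all cores never triggers `try_expire` and therefore never needs a write lock.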
**Implementation.** Maestro uses the KLEE symbolic execution engine, extending it with 14,859 lines of C++ code. We also implemented RS3 in 3,964 lines of C code, independently from Maestro. We make them openly available at [5].

## 5 Assumptions and limitations

**NF limitations.** To allow ESE, NFs must fit within some limitations, much like the ones enumerated in [42]: i) there must be a clean separation between stateful and stateless operations, a constraint put in practice by only allowing state to persist within a set of well-defined data structures; ii) loops must be statically bounded; and iii) no pointer arithmetic is allowed outside the data structures. These constraints are already enforced for safety reasons in commonly used packet processing frameworks like eBPF4 [3], widely adopted in both academia and industry [1, 12, 16, 49, 63].

Footnote 4: NFs developed in eBPF store their state in kernel-maintained maps [2].

**RSS limitations.** For Maestro to consider hash functions other than the standard Toeplitz-based one, these would have to be formulated as SMT problems and added to RS3. This requires having their implementations openly disclosed. In practice, a more limiting factor is packet field selection: shared-nothing approaches can only be applied if state is sharded using RSS-compatible packet fields. DPDK's API reference [38] includes all possible field combinations that RSS can use (_e.g._ IPv4/IPv6 TCP/UDP flow tuples), but each NIC only implements a subset of them [40, 41].

**Attacking state sharding.** We mentioned earlier that it would be possible to "fill up" a single core with fewer flows in a shared-nothing parallel NF than would otherwise be needed in the sequential or lock-based parallel versions. This could potentially be used as a DoS attack vector, reducing the cost for an attacker to block new flows from being admitted. RSS++ flow redistribution addresses this for well-behaved traffic, but an attacker can subvert this by specifically using flows that induce exact RSS hash collisions. Colliding flows end up on the same entry within the RSS indirection table and thus cannot be split apart. Though out of scope for this paper, Maestro provides some defense from such attacks due to the randomization used to generate RSS keys. Even assuming the attacker has access to the NF source code and understands how it can be sharded across cores, different random RSS keys that comply with the sharding constraints will still distribute different flows in a different way. Without access to the actual key generated in RS3, the attacker would have a harder time reverse-engineering a set of co-located flows, mitigating their ability to induce the kind of persistent skew needed in a successful attack.

## 6 Evaluation

In this section, we evaluate Maestro and the implementations it generates, aiming to answer four questions: (i) how long does it take Maestro to parallelize NFs? (ii) how well does the performance of these parallel implementations scale with the number of cores? (iii) what are the impacts on performance of the various parallelization strategies that Maestro can use? and (iv) how do Maestro's automatic parallel implementations fare against highly-optimized manually parallelized versions?

### Target NFs and Microbenchmarks

To evaluate Maestro we analyzed eight NFs--a simple forwarder (NOP), a policer, a bridge, a firewall (FW), a port scan detector (PSD), a NAT, a load-balancer (LB), and a connection limiter (CL).
These are open-source NFs, most are non-trivial in complexity, and all have been used by a body of previous work [42, 43, 70]. In this section, we present a brief description of each, and show how Maestro parallelizes them. For each NF, we measured how much time Maestro took to generate a parallel implementation (shared-nothing when possible, lock-based otherwise), summarizing the results in Figure 6.

Figure 5: Scaling of the firewall's throughput under uniform and zipfian traffic, with and without balanced tables.

**NOP.** This is a simple forwarding no-operation NF, _i.e._ a stateless NF that simply forwards all packets that arrive from one interface to the other. Maestro finds that this NF has no state, and so generates no constraints requiring packets to be sent to the same core. RSS is thus configured with all available packet fields and a random key on both ports.

**Policer.** This NF aims to limit each user's download rate, identifying users by their IPv4 address. When Maestro analyzes this NF, it finds that state is indexed by the destination IP address, implying that packets with the same destination address must be sent to the same core. Because this constraint uses the destination IP address, the chosen RSS packet field options must contain this field. Although DPDK allows RSS packet field options containing only IP addresses, our NICs do not support this option. Maestro thus chooses a packet field option that includes IP addresses and TCP/UDP ports. This increases the complexity of the constraints on the key, increasing the generation time in Figure 6.

**Bridge.** A bridge associates MAC addresses with interfaces, and redirects packets accordingly. In a typical MAC learning bridge, the association between source MAC addresses and input interface is learned dynamically. When analyzing this NF, Maestro detects that state is indexed by a packet's MAC address, which is a field not supported by RSS. As such, Maestro warns the user that it cannot generate a shared-nothing implementation, opting for read/write locks instead. By modifying the NF to disable dynamic MAC learning, leaving only statically configured MAC-port bindings, the NF becomes more amenable to parallelization (as all state is read-only), albeit with reduced functionality. This further illustrates the ability of Maestro to inform developers and help guide the development process by pointing out relevant trade-offs between functionality and performance. With this in mind, we created two versions of this NF: the standard bridge with dynamic MAC learning (DBridge) and a static one with fixed bindings (SBridge). When analyzing SBridge, Maestro encounters only read-only data structures, requiring no specific constraints on the RSS configuration. As with NOP, Maestro generates a random RSS key and uses all the available packet fields on all ports.

**FW.** This is the same firewall we have been using as a running example throughout the paper (SS3.1). It indexes state with typical flow information on the LAN (source and destination addresses and ports), and symmetrically on the WAN. Maestro generates a shared-nothing implementation that shards state by the flow information, sending WAN packets corresponding to symmetric LAN sessions to the same core as those sessions (as shown in Figure 3).

**PSD.** A Port Scan Detector (PSD) counts how many distinct destination TCP/UDP ports each host (source IP) has touched within a given time frame. Above a threshold, connections to new ports are blocked, preventing port scans.
Maestro analyzes the PSD and finds that it uses only the source IP to access one map, but also the source IP and destination port to access another. As such, the constraints for accessing the first map subsume those of the second (R2) and Maestro finds an RSS key that shards based only on source IPs.

**NAT.** A NAT translates addresses between a LAN and a WAN, allowing multiple clients in the LAN to share a single public IP in the WAN [65]. It keeps track of flows initiated in the LAN, but to aid with translation it associates a unique external port with each flow. Reply packets from the WAN are checked to see if their address and port match those on record before translating the destination address and port to match those of the client. Maestro notices that the NAT associates flows with external ports using a map, fitting case R4 in SS3.4. However, it also finds an additional constraint fitting case R5: packets from the WAN are only translated if they target the hosts that started the session in the first place. This constraint allows for sharding based on the external server's IP address and port.

**CL.** A Connection Limiter (CL) aims to limit how many connections any single client (source IP) can make to any single server (destination IP) over a wider time frame (_e.g._ several days). Given the longer time frames involved, this NF uses a memory-efficient count-min sketch [20] to estimate the connection count from each client to each server. For new connections, the source and destination IPs are used to index the sketch, indexing a configurable number of entries based on different hashes (5 by default in our case). If all entries surpass the connection limit, the packet is dropped, preventing the new connection. Otherwise, each entry is incremented (a minimal code sketch of this admission logic is given at the end of this subsection). As with the PSD, Maestro finds two different access patterns: the 5-tuple indexes a connection tracking map, while the source and destination IPs index the sketch. Again, the latter constraint subsumes the former and Maestro shards based on source and destination IPs.

Figure 6: Time (in minutes) to generate parallel implementations for each NF (averaged over 10 runs).

Figure 7: Testbed for our experiments.

**LB.** LB is a Maglev-like load balancer [26]. Its main goal is to distribute traffic coming from the WAN to a series of identical servers on the LAN. LB registers new servers when it receives their packets coming from the LAN, and matches packets coming from the WAN with previously registered servers, keeping track of flows to ensure the same server handles packets from the same flow. In order to maintain semantic equivalence between a shared-nothing parallel implementation and a sequential implementation, packets that find an available server in the sequential implementation must also find it available in the other. This ultimately means that all cores would need to have all backends registered in their local state. That said, packets coming in from the LAN in such a parallel implementation would only be able to register a server on a single core, preventing packets that arrive at other cores from seeing it. With this limitation in mind, it becomes impossible for multiple cores to hold an identical set of backend servers without coordination, thus preventing the use of a shared-nothing model. The Maestro analysis detects this issue when analyzing the LB SR. Lacking a better alternative, Maestro issues a warning and opts for a read/write lock-based approach.
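Following up on the CL description above, here is a minimal, self-contained sketch of the sketch-based admission check. The row width and the hash mixer are arbitrary stand-ins for illustration, not the NF's actual parameters.

```c
#include <stdbool.h>
#include <stdint.h>

#define CMS_ROWS 5       /* independent hashes, the NF's default    */
#define CMS_COLS 4096    /* row width: illustrative, not the NF's   */

static uint32_t cms[CMS_ROWS][CMS_COLS];

/* Arbitrary stand-in mixer; the real NF uses its own hash functions. */
static uint32_t mix(uint32_t x, uint32_t seed) {
    x ^= seed;
    x *= 0x9e3779b1u;
    x ^= x >> 16;
    return x;
}

/* Admission check for a new connection from src to dst: drop only when
 * every row's counter already exceeds the limit; otherwise increment
 * all of them, mirroring the CL behavior described above. */
static bool cms_admit(uint32_t src_ip, uint32_t dst_ip, uint32_t limit) {
    uint32_t idx[CMS_ROWS];
    bool all_over = true;
    for (uint32_t r = 0; r < CMS_ROWS; r++) {
        idx[r] = mix(src_ip ^ mix(dst_ip, r), r * 0x85ebca6bu) % CMS_COLS;
        if (cms[r][idx[r]] <= limit)
            all_over = false;
    }
    if (all_over)
        return false;    /* new connection blocked */
    for (uint32_t r = 0; r < CMS_ROWS; r++)
        cms[r][idx[r]]++;
    return true;
}
```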
### Performance Benchmarking Methodology

To benchmark the NFs, we use a standard testbed topology [15], connecting a traffic generator (TG) and a device under test (DUT), as shown in Figure 7. Both devices connect through a top-of-rack (TOR) switch from which we collect packet counters at the end of each experiment. Both TG and DUT are equipped with dual-socket Intel Xeon Gold 6226R @ 2.90GHz, 96 GB of DRAM, and Intel E810 100 Gbps NICs. Turbo Boost, Hyper-Threading, and power saving features were disabled, as recommended by DPDK. To measure throughput, the TG replays a given traffic sample (a PCAP file) in a loop at a given rate via the outbound cable for 10s per experiment. The DUT receives this traffic, processes it, and sends it back via the return cable, allowing the TG to measure latency. We further use the TOR to infer loss at the DUT, and--through comparison with the TG report--to also detect when packets were lost within the TG. We use DPDK-Pktgen [46] on the TG to find the maximum rate with less than 0.1% loss. We exclude and repeat sporadic experiment runs where loss within the TG--as opposed to the DUT--limited the results. When studying scalability, we repeatedly re-evaluate the NF while varying the number of cores it may use. We perform 10 measurements per experiment for statistical relevance and show error bars with min/max values. Our experiments properly handle NUMA considerations and indirection table rebalancing (SS4).

**Packet size.** To measure the impact of packet size on the performance of NFs, we ran NOP on all cores and generated traffic with fixed-sized packets (40k uniformly distributed flows), varying the size on each iteration. The results (Figure 8) show that typical Internet traffic [11] and large packets easily achieve line-rate (100G), but that smaller packets struggle to keep up, reaching only ~45Gbps with 64B packets--even with such a trivial NF. Prior work [55, 4] has pointed out that this bottleneck comes from PCIe 3.0 x16 and cannot be overcome without improved hardware. Unless stated otherwise, further experiments in this paper use 64B packets. As we measure more complex NFs that limit throughput below the 90Mpps shown in Figure 8, the bottleneck shifts from PCIe to the CPU, illustrating the NF's intrinsic performance.

**Churn.** The performance of parallel NFs can vary significantly for read or write workloads. In networking terms, this typically relates to _churn_, or the rate at which new flows are added and expired. This is particularly important for lock- and TM-based implementations, where creating new flows can lead to costly aborted transactions or exclusive write locks. We start by studying these churn effects on performance by focusing on the read/write lock-based parallel firewall, and comparing it to its shared-nothing counterpart. To conduct churn experiments, ideally one would generate live traffic whose flows change periodically, in an online manner. We found it challenging to generate such traffic programmatically at line-rate, so we followed an alternative solution: generating PCAPs with different levels of _relative churn_--measured in flows/Gbit. As Pktgen varies the replay rate of the PCAP to probe the NF, the resulting _absolute churn_--measured in flows/minute or fpm--changes in tandem. This guarantees that our experiments converge to an equilibrium where the highest rate is found for the given churn. Once we find this rate, we can multiply the PCAP's relative churn by the experimental rate to compute the absolute churn.
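The conversion in the last sentence is simple enough to state in code (a hypothetical helper, for clarity only): a PCAP with, say, 20 flows/Gbit replayed at 50 Gbps yields 1,000 flows/s, i.e., 60,000 fpm.

```c
/* Converting a PCAP's relative churn (flows/Gbit) into absolute churn
 * (flows per minute, fpm) at the rate found by Pktgen. Illustrative
 * helper, not part of Maestro. */
static double absolute_churn_fpm(double relative_flows_per_gbit,
                                 double rate_gbps)
{
    /* flows/Gbit * Gbit/s = flows/s; * 60 = flows/minute */
    return relative_flows_per_gbit * rate_gbps * 60.0;
}
```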
With this in mind, we built PCAPs which (i) were small enough to fit in memory; (ii) changed enough flows to produce the desired relative churn; (iii) evenly spread these changes throughout the traffic; and (iv) were cyclic (_i.e._ the flows that expire at the start of the PCAP are created at the end). We then replay these files in a loop for 10s as in all other experiments.

Figure 8: Throughput in Gbps (blue) and Mpps (red) of the parallel NOP running on 16 cores for different packet sizes.

Figure 9: Churn study of the shared-nothing (top), lock-based (middle), and TM (bottom) parallel firewall.

Figure 9 shows how the FW--parallelized with different approaches--scales under varying amounts of churn. As absolute churn is computed based on the achieved rate, note that it too has error bars. Under low or no churn, the lock-based FW scales well until bottlenecked by PCIe. At a churn of ~100k fpm we start observing the collapse of performance as the use of more cores just wastes more cycles busy-waiting under exclusive write locks. Under heavy churn, performance is abysmal as all cores end up contending for write locks. Note that the churn limit of an NF depends on the size of packets--Figure 9 uses 64B packets but for Internet traffic [11] the lock-based FW handles churn up to 400k fpm. The results also show just how badly the FW parallelized with transactional memory handles churn. Although a useful tool in other domains, it proves ineffective when dealing with networked applications under churn. The shared-nothing approach, unlike the lock-based one, suffers almost no performance variation with churn up to at least ~100M fpm, a great advantage over the lock-based implementation. Benson _et al._ [11] tell us to expect up to 6M fpm in typical data-center traffic--within the ability of our shared-nothing FW, but not the lock-based one. University networks--typically with less than 15k fpm--could easily be handled even by our lock-based FW. We focus the rest of this evaluation on studies without churn, giving the lock- and TM-based approaches the benefit of the doubt and illustrating their _best-case_ performance.

### Performance benchmarks

With parallel versions of each of the above eight NFs generated, we now evaluate their performance and scalability. By default, Maestro generates a shared-nothing implementation when possible, falling back to read/write locks otherwise. This choice can, however, be overridden, and Maestro can specifically generate parallel implementations using read/write locks and TM for any of the NFs, upon request.

**Parallelization technologies.** We now study the performance and scalability of each NF, while being parallelized for each of the three approaches. Figure 10 shows throughput as a function of the number of cores. Our raw performance is comparable to measurements from other recent works [29], but we focus our attention on _scalability_. Though most NFs top out their performance before using all 16 cores due to bottlenecks in the PCIe bus or the memory controller, the takeaway here is the relative performance of the different approaches. For all NFs where a shared-nothing approach was feasible, this option scales linearly until bottlenecked by the PCIe bus and then plateaus--an ideal outcome. The lock-based implementations--though slower than their shared-nothing counterparts when available--still scale fairly well but do not always reach the PCIe bottleneck with 16 cores5.
The Policer shows what happens to these locks when writes are inevitable: as every packet must update the token bucket state, every packet requires an exclusive write lock, and performance suffers catastrophically. Fortunately, this NF can be sharded by IP address, so it is amenable to the shared-nothing approach.

Footnote 5: Eventually, all lock-based NFs except for the Policer and CL can reach the PCIe bottleneck using extra cores from the remote NUMA node.

The benefits of state sharding (SS4) become clear when we compare the shared-nothing approaches with the lock-based ones for the more state-intensive NFs, _i.e._ the FW, NAT, CL, and PSD. When each core holds less state due to sharding, more of it fits in the core-local L1+L2 cache. In a shared-nothing approach where cores work independently on different working sets, this leads to an added performance improvement due to better caching, in addition to the benefits of parallelization. As a result, performance for few (\(<4\)) cores can be worse than linear scalability would predict and using many cores can have an added boost in comparison. Running these experiments with a workload of only 256 flows--which fits entirely in L1 cache--nullifies this effect.

A surprising takeaway is that TM does not work well with the kinds of workloads found in more complex NFs, even in the absence of churn. For simpler NFs it performs quite well, scaling linearly with the number of cores, though still operating more slowly than both shared-nothing and lock-based alternatives. In these cases TM eventually catches up with the other approaches, albeit needing more cores to do so. However, for more complex NFs TM performs abysmally, as the likelihood of a transaction aborting increases. Ultimately, the clear winner is the shared-nothing approach, with the best backup option consistently being our read/write locks. The PSD--our most CPU-intensive NF, which stands to gain the most from parallelization--performs 19\(\times\) better with 16 cores than a single-core version, due to the _compound effects_ of parallelization and improved cache efficiency.

Figure 10: Parallel NF implementation scalability, using a shared-nothing approach when possible, read/write locks, and TM. Maestro cannot do a shared-nothing DBridge or LB.

Latency is not deeply affected by the Maestro approach. We detected no noticeable differences in latency between the NFs and parallelization approaches, with Pktgen measuring around \(12\pm 2\mu s\) for CL and \(11\pm 1\mu s\) for the remaining NFs.

**VPP comparison.** Finally, we compare Maestro with the Vector Packet Processing framework (VPP) [6, 30], which was recently open-sourced in the context of the Fast Data Project. VPP is a packet processing framework that extends the concept of batch processing to the entire packet processing pipeline with the purpose of increasing performance by minimizing instruction cache misses. VPP follows a converse approach to Maestro: packets are processed in batches in a shared-memory parallel environment where packets can end up on any core without regard to flows or locality. Developers must then adapt the way they implement the NF to those assumptions. This approach can require more expertise and development effort, but once NFs are built in this way the framework handles many of the low-level details. To compare the performance of a Maestro-parallelized NF with an expertly developed one for VPP, we pit our NAT against the VPP nat44-ei with the DPDK plugin.
Though these two NFs are the most similar we found between the VPP distribution and our corpus, it is important to note that they implement slightly different semantics (nat44-ei collects statistics and has other features not in the Maestro NAT). Figure 11 shows the performance comparison between the parallel Maestro NAT (shared-nothing and lock-based) and nat44-ei, all under uniformly distributed 64B packets. Though all approaches scale well, Maestro's shared-nothing decisively outperforms VPP, reaching the PCIe bottleneck with 10 cores. This is due to the shared-memory design that VPP follows. A fairer comparison would be between VPP and the lock-based Maestro NAT, as both use shared memory. Here both scale more slowly, never fully reaching the PCIe bottleneck up to 16 cores. Maestro slightly outperforms VPP, but that may be due to the extra features in the VPP NAT. The key takeaway, though, is that Maestro's _automatically_ parallelized NFs perform competitively with expertly developed, manually parallelized NFs, without as much of a hassle.

## 7 Related Work

**Fast packet processing.** To address the performance challenges associated with software NFs, new packet I/O frameworks were proposed [3, 14, 23, 60]. To achieve high packet processing rates these solutions explore several types of optimizations, including zero-copy, kernel bypass, I/O batching, and multi-queue support [8]. VPP [6] even expands batching to the whole packet processing pipeline in order to reduce instruction cache misses. Most implementations of network functions today [69, 27, 71, 66], including those from Maestro, rely on Intel DPDK [39], a kernel-bypass packet processing framework that provides a set of software libraries and drivers for fast packet processing, with multi-core and NUMA-aware functionality.

**NF acceleration.** PacketMill [29] accelerates NFs by carefully managing packet metadata and performing code optimizations across the whole network stack. Another approach to improve the performance of a software NF is to leverage the platform hardware. Previous work [67, 25, 66, 44, 26] has explored multi-core CPU architectures, showing the significant improvements they can achieve on an NF's performance, but also the challenges involved. These solutions are, however, _manual_, requiring extensive expertise and painstaking effort from the developer. Papadogiannaki _et al._ [56], for instance, explored the advantages of a shared-nothing model over a lock-based implementation. Although their work focused on the most efficient utilization of available resources, we use their shared-nothing model as guidance for the automated generation of parallel network functions. The goal of Maestro is to offer the advantages of parallelization to NFs for free. De Carli _et al._ [21] proposed a concurrency model for software IDSes that uses program analysis to infer the NF's flow semantics, feeding that information to a software scheduler that steers packets to shared-nothing threads. Though the concepts share similarities, Maestro's approach differs from theirs by (1) considering a wider class of NFs more generally, rather than IDSes in particular; (2) using ESE to extract fine-grained state access patterns, as opposed to their less granular program-slicing approach; and (3) handling packet steering entirely in hardware by generating RSS configurations for NICs, avoiding the bottleneck of the software scheduler and allowing Maestro-parallelized NFs to scale better.
**NF verification and synthesis.** In recent years, verification techniques have started to be applied to network functions. Some of the most relevant work includes verification of network properties [45, 48], configurations [31, 10], and NFs [70]. More recently, the research community has started exploring synthesis approaches for SDN-based control [19], data plane programs [72, 58, 33], and BGP configurations [13, 61]. Our work fits into this line by analyzing sequential NFs to automatically generate accelerated versions.

## 8 Conclusions

In this paper we presented Maestro, a tool to automatically parallelize sequential network functions. Maestro judiciously configures the NIC's RSS mechanism to distribute traffic across cores, while preserving semantics, resorting to locking mechanisms only when necessary. Maestro significantly improved performance for all the NFs we analyzed--scaling performance up linearly until hitting fundamental bottlenecks in PCIe, the memory controller, or line-rate--while reducing developer effort to the push of a button.

Figure 11: VPP and Maestro NAT performance comparison.
2303.08897
Extragalactic Jets from Radio to Gamma-rays
Despite the fact that jets from black holes were first understood to exist over 40 years ago, we are still in ignorance about many primary aspects of these systems -- including the radiation mechanism at high energies, the particle makeup of the jets, and how particles are accelerated, possibly to energies as high as 100 TeV and hundreds of kpc from the central engine. We focus in particular on the discovery (and mystery) of strong X-ray emission from radio jets on kpc-scales, enabled by the unequaled high resolution of the \emph{Chandra} X-ray observatory. We review the main evidence for and against the viable models to explain this X-ray emission over the last 20 years. Finally, we present results of a recent study on the X-ray variability of kpc-scale jets, where we find evidence that between 30-100\% of the X-ray jet population is variable at the tens-of-percent level. The short ($\sim$years) variability timescale is incompatible with the IC/CMB model for the X-rays and implies extremely small structures embedded within the kpc-scale jet, and thus requires a reconsideration of many assumptions about jet structure and dynamics.
Eileen T. Meyer, Aamil Shaik, Karthik Reddy, Markos Georganopoulos
2023-03-15T19:33:50Z
http://arxiv.org/abs/2303.08897v1
# Extragalactic Jets from Radio to Gamma-rays

###### Abstract

Despite the fact that jets from black holes were first understood to exist over 40 years ago, we are still in ignorance about many primary aspects of these systems - including the radiation mechanism at high energies, the particle makeup of the jets, and how particles are accelerated, possibly to energies as high as 100 TeV and hundreds of kpc from the central engine. We focus in particular on the discovery (and mystery) of strong X-ray emission from radio jets on kpc-scales, enabled by the unequaled high resolution of the _Chandra_ X-ray observatory. We review the main evidence for and against the viable models to explain this X-ray emission over the last 20 years. Finally, we present results of a recent study on the X-ray variability of kpc-scale jets, where we find evidence that between 30-100% of the X-ray jet population is variable at the tens-of-percent level. The short (\(\sim\)years) variability timescale is incompatible with the IC/CMB model for the X-rays and implies extremely small structures embedded within the kpc-scale jet, and thus requires a reconsideration of many assumptions about jet structure and dynamics.

AGN, Blazar, Jets, Variability

## 1 Introduction

Active galactic nuclei (AGN) are powered by the supermassive black holes (SMBH) which reside in the centers of all relatively massive galaxies (Kormendy and Ho, 2013). A small percentage of these sources eject bipolar, collimated jets of relativistic plasma which emit brightly at radio and sometimes optical and X-ray frequencies kiloparsecs (kpc) away from the central engine. Resolved radio jets are often separated into two classes based on radio morphology: Fanaroff and Riley (1974) class I (FRI) sources have plume-like jets and are dominated by emission near the core, while FRII jets are highly collimated and are dominated by bright hotspots where the jet impacts into the intergalactic medium (IGM). Historically, this classification system was also associated with a difference in jet power, with FRI and FRII jets representing low- and high-power jets respectively. However, more recent studies have discovered low-power FRII galaxies (Mingo et al., 2019), and large population studies suggest a large overlap in the range of jet powers (Keenan et al., 2021). One of the major discoveries by the _Chandra_ X-ray Observatory has been the detection of X-rays from radio jets on kpc-scales (Chartas et al., 2000; Schwartz et al., 2000; Harris and Krawczynski, 2006; Worrall, 2009; Marshall et al., 2018). Assuming a leptonic jet model, the X-ray emission of most low-power or FRI class sources, including nearby sources such as M87 (Harris, 2003) and Cen A (Hardcastle et al., 2007), is often (but not always) well described by synchrotron radiation from a single electron population which extends from radio to optical and X-ray energies. However, in many (typically the most powerful) quasar-hosted jets, the X-ray emission is too high, and the spectral index too hard, to be attributed to the low-energy synchrotron component. A classic example, PKS 0637-752 (the first X-ray source _Chandra_ observed), is shown in Figure 1. The observation of 'anomalously' bright and hard X-rays has since been extended to virtually all powerful FRII jets and even some low-power FRI jets such as in the case of M84 (Meyer et al., 2018).
## 2 The Rise and Fall of the IC/CMB model

The most commonly adopted explanation for the 'anomalous' X-ray emission from kpc-scale jets is inverse-Compton scattering of Cosmic Microwave Background (IC/CMB) photons by a still-relativistic jet. Under this model, high-energy electrons in the jet upscatter low-energy CMB photons, provided that the jet is both highly relativistic with high bulk Lorentz factors (\(\Gamma>10\)) on kpc-scales and closely aligned to our line-of-sight (\(\theta<5^{\circ}\); Tavecchio et al. 2000; Celotti et al. 2001). This produces the high Doppler boosting required to reproduce the observed bright and hard-spectrum X-ray flux. However, this emission mechanism also sometimes predicts kinetic powers in excess of the Eddington limit (Atoyan and Dermer 2004) and is in many cases at odds with other more recent observations, including significantly higher polarization of the ultraviolet/X-ray component than expected (Cara et al. 2013) and jet-counterjet flux ratios which are inconsistent with IC/CMB predictions (Kataoka et al. 2008; Clautice et al. 2016; Hardcastle et al. 2016). Another major effort at testing the IC/CMB model has been through looking for the high levels of GeV emission that it predicts, first proposed before the launch of _Fermi_ by Georganopoulos et al. (2006). The analysis is complicated by the poor (fractions of a degree) resolution of _Fermi_, as it is impossible to spatially distinguish the jet from the core, which is highly variable and is expected to dominate over the jet, especially at lower (MeV) energies. However, through the use of a recombined light-curve analysis, Meyer and Georganopoulos (2014) first showed that the IC/CMB model could be strongly ruled out through gamma-ray upper limits for the X-ray jet of 3C 273, before doing the same in PKS 0637-752 (Meyer et al. 2015, 2017). Peter Breiding then widely applied this test to essentially all the 'multiple spectral component' or MSC jets for his PhD thesis work (Breiding et al. 2017, 2023), finding that 24/45 sources tested would require IC/CMB-predicted GeV emission significantly in violation of the gamma-ray limits (see example SEDs for two of the sources from Breiding et al. (2023) in Figure 2). Another difficulty for the IC/CMB model is the change in apparent morphology (knot location, size, etc.) from radio to optical to X-ray. The simplest IC/CMB model requires cospatial emission in these different bands, since the radio-to-optical synchrotron spectrum is produced by the same electron energy distribution upscattering the CMB to X-ray energies (or, if there is an offset, for the X-rays to persist beyond the radio, since these arise from very low-energy electrons in the IC/CMB interpretation). However, an extensive study of essentially all X-ray jets discovered to date finds significant offsets between X-rays and radio on the order of \(\sim\)1 kpc or more (e.g., Reddy et al., 2021, 2022).

Figure 1: Upper left, an ATCA radio image of PKS 0637-752 (provided by Leith Godfrey); below, an HST image of the jet with inverted color scale and radio contours overlaid. Note the small knots of optical emission coinciding with the brightest radio knots. At right, an early SED of the bright knots of the jet, showing the very high level of X-ray emission and the difficulty of explaining it with either SSC or (unbeamed) IC/CMB.

## 3 Variability of X-ray Jets

One consequence of the IC/CMB model is a low-energy extension of the electron energy distribution.
That is, in order for IC/CMB to be the dominant X-ray emission mechanism, there must be a significant population of particles at low energies (\(\gamma_{min}\sim 10-20\); Celotti et al., 2001; Georganopoulos et al., 2006). Besides the considerable energy requirements of this extension, the implied radiative cooling times are extremely long (longer than the jet lifetimes), which means that under an IC/CMB X-ray mechanism we should not expect to see variability in X-ray emission on timescales within the lifetime of _Chandra_. It was thus surprising when Marshall et al. (2010) examined the case of the nearby (\(z=0.035\)) FRII Pictor A, a radio galaxy with a very long (arcminute-scale) radio jet, a steep-spectrum radio core, and a clearly visible counterjet. A knot in the jet of Pictor A was seen to fade over a timescale of a few years with a reported significance of \(3.4\sigma\). Hardcastle et al. (2016) and Thimmappa et al. (2020) later confirmed these results at similar significances (\(p<0.011\)). While variability had been observed by this point in nearby FRI jets (most dramatically in the mid-2000s outburst of HST-1 in the jet of M87; Harris et al., 2003), variability on this timescale was completely unexpected. In the case of Pictor A, assuming minimum energy conditions and an emitting region the size of the jet cross-section, the synchrotron cooling timescale (\(t_{\rm cool}=6\pi m_{e}c/(\sigma_{T}\gamma B^{2})\) for electrons of Lorentz factor \(\gamma\) in a field \(B\)) is on the order of 1200 years, implying that the X-rays arise from a region much smaller and with a magnetic field several times higher than equipartition.

### New Results

Over the past few years, we have conducted a comprehensive archival study of X-ray jets, to look for variability in other sources. Our sample of 53 comprises nearly all known X-ray jets imaged more than once by the _Chandra_ Advanced CCD Imaging Spectrometer (ACIS) instrument. The average number of observations per source in our sample is 3.4, with a mean spacing of 2.6 years.

Figure 2: Example large-scale (kpc-scale) jet SEDs, with synchrotron (thin line) and IC/CMB (thick line) model curves. As shown, matching the X-ray emission unavoidably predicts a high level of GeV emission, which is in violation of the deep _Fermi_/LAT upper limits (red). Taken from Breiding et al. (2023).
Out of the full sample of 155 regions tested, 18 (12%) have \(p\)-values less than 0.05, suggesting significant variability in the intrinsic source rate. The single-region \(p\)-values for the are expected to follow a uniform \(U(0,\,1)\) distribution under the null hypothesis of steady emission, so this is clearly about twice the value expected, and the excess of low p-values is apparent by eye in Figure 3. This is also born out in the statistics: when we compare all 155 single-region \(p\)-values to a \(U(0,1)\) distribution using a one-sided Kolmogorov-Smirnov (KS) test, we obtain a global \(p\)-value of 0.000196, indicating that the distribution is highly non-Uniform. This clearly indicates that the observations are not consistent with non-variable X-ray sources. Figure 3 (right panel) shows the results of a simulation to try to constrain the typical scale of variability in our jet population, given a degree of degeneracy between the typical variability amplitude and rate of variable sources existing in the population. It is not implausible that a subset of X-ray jets are steady X-ray emitters while the rest exhibit variability. For example, Figure 3: At left, the histogram of p-values from our likelihood model analysis of 155 jet regions from 53 total sources. Under the null hypothesis of steady emission, we expect a completely uniform (0,1) distribution of p-values. The global p-value associated with the excess of low values shown here is p=0.00019, suggesting variability in the jet population. At right, the results of a simulation to test the degeneracy between sample fraction of variability (assuming 1-f are steady emitters) and the typical variability amplitude (relative to mean). The shaded blue region shows the best agreement with the observational data, suggesting between 30-100% of jets are variable, at a modest tens-of-percent scale. 30% of the sample might be variable sources, with a characteristic scale of 50% in amplitude, or 90% of the sample might be variable with a lower (say, 10%) characteristic amplitude of variability, and both of these scenarios might be consistent with the _p_-value distribution we observe. Our simulation compares the MLE _p_-value distribution of simulated populations varying in these two respects to our actual _p_-value distribution and reports the K-S test _p_-value for the comparison. Thus, the lighter-colored regions of the plot are most consistent with the data while black areas are not consistent with the data under a K-S test comparison. ## 4 Discussion As a result of this and the other observational lines of evidence, the _general_ validity of IC/CMB model for large-scale jet X-ray emission must be called into question. On the other hand, IC/CMB does appear to be more consistent than alternatives in some cases, for example the few high-redshift jets with unusually faint or undetected radio emission (e.g. Simionescu et al., 2016; Migliori et al., 2022), and potentially in somewhat 'unusual' cases at lower redshift. For example, we found that the 'plateau' or underlying steady-state gamma-ray emission in two X-ray jets (PKS 1510-089 and OJ 287) was compatible with the predicted IC/CMB level (Meyer et al., 2018). However, these cases are likely outliers, with a particularly favorable alignment and unusually high jet speed at large distances. 
The main alternative to the IC/CMB model is synchrotron radiation from a second electron energy distribution (EED), or abandoning the leptonic model in favor of a hadronic jet model (Petropoulou et al., 2017; Meyer et al., 2018). Synchrotron models allow us to avoid the 'uncomfortable' requirements of IC/CMB models (e.g., high bulk Lorentz factors, small viewing angle, and super-Eddington jet powers), but still have a largely ad-hoc appearance: multiple EEDs was not predicted by theory. Previously, the apparent co-spatiality of the radio and X-rays seemed consistent with a single population (i.e. IC/CMB) X-ray model, but recent research has shown that they are in fact _not_ co-spatial in most cases (Reddy et al., 2022). Notably, these different mechanisms along with IC/CMB involve vastly different scenarios as far as jet matter content and energy distribution, jet power, and acceleration mechanisms, with important implications for a proper accounting of jet impacts on the environment. The short timescales of X-ray variability observed in our study implies that the emitting regions are much smaller that the width of the jet (which is generally resolved in e.g., radio to be on kpc scales). By the light crossing time argument a flare event on the order of a year Figure 4: At left, an HST image of the jet of 3C 346, with ALMA band3 contours overlaid. At right, a detailed SED of knot E, showing at least 3 emission components in the jet, so far unexplained (Meyer et al., in prep.) cannot occur in a region larger than a few parsecs; this requires highly localized particle acceleration and appears more consistent with magnetic reconnection than the usually assumed shock acceleration (Giannios et al., 2009). Interestingly, production of high energy photons from regions much smaller than the source physical size has been observed in very different celestial objects, from the Sun (Omodei et al., 2013) and the Crab Nebula (Abdo et al., 2011), to blazars like PKS 1510-089, where variations over hours are seen in the VLBI radio core that has a light crossing time of the order of a year, a difference too large to explain through beaming (Marscher and Jorstad, 2010). These observations are key to understanding the particle acceleration mechanism acting in these environments.
2308.03468
The mass density contrast in perturbed Friedman-Lemaitre-Robertson-Walker cosmologies
We analyze the evolution of the mass density contrast in spherical perturbations of flat Friedman-Lemaitre-Robertson-Walker cosmologies. Both dark matter and dark energy are included. In the absence of dark energy the evolution equation coincides with that obtained by Bonnor within the ``Newtonian cosmology''.
Edward Malec
2023-08-07T10:50:16Z
http://arxiv.org/abs/2308.03468v1
# The mass density contrast in perturbed Friedman-Lemaitre-Robertson-Walker cosmologies

###### Abstract

We analyze the evolution of the mass density contrast in spherical perturbations of flat Friedmann-Lemaitre-Robertson-Walker cosmologies. Both dark matter and dark energy are included. In the absence of dark energy the evolution equation coincides with that obtained by Bonnor within the "Newtonian cosmology".

## I Introduction

We shall analyze the evolution of perturbations of flat FLRW spacetimes using the \(1+3\) splitting of the spacetime. The original aim of this paper was just to find the general relativistic version of the well known result of Bonnor [1], assuming isothermal perturbations and using comoving coordinates. The main conclusion concerning the temporal behaviour of the mass density contrast -- in the absence of dark energy -- coincides with that of Bonnor, and also with a later analysis [2], for perturbations comoving with the background matter. The case of the nonzero cosmological constant was not investigated by Bonnor. In such a case the evolution equation for the mass density contrast differs from that found earlier by Martel [2].

## II Selfgravitating fluids within spherically symmetric spacetimes

We shall assume only spherical symmetry, without spatial homogeneity. Some of the resulting Einstein equations had been found by Lemaitre in the 1930s [3; 4], who studied the stability of Einstein static universes. Tolman and Bondi extended the results of Lemaitre for a selfgravitating dust [5; 6]. The resulting class of metrics is often referred to as the Lemaitre-Tolman-Bondi spacetimes. In the 1960s Misner and Sharp [8], and Podurets [9], again analyzed these equations, but in the case of a perfect gas; they extended in particular the Lemaitre-Tolman-Bondi concept of the quasilocal material mass. Its expression will be given below. We assume the Einstein equations \(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi T_{\mu\nu}-\Lambda g_{\mu\nu}\), where the stress-energy tensor is defined as \(T_{\mu\nu}=(\varrho+p)\,U_{\mu}U_{\nu}+p\,g_{\mu\nu}\) and \(\Lambda\) is the cosmological constant. The coordinate \(4\)-velocity is normalized, \(U_{\mu}U^{\mu}=-1\). Here \(\varrho\) and \(p\) denote the mass density and pressure, respectively. We shall assume that we are given a \(1+3\) foliation, with foliation leaves characterized by constant time, \(t=const\). The line element is taken in the form

\[ds^{2}=-N^{2}dt^{2}+\hat{a}dr^{2}+R^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{1}\]

where the radius \(0\leq r<\infty\) and the angular variables satisfy \(0\leq\phi<2\pi\), \(-\pi/2\leq\theta\leq\pi/2\). The lapse \(N\) and the areal radius \(R\) depend on the time \(t\) and the coordinate radius \(r\). We adopt the standard convention that the speed of light \(c\) and the gravitational constant \(G\) are equal to unity. This metric is diagonal, so that we shall calculate extrinsic curvatures from the formula \(K_{ij}=\frac{1}{2N}\partial_{t}g_{ij}\) [7]. The condition of isotropy implies that two of them are equal, \(\mathrm{K}_{\phi}^{\phi}=\mathrm{K}_{\theta}^{\theta}\). The nonzero components of \(K_{ij}\) read

\[\mathrm{trK}=\frac{\partial_{t}(\sqrt{\hat{a}}R^{2})}{N\sqrt{\hat{a}}R^{2}},\quad\mathrm{K}_{r}^{r}=\frac{1}{2N\hat{a}}\partial_{t}\hat{a},\quad\mathrm{K}_{\phi}^{\phi}=\mathrm{K}_{\theta}^{\theta}=\frac{\partial_{t}R}{NR}=\frac{1}{2}(\mathrm{trK}-\mathrm{K}_{r}^{r}). \tag{2}\]

Usually one assumes that coordinates are comoving.
We shall impose a foliation condition as in the standard \(1+3\) formulations of Einstein equations, by putting a condition on the extrinsic curvatures of the leaves of a foliation. We shall assume the following \[\Delta(R(r,t),t)=\left(\frac{R(\mathrm{trK}-\mathrm{K}_{r}^{r})}{2}\right)^{2}, \tag{3}\] where \(\Delta\) is defined as [10]: \[\Delta(R(r,t),t)=-\frac{3}{4R}\int_{0}^{R}\tilde{R}^{2}(\mathrm{K}_{r}^{r})^{2}d\tilde{R}+\frac{1}{4R}\int_{0}^{R}\tilde{R}^{2}(\mathrm{trK})^{2}d\tilde{R}+\frac{1}{2R}\int_{0}^{R}\mathrm{trK}\,\mathrm{K}_{r}^{r}\tilde{R}^{2}d\tilde{R}. \tag{4}\] Differentiation of both sides of Eq. (4) with respect to the coordinate radius \(r\) yields, using the momentum constraint of Einstein equations [7] and the definition of the mean curvature \(\hat{p}=2\partial_{r}\ln R/\sqrt{\hat{a}}\)[10], \[R(\mathrm{trK}-\mathrm{K}_{r}^{r})\,\frac{16\pi j_{r}R}{\hat{p}}=0. \tag{5}\] Herein we define \(j_{r}=NT^{0}{}_{r}/\sqrt{\hat{a}}\). This implies that fluids are comoving in the chosen coordinates, \[j_{r}=0, \tag{6}\] provided that there are no minimal surfaces, \(\hat{p}\neq 0\), and \(\mathrm{trK}\neq K_{\mathrm{r}}^{\mathrm{r}}\). On the other hand, it appears that in comoving coordinates \(\mathrm{trK}=\partial_{R}\left(R^{3}(\mathrm{trK-K_{r}^{r}})\right)/(2R^{2})\) (see Sec. IV A). The areal velocity \(R(\mathrm{trK-K_{r}^{r}})/2\) constitutes a part of the initial data of Einstein equations -- see the forthcoming equation (15). Thus under the conditions \(\hat{p}\neq 0\) and \(\mathrm{trK}\neq K_{\mathrm{r}}^{\mathrm{r}}\) our foliation equation (3) is equivalent to the standard assumption of comoving coordinates. Notice that now the material energy-momentum tensor reads \(T_{0}^{0}=-\varrho\), \(j_{r}=0\) and \(T_{\mathrm{r}}^{\mathrm{r}}=p=T_{\theta}^{\theta}\); we deal with perfect fluids. The cosmological constant is responsible for the dark energy \(\varrho_{\Lambda}\) and pressure \(p_{\Lambda}\) contributions: \[\varrho_{\Lambda}=\frac{\Lambda}{8\pi},\quad p_{\Lambda}=-\frac{\Lambda}{8\pi}. \tag{7}\] In such a case the quasilocal mass of Misner and Sharp [8], and Podurets [9], contained in a coordinate sphere of a radius \(r\), is given by the formula \[m(R(r))=2\pi\int_{0}^{r}\tilde{R}^{3}\hat{p}\sqrt{\hat{a}}\left(\varrho+\varrho_{\Lambda}\right)d\tilde{r}. \tag{8}\] For the sake of concise notation we shall define \[U(r)=\frac{R(r)}{2}\left(\mathrm{trK}(r)-K_{\mathrm{r}}^{\mathrm{r}}(r)\right); \tag{9}\] this quantity represents the areal velocity of a comoving particle of gas, \(U=\partial_{0}R/N\). The mean curvature \(\hat{p}\) of centered spheres can be calculated to be [10] \[\hat{p}=\frac{2}{R(r)}\sqrt{1-\frac{2m(R(r))}{R(r)}+U^{2}(r)}. \tag{10}\] One can show that the mass defined in (8) changes as follows [8] \[\partial_{t}m(R(r))=-4\pi\left[NR^{2}U\left(p+p_{\Lambda}\right)\right](r). \tag{11}\] Moreover, by direct calculation one gets from (8) \[\frac{\partial_{r}m(R)}{\sqrt{\hat{a}}}=2\pi R^{3}\hat{p}\left(\varrho+\varrho_{\Lambda}\right). \tag{12}\] These equations should be supplemented by two conservation equations \[N\partial_{r}p+\partial_{r}N(p+\varrho)=0, \tag{13}\] and \[\partial_{t}\varrho=-N\mathrm{trK}(p+\varrho). \tag{14}\] The Einstein evolution equations reduce to the single equation \[\partial_{t}U=-\frac{m(R)}{R^{2}}N-4\pi\left(p+p_{\Lambda}\right)RN+\frac{\hat{p}R}{2\sqrt{\hat{a}}}\partial_{r}N. \tag{15}\]
## III The Friedman Type Solution Assuming that matter consists of dust and imposing in addition homogeneity on slices of constant time \(t\), one gets from equations (8)--(15) the Friedman metric \(ds^{2}=-dt^{2}+a^{2}\left(dr^{2}+r^{2}d\Omega^{2}\right)\). Thus the lapse \(N=1\). The conformal factor \(a(t)\) satisfies the Friedman equations: \[\varrho_{0}+\varrho_{\Lambda}=\frac{3}{8\pi}H^{2},\qquad-\frac{dH}{dt}=4\pi\varrho_{0},\qquad\frac{d\varrho_{0}}{dt}=-3H\varrho_{0}. \tag{16}\] (Only two of the three equations are independent.) The extrinsic curvatures of this solution are equal to the Hubble parameter \(H\equiv\frac{1}{a}\frac{da}{dt}\), \[K_{\mathrm{r}}^{\mathrm{r}}=K_{\theta}^{\theta}=K_{\phi}^{\phi}=H, \tag{17}\] while its trace is \(\mathrm{tr}K=3H\). The velocity \(U\) now reads \(U=HR\). The mean curvature of centered 2-spheres within the \(t=const\) slice is now the same as in flat space: \(\hat{p}=2/R\). This solution describes a flat, homogeneous and isotropic universe filled with comoving dust of density \(\varrho_{0}\), expanding at the Hubble rate \(H=\dot{a}/a\). The product \(\varrho_{0}a^{3}\) is constant in time. ## IV Evolution of Small Spherical Inhomogeneities in a FLRW Universe We assume that the background (Friedman-type) universe is dotted with isolated, locally isotropic mass density perturbations \(\delta\varrho\), so that the mass density is split into the background part \(\varrho_{0}\) and the perturbation \(\delta\varrho\): \(\varrho=\varrho_{0}+\delta\varrho\). The mass perturbations are isothermal -- they exert pressure \(p=c_{s}^{2}\delta\varrho\). The metric of the perturbed spacetime reads \(ds^{2}=-N^{2}dt^{2}+\hat{a}dr^{2}+R^{2}d\Omega^{2}\); we use comoving coordinates. Far from these perturbations the lapse \(N\) tends to 1 and the spatial part of the metric approaches the background metric \(a^{2}\left(dr^{2}+r^{2}d\Omega^{2}\right)\). We assume -- similarly to Bonnor in his analysis [1] -- that this perturbing isothermal gas is comoving with the background dust. (Let us remark that perturbations do not have to comove with the background dust -- see a different scenario discussed in [11].) For convenience we shall locate our coordinate system at the symmetry center of a perturbation. The areal velocity \(U=\partial_{0}R/N=R(\mathrm{trK-K_{r}^{r}})/2\) is split into the background and perturbed parts as follows \[U=H(t)R+\delta_{U}, \tag{18}\] where \(H(t)\) is the Hubble parameter at the time \(t\). We need initial data -- for the areal velocity \(U=\partial_{0}R/N\) and the mass density \(\varrho\) -- for the two evolution equations (14) and (15). They are specified as follows on an initial hypersurface labelled by the world time \(t_{0}\). The initial value of the perturbing component \(\delta_{U}\) is small but otherwise it is a free datum. The initial mass density \(\varrho\) is given as the sum of the background mass density \(\varrho_{0}\) at the time \(t_{0}\) and the small initial perturbation \(\delta\varrho\), with the condition that far from the center \(\varrho\) approaches \(\varrho_{0}(t_{0})\). The main aim of the forthcoming calculation is the derivation of the wave equation that governs the evolution of the mass density contrast \(\delta\varrho/\varrho_{0}\). We shall also obtain an evolution equation for the velocity perturbation \(\delta_{U}\).
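As a sanity check of the framework (our sketch, not in the paper), one can verify symbolically that the homogeneous dust background of Sec. III satisfies the evolution equation (15), with \(N=1\), \(p=0\), \(U=HR\) and \(m=\frac{4\pi}{3}(\varrho_{0}+\varrho_{\Lambda})R^{3}\):

```python
import sympy as sp

# FLRW background check of Eq. (15): N = 1, p = 0, U = H(t)*R, and
# m = (4*pi/3)*(rho0 + rhoL)*R^3, with H taken from the first of Eqs. (16).
t, R, Lam = sp.symbols('t R Lambda', positive=True)
rho0 = sp.Function('varrho_0')(t)
H = sp.sqrt(sp.Rational(8, 3) * sp.pi * (rho0 + Lam / (8 * sp.pi)))
pL = -Lam / (8 * sp.pi)                                    # dark-energy pressure
m = sp.Rational(4, 3) * sp.pi * (rho0 + Lam / (8 * sp.pi)) * R**3

# d_t U = H'(t)*R + H*d_t R, with d_t R = U = H*R since N = 1;
# rho0' is eliminated via the third Friedman equation, rho0' = -3*H*rho0.
lhs = sp.diff(H, t).subs(sp.Derivative(rho0, t), -3 * H * rho0) * R + H * (H * R)
rhs = -m / R**2 - 4 * sp.pi * pL * R
print(sp.simplify(lhs - rhs))   # -> 0
```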
### The extrinsic curvature The first part of the calculation is actually exact -- we do not need the assumption of small perturbations in order to get the trace of the extrinsic curvature \[\mathrm{trK}=\frac{\partial_{R}\left(R^{2}U\right)}{R^{2}} \tag{19}\] of hypersurfaces of constant world time \(t\). Formula (19) is valid in all slicings of spherically symmetric cosmological models that asymptotically coincide with flat slicings of flat FLRW models. We allow for dark energy (the cosmological constant) and various forms of _comoving_ matter -- dust and fluids. This formula is known (see for instance [10; 12]), but we derive it here for the sake of completeness. We have from the definition of extrinsic curvatures \[\mathrm{trK}=\frac{\partial_{0}\left(R^{2}\sqrt{\hat{a}}\right)}{NR^{2}\sqrt{\hat{a}}}. \tag{20}\] The quantity \(\sqrt{\hat{a}}\) in the numerator of (20) can be replaced by \[\sqrt{\hat{a}}=\frac{2\partial_{r}R}{\hat{p}R}; \tag{21}\] here \(\hat{p}\) is the mean curvature of the coordinate sphere \(r=const\). Thus (20) yields \[\mathrm{trK}=2\frac{\partial_{0}R}{NR}+\frac{2\partial_{0}\partial_{r}R}{N\hat{p}R\sqrt{\hat{a}}}+\frac{\hat{p}R}{N}\partial_{0}\frac{1}{\hat{p}R}. \tag{22}\] The first term is just \(2U/R\). Changing the order of differentiation, we can write the second term as \[\frac{2\partial_{r}\left(UN\right)}{N\hat{p}R\sqrt{\hat{a}}}=2\frac{\partial_{r}U}{\hat{p}R\sqrt{\hat{a}}}+2U\frac{\partial_{r}N}{N\hat{p}R\sqrt{\hat{a}}},\] since \(\partial_{0}R=UN\). Replace now the coordinate radius \(r\) by the areal radius \(R\) and notice that \(\frac{2\partial_{r}}{\hat{p}R\sqrt{\hat{a}}}=\partial_{R}\). We obtain the following form of the second term of (22): \[\frac{2\partial_{0}\partial_{r}R}{N\hat{p}R\sqrt{\hat{a}}}=\partial_{R}U+U\frac{\partial_{R}N}{N}.\] The calculation of the third term in (22) is a little bit longer. Recall (see formula (10)) that the mean curvature satisfies \(\hat{p}R=2\sqrt{1-\frac{2m(R(r,t),t)}{R(r,t)}+U^{2}(r,t)}\). Its differentiation with respect to time yields, after using the mass conservation equation (11) and the Einstein equation describing the evolution of \(U=R(\mathrm{trK}-\mathrm{K}_{\mathrm{r}}^{\mathrm{r}})/2\) (see equation (15)): \[\partial_{t}\frac{2}{\hat{p}R}=-\frac{2U}{\hat{p}R}\partial_{R}N. \tag{23}\] Combining the three terms of (22), we arrive at the formula (19). In the case of small spherically symmetric perturbations we can use the splitting (18) of the radial velocity. We immediately arrive at the following corollary. **Conclusion**. Assume a perturbed flat FLRW universe. The trace of the extrinsic curvature of constant time hypersurfaces, in the foliation defined by the assumption of comoving particles, is given by \[\mathrm{trK}=3H+\frac{\partial_{R}\left(R^{2}\delta_{U}\right)}{R^{2}}. \tag{24}\] _Remark. An alternative way to derive the formula (19) is to write down the momentum constraint (i.e., the Einstein equation \(R_{0i}-\frac{R}{2}g_{0i}=8\pi T_{0i}\)), using the metric (1). The \(r\)-component of the constraint can be expressed as (19), in comoving coordinates._ ### The lapse In what follows we need the lapse function \(N\); it can be obtained from (13). We assumed that the pressure is isothermal in perturbed FLRW universes, \(p=c_{s}^{2}\delta\varrho=c_{s}^{2}\varrho_{0}\delta\), where we introduced the mass density contrast \[\delta\equiv\frac{\delta\varrho}{\varrho_{0}}. \tag{25}\]
If the mass density contrast is small, \(\delta\ll 1\), then (13) yields \(\partial_{r}N\approx-c_{s}^{2}\partial_{r}\delta\). Far from the center \(N\to 1\); thus \[N\approx 1-c_{s}^{2}\delta. \tag{26}\] This implies that the time derivative of the areal radius evolves as \[\partial_{0}R=UN\approx\left(HR+\delta_{U}\right)\left(1-c_{s}^{2}\delta\right)\approx HR+\delta_{U}-c_{s}^{2}\delta HR. \tag{27}\] ### Evolution of the mass density contrast We investigate perturbations of (flat) FLRW universes with dust (including dark matter) and dark energy. Let us summarise the relevant information. The material pressure \(p_{0}=0\) and the background energy densities satisfy \(\varrho_{0}+\varrho_{\Lambda}=\frac{3H^{2}}{8\pi}\). The metric scale factor \(a(t)\) of the background metric can be obtained from equations (16). The lapse, up to first order in perturbations, is given by (26), and Eq. (27) now reads \(\partial_{0}R\approx HR+\delta_{U}-c_{s}^{2}\delta HR\). Equation (15) can be written as \[\partial_{0}U=-\frac{m(R)}{R^{2}}N-4\pi\left(c_{s}^{2}\varrho_{0}\delta+p_{\Lambda}\right)RN+\frac{\hat{p}^{2}R^{2}}{4}\partial_{R}N. \tag{28}\] Zeroth order terms (see Section III) drop out. Thus the linear perturbations satisfy the equation \[\frac{1}{a}\partial_{0}\left(a\delta_{U}\right)=-\frac{\delta m(R)}{R^{2}}-c_{s}^{2}\partial_{R}\delta. \tag{29}\] We employed (18) and (26)--(28) in the process of deriving (29). One can show that in the leading order of \(O(\delta)\) the following rule holds \[\partial_{0}\partial_{R}\left(R^{2}\delta_{U}\right)=\partial_{R}\left(\frac{R^{2}}{a}\partial_{0}\left(a\delta_{U}\right)\right). \tag{30}\] The mass density conservation equation is given by (14). Using the expressions derived earlier for the lapse \(N\) and the trace of the extrinsic curvature \(\mathrm{trK}\), we get \[\partial_{0}\delta\varrho+3H\delta\varrho+\frac{\varrho_{0}}{R^{2}}\partial_{R}\left(R^{2}\delta_{U}\right)=0. \tag{31}\] Dividing both sides by \(\varrho_{0}\) and using \(\partial_{0}\varrho_{0}=-3H\varrho_{0}\), we obtain \[\partial_{0}\delta+\frac{1}{R^{2}}\partial_{R}\left(R^{2}\delta_{U}\right)=0. \tag{32}\] Differentiate now both sides of (32) with respect to time, use formula (30) and equation (29). After a straightforward calculation we arrive at \[\partial_{0}^{2}\delta-\frac{c_{s}^{2}}{R^{2}}\partial_{R}\left(R^{2}\partial_{R}\delta\right)-\frac{3}{2}H^{2}\delta+2H\partial_{0}\delta=0. \tag{33}\] Notice also that Eq. (33) is a wave equation -- thus it possesses travelling wave pulses that move within the coordinate sphere that encloses the perturbed initial data. Equation (33) is equivalent to the corresponding Bonnor equation describing the evolution of the mass density contrast [1; 2] when the cosmological constant is absent. In order to see this, perform the Fourier transformation of (33) and insert \(H^{2}=8\pi\varrho_{0}/3\). Then one arrives exactly at the result of Bonnor. Our equation (33) differs from the corresponding equation of Martel (see Eq. (8) in [2]) in the case of the nonzero cosmological constant. The two descriptions differ in the part concerning the evolution of velocity perturbations. In the model of Bonnor the perpendicular velocity components behave like \(\vec{V}_{T}\propto 1/a(t)\)[1]; thus their length has to decrease. In the general relativistic analysis we have only a partly coincident behaviour of velocity perturbations -- \(\partial_{0}(a\delta_{U})\leq 0\), assuming that \(\partial_{R}\delta\geq 0\).
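As a quick consistency check of Eq. (33) (our sketch, not in the original), one can verify symbolically that in the dust limit \(c_{s}=0\), with the Einstein--de Sitter background \(H=2/(3t)\), the growing mode \(\delta=t^{2/3}\) solves the contrast equation:

```python
import sympy as sp

# Dust limit (c_s = 0) of Eq. (33) on the EdS background H = 2/(3t):
# delta'' - (3/2) H^2 delta + 2 H delta' should vanish for delta = t^(2/3).
t = sp.symbols('t', positive=True)
H = sp.Rational(2, 3) / t
delta = t**sp.Rational(2, 3)
residual = (sp.diff(delta, t, 2) - sp.Rational(3, 2) * H**2 * delta
            + 2 * H * sp.diff(delta, t))
print(sp.simplify(residual))   # -> 0
```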
In the case of dust-like perturbations -- with the vanishing speed of sound, \(c_{s}^{2}=0\) -- the velocity perturbation \(\delta_{U}\) is strictly decreasing. Positive velocity perturbations decrease at least as fast as the inverse of the scale factor, \(1/a(t)\), but there is no bound on the absolute value of negative velocity disturbances \(\delta_{U}\). ## V The influence of dark energy We shall investigate how dark energy influences the evolution of the mass density contrast \(\delta\) after the end of the recombination epoch, that is for times \(t\geq t_{\rm re}\). We neglect -- as in the whole paper -- the contribution of the radiation energy. The speed of sound \(c_{s}\) is negligible in this period and the evolution equation becomes \[\partial_{0}^{2}\delta-\frac{3}{2}H^{2}\delta+2H\partial_{0}\delta=0. \tag{34}\] #### v.0.1 Absence of dark energy In this case the conformal factor \(a(t)\propto t^{2/3}\) and \(H=2/(3t)\). The increasing solution of (34) reads \(\delta(t)\propto a(t)\propto t^{2/3}\). According to astronomical observations \(a(t_{0})/a(t_{\rm re})\approx 1100\)[13]; here \(t_{0}\) is the present age of the Universe. Thus the mass density contrast \(\delta\) of dust-like perturbations of dust Friedman universes would increase \(1100\) times since the end of the recombination era. #### v.0.2 Including dark energy In this case the coefficients \(H(t)\) and \(H^{2}(t)\) are given by the corresponding solutions of the Friedman equations (see Sec. III); the latter can be solved numerically, assuming dust and the cosmological constant. The evolution equation reads \[\partial_{0}^{2}\delta-\frac{3}{2}H^{2}\delta+2H\partial_{0}\delta=0. \tag{35}\] At the recombination era the material density \(\varrho\) exceeds the dark energy density \(\varrho_{\Lambda}\) by a factor of the order of \(10^{8}\). Thus as initial data we can choose \[\delta(t_{\rm re})=t_{\rm re}^{2/3},\qquad\frac{d\delta}{dt}\Big{|}_{t_{\rm re}}=\frac{2}{3t_{\rm re}^{1/3}} \tag{36}\] -- these are the data dictated by the solution \(\delta(t)\propto a(t)\), valid in the case of no dark energy. The solution of Eq. (35) with initial data (36) is very close to \(\delta(t)=t^{2/3}\); the difference becomes clear at relatively late times \(t\geq t_{0}/10\)[14]. Assuming a flat universe with present data \(\Omega_{\rm d}(t_{0})=0.3\) and \(\Omega_{\Lambda}(t_{0})=0.7\), one gets \(\delta(t_{0})/\delta(t_{\rm re})\approx 975\)[14]. The cosmological constant slows down the formation of bound structures; its influence is comparable to that obtained from the equation of Martel -- see [13].
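The numerical solution quoted above is easy to reproduce. The following is a minimal sketch (our code, not the authors'; it assumes units with \(H_{0}=1\) and the standard analytic dust+\(\Lambda\) scale factor \(a\propto\sinh^{2/3}\big(\tfrac{3}{2}\sqrt{\Omega_{\Lambda}}\,t\big)\), neither of which is specified in the paper) that integrates Eq. (35) with the initial data (36):

```python
import numpy as np
from scipy.integrate import solve_ivp

Om, OL = 0.3, 0.7                       # present-day density parameters
k = 1.5 * np.sqrt(OL)

def H(t):
    # Hubble parameter of the flat dust+Lambda background, a ~ sinh^(2/3)(k t).
    return np.sqrt(OL) / np.tanh(k * t)

t0 = np.arcsinh(np.sqrt(OL / Om)) / k                    # a(t0) = 1
t_re = np.arcsinh(np.sqrt(OL / Om) * 1100.0**-1.5) / k   # a(t_re) = 1/1100

def rhs(t, y):
    d, dd = y                                            # delta and its derivative
    return [dd, 1.5 * H(t)**2 * d - 2.0 * H(t) * dd]     # Eq. (35)

y0 = [t_re**(2 / 3), (2 / 3) * t_re**(-1 / 3)]           # initial data (36)
sol = solve_ivp(rhs, (t_re, t0), y0, rtol=1e-9, atol=1e-12)
print(sol.y[0, -1] / y0[0])   # growth of the contrast since recombination
```

With \(\Omega_{\rm d}=0.3\) and \(\Omega_{\Lambda}=0.7\) the printed growth factor should come out close to the quoted value of \(\approx 975\), against \(1100\) for \(\Lambda=0\).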
2310.16549
Valley Polarization-Electric Dipole Interference and Nonlinear Chiral Selection Rules in Monolayer WSe$_2$
In monolayer transition metal dichalcogenides time-reversal symmetry, combined with space-inversion symmetry, defines the spin-valley degree of freedom. As such, engineering and control of time-reversal symmetry by optical or magnetic fields constitutes the foundation of valleytronics. Here, we propose a new approach for the detection of broken time-reversal symmetry and valley polarization in monolayer WSe$_2$ based on second harmonic generation. Our method can selectively and simultaneously generate and detect a valley polarization at the $\pm K$ valleys of transition metal dichalcogenides at room temperature. Furthermore, it allows to measure the interference between the real and imaginary parts of the intrinsic (electric dipole) and valley terms of the second order nonlinear susceptibility. This work demonstrates the potential and unique capabilities of nonlinear optics as a probe of broken time-reversal symmetry and as a tool for ultrafast and non-destructive valleytronic operations.
Paul Herrmann, Sebastian Klimmer, Till Weickhardt, Anastasios Papavasileiou, Kseniia Mosina, Zdeněk Sofer, Ioannis Paradisanos, Daniil Kartashov, Giancarlo Soavi
2023-10-25T11:03:18Z
http://arxiv.org/abs/2310.16549v1
# Valley Polarization-Electric Dipole Interference and Nonlinear Chiral Selection Rules in Monolayer WSe\({}_{2}\) ###### Abstract In monolayer transition metal dichalcogenides time-reversal symmetry, combined with space-inversion symmetry, defines the spin-valley degree of freedom. As such, engineering and control of time-reversal symmetry by optical or magnetic fields constitutes the foundation of valleytronics. Here, we propose a new approach for the detection of broken time-reversal symmetry and valley polarization in monolayer WSe\({}_{2}\) based on second harmonic generation. Our method can selectively and simultaneously generate and detect a valley polarization at the \(\pm K\) valleys of transition metal dichalcogenides at room temperature. Furthermore, it allows to measure the interference between the real and imaginary parts of the intrinsic (electric dipole) and valley terms of the second order nonlinear susceptibility. This work demonstrates the potential and unique capabilities of nonlinear optics as a probe of broken time-reversal symmetry and as a tool for ultrafast and non-destructive valleytronic operations. ## Main Text Time-reversal (TR) symmetry underlies some of the most exotic phases of condensed matter, including topological insulators and superconductors [1]. In monolayer transition metal dichalcogenides (TMDs), the interplay between space inversion and TR symmetry further defines the valley degree of freedom [2; 3], where direct transitions in momentum space at the \(\pm K\) points of the Brillouin zone are energetically degenerate but non-equivalent. Engineering of TR symmetry in TMDs naturally leads to the field of valleytronics, where the degeneracy of the \(\pm K\) valleys is lifted either by magnetic fields (Zeeman splitting) [4] or with circularly polarized light. The latter approach can be further distinguished between the generation of a real exciton population in one of the valleys _via_ one- [5] or two-photon [6] absorption, or by transient breaking of TR symmetry with coherent processes such as the optical Stark and Bloch-Siegert effects [7; 8]. However, in the vast majority of studies the detection of broken TR symmetry and the consequent valley polarization (VP) has been limited to the realm of linear optics, mainly the detection of polarized photoluminescence (PL) to probe the VP induced by a real excited state population [5; 9] or the detection of the Kerr rotation in a pump-probe configuration to probe valley polarized resident carriers [10; 11] or valley selective coherent states [7; 8]. Both approaches suffer from severe limitations: PL is intrinsically destructive, as it requires recombination of the electron-hole pair and thus the loss of the valley information, while optical Kerr rotation uses a relatively intense and resonant probe pulse (_e.g._, \(100\,\mu\mathrm{W}\) of average power in Ref. [11]), which can significantly perturb the sample under investigation. In addition, both helicity-resolved PL and optical Kerr rotation can only probe the amplitude of the valley imbalance, while they do not measure the complex nature (real and imaginary parts) of the VP-induced elements of the TMD susceptibility tensor. Finally, it is worth noting that both methods require low temperatures to increase the spin relaxation times [12; 13] and thus induce a measurable degree of VP. In this context, nonlinear optics (NLO) can provide distinct advantages.
An all-optical probe of broken TR symmetry based on NLO has been realized in layered [14] and bulk magnets [15], and very recently in various non-magnetic TMDs under the effect of an external magnetic field [16]. Also in the context of valleytronics, a few theoretical [17; 18; 19] and experimental [20; 21; 22] studies have recently demonstrated the advantages of a detection scheme based on second harmonic generation (SHG). All these studies were based on the measurement of a rotation in the SHG polarization ellipse while simultaneously writing the valley state with an elliptically polarized fundamental beam (FB) [20; 21; 22]. On one hand, this approach clearly surpasses the standard methods based on polarized PL and optical Kerr rotation, because SHG is a parametric process and thus ultrafast and non-destructive, especially under the condition where the SH signal at \(2\omega\) is resonant with the exciton transition under investigation, and thus the TMD is fully transparent to the FB at \(\omega\). On the other hand, detection of the VP based on elliptical SHG fails if the polarization of the FB approaches the circular state (which is the most efficient condition for the generation and detection of the VP), because in this case there is no well-defined ellipse rotation to measure. In addition, measurements of the valley SHG with elliptically polarized light are based on the assumption that the VP and electric dipole (ED, otherwise called "intrinsic") terms of the \(\chi^{(2)}\) tensor are in phase, and thus the SH rotation angle is directly proportional to the ratio \(\frac{|\chi^{(2)}_{VP}|}{|\chi^{(2)}_{ED}|}\)[20; 21]. This, again, limits the study of broken TR symmetry in TMDs to the amplitude of the VP tensor, rather than its complex nature. As we will show, this assumption fails in the energy region of excitonic resonances, which are the ideal probe for the VP. In this work, we propose a new approach for all-optical detection of broken TR symmetry and nonlinear valleytronics where we simultaneously generate the VP by an off-resonant, circularly polarized FB using the optical Stark effect, and read it by measuring the resonant SH intensity rather than the polarization rotation angle. This greatly simplifies the detection scheme and enables ultrafast write/read of the VP at ambient temperature. In particular, we measure the ratio between the SH signal emitted for incoming circular _versus_ linear FB polarization and show that this directly probes the nonlinear elements of the \(\chi^{(2)}\) tensor induced by the VP. We further demonstrate that such a measurement can also probe the VP dispersion and the wavelength-dependent relative phase between the VP and ED elements of the \(\chi^{(2)}\) tensor. Based on this, we measure both constructive and destructive SH interference between the VP and ED terms, similar to the SH magnetic-electric dipole interference observed in bulk magnets [15; 23]. This provides a further piece of evidence for the analogies between the VP in TMDs and the magnetic-dipole response of magnets [7], as both are ultimately connected to the more general property of broken TR symmetry. Beyond the scientific interest, a deeper understanding of the VP and ED nonlinear response of TMDs is of paramount importance for the development of the emerging field of nonlinear valleytronics [20].
### Crystal symmetry and nonlinear chiral selection rules The vast majority of NLO experiments on TMDs [24], such as the measurements of crystal orientation [25], number of layers [26], strain [27], ultrafast switching [28] _etc._, are based on the assumption that monolayers belong to the point group (or more precisely the _wave vector group_ [29]) \(D_{3h}\). However, a closer look shows that the wave vector group is \(D_{3h}\) only at the \(\Gamma\) point of the Brillouin zone, while it is \(C_{3h}\) at the \(\pm K\) points [30]. Thus, resonant excitation of the valleys should be more precisely described by the nonlinear elements of the cyclic \(C_{3h}\) tensor, rather than those of the dihedral \(D_{3h}\) group. The elements of the second order susceptibility \(\chi^{(2)}\) for the \(C_{3h}\) point group can be divided into two sub-groups, namely \(\chi^{(2)}_{xxx}=-\chi^{(2)}_{xyy}=-\chi^{(2)}_{yyx}=-\chi^{(2)}_{yxy}\) and \(\chi^{(2)}_{yyy}=-\chi^{(2)}_{yxx}=-\chi^{(2)}_{xxy}=-\chi^{(2)}_{xyx}\), where \(x(y)\) refers to the armchair (zig-zag) axis of the crystal in the case of TMDs. The first subset is identical to the \(D_{3h}\) point group and we will refer to it as the ED (or intrinsic) response (\(\chi^{(2)}_{ED}=\chi^{(2)}_{xxx}\)). These elements can fully describe SHG in TMDs in the case of non-resonant excitation (_e.g._, below-gap virtual states), and thus they represent the crystal (_i.e._, geometrical, intrinsic) response of TMDs. In contrast, the second subset appears only in the \(C_{3h}\) group and must be taken into account in the case of resonant excitation at \(\pm K\). In this regard, the second subset is a direct probe of broken TR symmetry and thus of the VP (\(\chi^{(2)}_{VP}=\chi^{(2)}_{yyy}\)). However, in contrast to a standard \(C_{3h}\) system, the VP elements of the \(\chi^{(2)}\) tensor in TMDs are also chiral, as they must describe the broken space inversion while preserving TR symmetry [31]. This has been observed, for instance, as a rotation angle in opposite directions in recent experiments based on SH by an elliptically polarized FB [20; 21]. Thus, we can further define \(\chi^{(2)}_{VP}=\tau\cdot\chi^{(2)}_{yyy}\), where \(\tau=\pm 1\) at \(\pm K\). This is the nonlinear analogue of the chiral selection rules for absorption and emission of light in TMDs [2; 31]. These observations are consistent with the experimental findings of previous work [32; 33] and have important consequences, as we show schematically in Fig. 1. Figure 1: **Nonlinear selection rules and TR symmetry-breaking in monolayer TMDs a**, A linearly polarized FB (left, black) can be decomposed into right (red) and left (blue) circular components which interact with the \(\mp K\) valleys respectively, emitting counter-rotating SH beams. As no VP is induced, only the ED contributes to the SH. Coherent superposition of the SH contributions from the \(\mp K\) valleys results in linearly polarized SH (black, right). **b**, A left circularly polarized FB (left, blue) interacts only with the \(+K\) valley. Simultaneously, the fundamental induces a VP second order response. Therefore, in addition to the ED (red), also the VP (yellow) contributes to the counter-rotating SH. Coherent superposition of the ED and VP contributions from the \(+K\) valley results in circularly polarized SH (orange, right). If we focus our attention only on resonant excitation at \(\pm K\), we can immediately understand that linear and circular polarization of the FB will probe different symmetries.
In particular, in the case of linear excitation we coherently add up the left and right circular components of the FB, while preserving TR symmetry and not producing any VP, leading to a SH signal described only by the \(\chi^{(2)}_{ED}\) terms (Fig. 1a). Thus, the effective \(\chi^{(2)}\) tensor in the case of linear excitation is identical to that of the \(D_{3h}\) point group and therefore it probes only the ED response of TMDs (as long as no VP is introduced by any other means). In contrast, circular polarization of the FB will simultaneously induce and probe the chiral VP elements of the \(C_{3h}\) tensor, and their interference with the ED-SH signal (Fig. 1b). As we demonstrate in the next section, this fundamental difference can be measured experimentally as the ratio of the SH intensity in the cases of circular _versus_ linear polarization of the FB. ### Second harmonic intensity with linear and circular polarization Based on the previous discussion, we can write the expression of the second order polarization \(\mathbf{P^{(2)}(2\omega)}\) in the two cases of linear and circular FB. For linear excitation, the SH response reads (see Supplementary Information S3.1): \[\mathbf{P^{(2)}(2\omega)}=\begin{pmatrix}P_{x}^{(2)}\\ P_{y}^{(2)}\end{pmatrix}=\epsilon_{0}\begin{pmatrix}\chi^{(2)}_{ED}(E_{x}^{2}-E_{y}^{2})\\ -2\chi^{(2)}_{ED}E_{x}E_{y}\end{pmatrix} \tag{1}\] as expected from a system with \(D_{3h}\) symmetry. Instead, for circular excitation the polarization is (see Supplementary Information S3.2): \[\mathbf{P^{(2)}(2\omega)}=\begin{pmatrix}P_{+}^{(2)}\\ P_{-}^{(2)}\end{pmatrix}=\epsilon_{0}\sqrt{2}\begin{pmatrix}(\chi^{(2)}_{ED}+i\chi^{(2)}_{VP})E_{-}^{2}\\ (\chi^{(2)}_{ED}+i\chi^{(2)}_{VP})E_{+}^{2}\end{pmatrix} \tag{2}\] where \(\mathbf{P^{(2)}_{\pm}}=P^{(2)}_{\pm}\mathbf{\sigma_{\pm}}\) and \(\mathbf{E_{\pm}}=E_{\pm}\mathbf{\sigma_{\pm}}\) define left and right circular polarization of the second order polarization and of the fundamental electric field, with \(\mathbf{\sigma_{\pm}}=\frac{1}{\sqrt{2}}(\mathbf{e_{x}}\pm i\mathbf{e_{y}})\). Equation (2) shows that for circular excitation, the SH polarization is always cross-polarized with respect to the FB [32], as imposed by the conservation of angular momentum in NLO processes [33]. It is very important to highlight that this property has nothing to do with the valley degree of freedom, in contrast to the discussion of seminal reports on valley selection rules for SHG in TMDs [34]. One can easily appreciate this by setting \(\chi^{(2)}_{VP}=0\) in equation (2), and obtaining the same result of cross-polarization between FB and SH. In addition, we note that equation (2) is reminiscent of equation (4) of Ref. [15], with two main differences: (1) the magnetic-dipole contribution is here substituted by the VP term; (2) the pre-factors to \(E_{\pm}^{2}\) are identical in our case, while they have opposite sign for the magnetic-dipole term in Ref. [15]. The latter observation derives from the different sign of \(\tau=\pm 1\) for light of opposite helicity (see Supplementary Information S3.2), namely what we previously defined as the nonlinear chiral selection rule. This also has important consequences for the VP-ED interference, as we discuss in detail in the following.
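The algebra behind equation (2) (the change from the Cartesian to the circular basis, detailed in Supplementary Information S3.2) can be verified with a short symbolic computation; the following sympy sketch is ours and not part of the paper:

```python
import sympy as sp

# Check that the Cartesian C_3h response recast in the circular basis gives
# P_+ = sqrt(2)*(chi_ED - i*tau*chi_VP)*E_-^2, i.e. Eq. (2) once tau is
# resolved for the E_- component (tau = -1).
Ex, Ey = sp.symbols('E_x E_y')
chiED, chiVP, tau = sp.symbols('chi_ED chi_VP tau')

Px = chiED * (Ex**2 - Ey**2) - 2 * tau * chiVP * Ex * Ey
Py = -2 * chiED * Ex * Ey - tau * chiVP * (Ex**2 - Ey**2)

Pp = (Px + sp.I * Py) / sp.sqrt(2)                # circular amplitude P_+
Em2 = (Ex**2 - Ey**2 - 2 * sp.I * Ex * Ey) / 2    # E_-^2 in Cartesian components
print(sp.simplify(Pp - sp.sqrt(2) * (chiED - sp.I * tau * chiVP) * Em2))  # -> 0
```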
Based on equations (1) and (2), the VP can be measured by looking at the ratio \(\eta\) of the SH intensity in the two cases of circular and linear FB polarization (see Supplementary Information S3.3): \[\eta:=\frac{I_{circ}(2\omega)}{I_{lin}(2\omega)}=2\left[1+\frac{|\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}}\right] \tag{3}\] assuming equal intensity of the incident linearly and circularly polarized waves: \(I_{0}^{lin}(\omega)=I_{0}^{circ}(\omega)\). Note that equation (3) is valid only in the simplest case where we neglect the complex nature of the nonlinear tensor \(\chi^{(2)}\). However, already under this simplified assumption, we can immediately observe that the aforementioned ratio is exactly 2 only in the absence of a VP (\(\chi^{(2)}_{VP}=0\)), namely when the system maintains the \(D_{3h}\) symmetry also for circular excitation, as reported for instance in Ref. [32]. However, in the presence of a VP, equation (3) predicts a ratio larger than 2, since linear excitation probes the \(D_{3h}\) symmetry while circular excitation probes the broken TR symmetry \(C_{3h}\). The result becomes even more interesting if we now consider the complex nature of the nonlinear tensor \(\chi^{(2)}\)[15]. In this case, equation (3) becomes (see Supplementary Information S3.3): \[\eta=2\cdot\left[1+\frac{|\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}}+2\,\frac{|\chi^{(2)}_{VP}|}{|\chi^{(2)}_{ED}|}\sin\Delta\varphi\right], \tag{4}\] where we defined the complex elements of the second order nonlinear tensor as \(\chi^{(2)}_{ED/VP}=|\chi^{(2)}_{ED/VP}|\cdot e^{i\varphi_{ED/VP}}\), and thus the last term \(\sin\Delta\varphi\) (with \(\Delta\varphi=\varphi_{ED}-\varphi_{VP}\)) describes the interference between the VP and ED terms of the SH signal and it can lead to a ratio larger or smaller than 2. Note that the interference is maximum if, in a specific wavelength range, one of the terms is real while the other is imaginary (namely a phase shift of \(\Delta\varphi=\pm\pi/2\)), as discussed in the case of SH magnetic-electric dipole interference in bulk magnets [15]. Constructive and destructive interference at \(\pm\pi/2\) (rather than 0 and \(\pi\)) occurs due to the intrinsic phase shift of \(\pi/2\) between the SH originating from \(|\chi^{(2)}_{ED}|\) and \(|\chi^{(2)}_{VP}|\), as can already be observed in equation (2). However, there is also one major difference compared to the results reported in the case of bulk magnets [15], namely the fact that in TMDs left and right circular polarization lead to the same SH intensity (note the symmetric ratio for LCP and RCP in Fig. 2a), as a consequence of the nonlinear chiral selection rules. Before we move to the experimental results, we highlight that the above discussion can be applied in a more general context to probe breaking of TR symmetry in crystals that belong to the \(D_{3h}\) point group, and it could possibly be extended to other crystal symmetries. Figure 2: **Ellipticity dependence of SHG in monolayer WSe\({}_{2}\) across the A:1s exciton resonance a**, Normalized total emitted SH as a function of the ellipticity of the FB at different wavelengths across the \(A\):\(1s\) exciton. **b**, Wavelength-dependent ratio of SHG with circular to linear excitation (left axis, black squares). The grey dashed line indicates a ratio of 2. The orange curve is the normalized PL spectrum of the sample under investigation (emission at the \(A\):\(1s\) resonance).
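As an illustration, Eq. (4) is easy to evaluate and, under the maximal-interference assumption \(\sin\Delta\varphi=\pm 1\) used below to estimate \(\chi^{(2)}_{VP}\), to invert for the amplitude ratio; the following minimal sketch (our code and function names, not from the paper) shows both steps:

```python
import numpy as np

# Ratio eta of circular to linear SH intensity, Eq. (4), for an amplitude
# ratio r = |chi_VP|/|chi_ED| and relative phase dphi = phi_ED - phi_VP.
def eta(r, dphi):
    return 2.0 * (1.0 + r**2 + 2.0 * r * np.sin(dphi))

# Inversion under maximal interference, sin(dphi) = sign(eta - 2): then
# eta/2 = (1 +/- r)^2, and the small-amplitude branch (r < 1) is
def r_from_eta(eta_measured):
    return abs(np.sqrt(eta_measured / 2.0) - 1.0)
```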
### Experimental results In order to demonstrate the features discussed in the previous section, we perform SHG experiments at different wavelengths (SH signal at 690 nm-825 nm, corresponding to a FB in the range 1380 nm-1650 nm) and scan across the \(A\):\(1s\) exciton resonance of a monolayer WSe\({}_{2}\) sample (see Methods for details on sample fabrication and characterization). In our experiments, we tune the ellipticity of the FB (from linear to circular) and detect the total SH intensity (see Methods for details on the experimental setup). Fig. 2a shows the ellipticity-dependent measurements for three selected wavelengths at resonance (750 nm) and away from resonance (790 nm and 720 nm) with respect to the \(A\):\(1s\) exciton of our sample. Here, the FB power is kept constant at 6 mW for all wavelengths. The curves are normalized (_i.e._, we set the SH intensity to 1 for linear excitation) and we paid particular attention to remove any possible contribution from two-photon PL (see Supplementary Information S4), which could alter the ratio of the circular/linear SH intensity. Clearly, this ratio is highly dispersive and can dramatically differ from two (_i.e._, the ratio expected from \(D_{3h}\) symmetry), particularly in the case of resonant SHG. To further highlight this point, in Fig. 2b we plot the wavelength dependence of \(\eta\) (black squares) on top of the linear PL measured on the same sample (orange curve, see Methods and Supplementary Information S2 for details on sample characterization). This ratio displays values both above and below 2 (horizontal dashed line) in correspondence of the exciton transition, with a shape that strongly resembles the derivative of the PL emission. While a detailed study of this shape is beyond the scope of this paper, we can simply observe that any nonlinear response can be decomposed, within the simple classical nonlinear harmonic oscillator model, as the product of linear susceptibilities (_i.e._, Miller's rule and coefficient [35]). This can qualitatively explain why the dispersion reported in Fig. 2b resembles the imaginary part of the linear dielectric constant in the same energy region, as measured for instance in differential reflectivity measurements that probe the exciton Rydberg states [36; 37]. On the other hand, in the previous section we have demonstrated that while a ratio \(>2\) can be explained without considering the complex nature of the \(\chi^{(2)}\) nonlinear tensor (equation (3)), a ratio \(<2\) can only be explained by destructive interference between the VP and ED elements (equation (4)). This leads to the conclusion that the VP and ED terms of the \(\chi^{(2)}\) tensor are out of phase close to the \(A\):\(1s\) exciton resonance, in contrast to the hypothesis of previous reports [20; 21; 22]. If we assume perfect constructive (destructive) interference between the ED and VP terms in this wavelength region, _e.g._ the ED term is purely imaginary with a 180\({}^{\circ}\) phase shift at resonance (as predicted by the Lorentz model [38]) while the VP term is purely real with no phase shift, we can set the interference term in equation (4) to \(\sin\Delta\varphi=\pm 1\) and thus calculate a value of the \(\chi^{(2)}_{VP}\) of \(\sim\) 41 pm V\({}^{-1}\) and \(\sim\) 29 pm V\({}^{-1}\) at 730 nm and 750 nm, respectively (see Supplementary Information S5 for details), which correspond to \(\sim\) 28 % and \(\sim\) 18 % of the \(\chi^{(2)}_{ED}\) at the same wavelengths.
To the best of our knowledge, this work provides the first theoretical and experimental study of the complex values (real and imaginary parts) of the \(\chi^{(2)}_{VP}\) and their dispersion. Finally, Fig. 3a shows the power dependence of \(\eta\) for three exemplary SH wavelengths, namely 750 nm, 790 nm and 720 nm, while Fig. 3b shows the slope (obtained by linear fitting of the curves in Fig. 3a) of \(\eta\) at wavelengths across the \(A\):1\(s\) exciton region of our sample. Here, the dotted horizontal line (slope = 0) corresponds to a ratio that is independent of power. Figure 3: **Power dependence of the SHG ratio a**, The ratio of circular to linear SH depends linearly on the FB power for SH wavelengths of 750 nm (red circles) and 720 nm (blue triangles), while it is independent of power for 790 nm (green diamonds). The grey dashed line marks a ratio of 2. **b**, Slope of the linear power dependencies (black squares, left axis) from **a** across the exciton resonance. The grey dashed line indicates a slope of 0, _i.e._, \(\eta\) is power-independent. The orange curve is the normalized PL spectrum of the sample under investigation (emission at the \(A\):1\(s\) resonance). In the absence of VP, the ratio should indeed always be equal to 2 and independent of power (equation (3)), since the \(\chi^{(2)}_{ED}\) probes an intrinsic property of the crystal. This is the case for wavelengths below the exciton resonance, see _e.g._ Fig. 2a and Fig. 3b for wavelengths \(>\)780 nm (ratio \(\sim\,2\) and slope \(\sim\,0\)). In contrast, in our experiments the VP term is a linear function of the FB power (\(\chi^{(2)}_{VP}\sim\,I_{\omega}\)), as it originates from the breaking and tuning of TR symmetry induced by the optical Stark shift due to the off-resonant FB, as we already observed and discussed in Ref. [20]. Note that the optical Stark effect neither involves nor requires a real excited state population, in contrast to the valley imbalance produced by two-photon absorption. In addition, this linear power dependence of \(\eta\) confirms that the interference term in equation (4) dominates over the quadratic term \(\frac{|\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}}\), because \(|\chi^{(2)}_{VP}|\sim\,I_{\omega}\) and thus \(|\chi^{(2)}_{VP}|^{2}\sim\,I^{2}_{\omega}\). This is in agreement with the observation that \(\frac{|\chi^{(2)}_{VP}|}{|\chi^{(2)}_{ED}|}\ll 1\), namely \(\sim\,\)28 % (\(\sim\,\)18 %) at 730 nm (750 nm) and 6 mW of average FB power, and thus \(\frac{|\chi^{(2)}_{VP}|}{|\chi^{(2)}_{ED}|}\gg\frac{|\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}}\) in the power range of our experiments (Fig. 3). ## Discussion In conclusion, we established a new method based on circular second harmonic generation to probe the valley degree of freedom in TMDs, and more generally to probe the breaking of time-reversal symmetry in crystals belonging to the \(D_{3h}\) point group. We demonstrated that in such crystals the ratio between the circular and linear SH intensities can directly probe the valley-induced nonlinear susceptibility \(\chi^{(2)}_{VP}\) and we measured its dispersion in the wavelength region of the \(A\):\(1s\) exciton of a monolayer WSe\({}_{2}\) sample. From this, we could estimate values of \(\chi^{(2)}_{VP}\sim\,\)41 pm V\({}^{-1}\) and \(\sim\,\)29 pm V\({}^{-1}\) at 730 nm and 750 nm respectively, which correspond to a large fraction (\(\sim\,\)28 % and \(\sim\,\)18 %, respectively) of the intrinsic electric-dipole nonlinear response \(\chi^{(2)}_{ED}\).
Moreover, our approach provides direct access to the relative phase between the electric dipole and valley polarization generated second harmonics. This phase shift manifests itself in constructive and destructive interference across the \(A\):\(1s\) exciton resonance, in contrast to the commonly accepted assumption that the two terms are in phase at resonance. Finally, we have shown that while the \(\chi^{(2)}_{ED}\) is independent of the excitation power, the \(\chi^{(2)}_{VP}\) scales linearly with power in our experiments, confirming that here time-reversal symmetry is broken due to the coherent optical Stark effect. This work demonstrates the unique capabilities of nonlinear optics as a probe of broken time-reversal symmetry and of the valley degree of freedom in TMDs, and thus offers new insights for the development of nonlinear valleytronics, where parametric nonlinear optical processes can be used to probe valleys on ultrafast timescales and without perturbing the system. ### Online methods #### Sample preparation and characterization We mechanically exfoliate a monolayer of WSe\({}_{2}\) from a bulk crystal (see Supplementary Information S1) onto PDMS and transfer it onto a transparent fused silica substrate. The monolayer nature of our sample is confirmed by optical contrast, PL, Raman and SHG (see Supplementary Information S2). #### Polarization resolved SHG For the SHG measurements we use a home-made multiphoton microscope, which we operate in transmission geometry (see Supplementary Information S2). The FB is generated by an optical parametric oscillator (Levante IR fs from APE), pumped by the output of a Yb-doped mode-locked laser (FLINT FL2-12, Light Conversion) with a repetition rate of 76 MHz and a pulse length of \(\sim\) 100 fs. This allows tuning of the FB from 1300 nm to 2000 nm. Before entering the microscope, a combination of a half-wave plate (AHWP05M-1600, Thorlabs) and a quarter-wave plate (#46-562, Edmund Optics), both mounted in motorized rotation mounts (RSP05/M, Thorlabs), allows us to fully control the polarization state of the FB. Subsequently, the FB is focussed onto the sample by a 40x objective (LMM-40X-P01, Thorlabs) and the transmitted FB as well as the generated SH are collimated by a lens (C330TMD, Thorlabs). The transmitted FB is blocked by a shortpass filter (FESH0950, Thorlabs) and the SH is spectrally filtered by bandpass filters. Finally, we detect the SH with a silicon avalanche photodiode (APD440A, Thorlabs) and a lock-in amplifier (HF2LI, Zurich Instruments). ## Acknowledgments This work was funded by the German Research Foundation DFG (CRC 1375 NOA), project number 398816777 (subproject C4); the International Research Training Group (IRTG) 2675 "Meta-Active", project number 437527638 (subproject A4); and by the Federal Ministry for Education and Research (BMBF) project number 16KIS1792 SINNER. Z.S. acknowledges the ERC-CZ program (project LL2101) from the Ministry of Education Youth and Sports (MEYS). ## References * (1) Sato, M. & Ando, Y. Topological superconductors: a review. _Reports on Progress in Physics_**80**, 076501 (2017). URL [https://dx.doi.org/10.1088/1361-6633/aa6ac7](https://dx.doi.org/10.1088/1361-6633/aa6ac7). * (2) Liu, G.-B., Xiao, D., Yao, Y., Xu, X. & Yao, W. Electronic structures and theoretical modelling of two-dimensional group-VIB transition metal dichalcogenides. _Chem. Soc. Rev._**44**, 2643-2663 (2015). URL [http://dx.doi.org/10.1039/C4CS00301B](http://dx.doi.org/10.1039/C4CS00301B). * (3) Vitale, S. A.
_et al._ Valleytronics: Opportunities, challenges, and paths forward. _Small_**14**, 1801483 (2018). URL [https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.201801483](https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.201801483). * (4) MacNeill, D. _et al._ Breaking of valley degeneracy by magnetic field in monolayer \(\mathrm{MoSe}_{2}\). _Phys. Rev. Lett._**114**, 037401 (2015). URL [https://link.aps.org/doi/10.1103/PhysRevLett.114.037401](https://link.aps.org/doi/10.1103/PhysRevLett.114.037401). * (5) Mak, K. F., He, K., Shan, J. & Heinz, T. F. Control of valley polarization in monolayer \(\mathrm{MoS}_{2}\) by optical helicity. _Nature Nanotechnology_**7**, 494-498 (2012). URL [https://doi.org/10.1038/nnano.2012.96](https://doi.org/10.1038/nnano.2012.96). * (6) Wang, G. _et al._ Giant enhancement of the optical second-harmonic emission of \(\mathrm{WSe}_{2}\) monolayers by laser excitation at exciton resonances. _Phys. Rev. Lett._**114**, 097403 (2015). URL [https://link.aps.org/doi/10.1103/PhysRevLett.114.097403](https://link.aps.org/doi/10.1103/PhysRevLett.114.097403). * (7) Kim, J. _et al._ Ultrafast generation of pseudo-magnetic field for valley excitons in \(\mathrm{WSe}_{2}\) monolayers. _Science_**346**, 1205-1208 (2014). URL [https://www.science.org/doi/pdf/10.1126/science.1258122](https://www.science.org/doi/pdf/10.1126/science.1258122). * (8) Sie, E. J. _et al._ Large, valley-exclusive Bloch-Siegert shift in monolayer WS\({}_{2}\). _Science_**355**, 1066-1069 (2017). URL [https://www.science.org/doi/pdf/10.1126/science.aal2241](https://www.science.org/doi/pdf/10.1126/science.aal2241). * (9) Zeng, H., Dai, J., Yao, W., Xiao, D. & Cui, X. Valley polarization in MoS\({}_{2}\) monolayers by optical pumping. _Nature Nanotechnology_**7**, 490-493 (2012). URL [https://doi.org/10.1038/nnano.2012.95](https://doi.org/10.1038/nnano.2012.95). * (10) Yang, L. _et al._ Long-lived nanosecond spin relaxation and spin coherence of electrons in monolayer MoS\({}_{2}\) and WS\({}_{2}\). _Nature Physics_**11**, 830-834 (2015). URL [https://doi.org/10.1038/nphys3419](https://doi.org/10.1038/nphys3419). * (11) Hsu, W.-T. _et al._ Optically initialized robust valley-polarized holes in monolayer WSe\({}_{2}\). _Nature Communications_**6**, 8963 (2015). URL [https://doi.org/10.1038/ncomms9963](https://doi.org/10.1038/ncomms9963). * (12) Paradisanos, I. _et al._ Prominent room temperature valley polarization in WS\({}_{2}\)/graphene heterostructures grown by chemical vapor deposition. _Applied Physics Letters_**116**, 203102 (2020). URL [https://pubs.aip.org/aip/apl/article-pdf/doi/10.1063/5.0002396/13167372/203102_1_online.pdf](https://pubs.aip.org/aip/apl/article-pdf/doi/10.1063/5.0002396/13167372/203102_1_online.pdf). * (13) Glazov, M. M. _et al._ Spin and valley dynamics of excitons in transition metal dichalcogenide monolayers. _physica status solidi (b)_**252**, 2349-2362 (2015). URL [https://onlinelibrary.wiley.com/doi/pdf/10.1002/pssb.201552211](https://onlinelibrary.wiley.com/doi/pdf/10.1002/pssb.201552211). * (14) Sun, Z. _et al._ Giant nonreciprocal second-harmonic generation from antiferromagnetic bilayer CrI\({}_{3}\). _Nature_**572**, 497-501 (2019). URL [https://doi.org/10.1038/s41586-019-1445-3](https://doi.org/10.1038/s41586-019-1445-3). * (15) Fiebig, M., Frohlich, D., Krichevtsov, B. & Pisarev, R. V. Second harmonic generation and magnetic-dipole-electric-dipole interference in antiferromagnetic Cr\({}_{2}\)O\({}_{3}\). _Physical Review Letters_**73**, 2127 (1994). 
* (16) Wu, S. _et al._ Extrinsic nonlinear Kerr rotation in topological materials under a magnetic field. _ACS Nano_**17**, 18905-18913 (2023). PMID: 37767802, URL [https://doi.org/10.1021/acsnano.3c04153](https://doi.org/10.1021/acsnano.3c04153). * (17) Wehling, T. O., Huber, A., Lichtenstein, A. I. & Katsnelson, M. I. Probing of valley polarization in graphene via optical second-harmonic generation. _Phys. Rev. B_**91**, 041404 (2015). URL [https://link.aps.org/doi/10.1103/PhysRevB.91.041404](https://link.aps.org/doi/10.1103/PhysRevB.91.041404). * (18) Cheng, J. _et al._ Chiral selection rules for multi-photon processes in two-dimensional honeycomb materials. _Opt. Lett._**44**, 2141-2144 (2019). URL [https://opg.optica.org/ol/abstract.cfm?URI=ol-44-9-2141](https://opg.optica.org/ol/abstract.cfm?URI=ol-44-9-2141). * (19) Hipolito, F. & Pereira, V. M. Second harmonic spectroscopy to optically detect valley polarization in 2D materials. _2D Materials_**4**, 021027 (2017). URL [https://dx.doi.org/10.1088/2053-1583/aa6f4d](https://dx.doi.org/10.1088/2053-1583/aa6f4d). * (20) Herrmann, P. _et al._ Nonlinear all-optical coherent generation and read-out of valleys in atomically thin semiconductors. _Small_**19**, 2301126 (2023). URL [https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.202301126](https://onlinelibrary.wiley.com/doi/pdf/10.1002/smll.202301126). * (21) Ho, Y. W. _et al._ Measuring valley polarization in two-dimensional materials with second-harmonic spectroscopy. _ACS Photonics_**7**, 925-931 (2020). URL [https://doi.org/10.1021/acsphotonics.0c00174](https://doi.org/10.1021/acsphotonics.0c00174). * (22) Mouchliadis, L. _et al._ Probing valley population imbalance in transition metal dichalcogenides via temperature-dependent second harmonic generation imaging. _npj 2D Materials and Applications_**5**, 6 (2021). URL [https://doi.org/10.1038/s41699-020-00183-z](https://doi.org/10.1038/s41699-020-00183-z). * (23) Toyoda, S., Fiebig, M., Arima, T.-h., Tokura, Y. & Ogawa, N. Nonreciprocal second harmonic generation in a magnetoelectric material. _Science Advances_**7**, eabe2793 (2021). * (24) Dogadov, O., Trovatello, C., Yao, B., Soavi, G. & Cerullo, G. Parametric nonlinear optics with layered materials and related heterostructures. _Laser & Photonics Reviews_**16**, 2100726 (2022). URL [https://onlinelibrary.wiley.com/doi/pdf/10.1002/lpor.202100726](https://onlinelibrary.wiley.com/doi/pdf/10.1002/lpor.202100726). * (25) Malard, L. M., Alencar, T. V., Barboza, A. P. M., Mak, K. F. & de Paula, A. M. Observation of intense second harmonic generation from MoS\({}_{2}\) atomic crystals. _Phys. Rev. B_**87**, 201401 (2013). URL [https://link.aps.org/doi/10.1103/PhysRevB.87.201401](https://link.aps.org/doi/10.1103/PhysRevB.87.201401). * (26) Li, Y. _et al._ Probing symmetry properties of few-layer MoS\({}_{2}\) and h-BN by optical second-harmonic generation. _Nano Letters_**13**, 3329-3333 (2013). PMID: 23718906, URL [https://doi.org/10.1021/nl401561r](https://doi.org/10.1021/nl401561r). * (27) Mennel, L. _et al._ Optical imaging of strain in two-dimensional crystals. _Nature Communications_**9**, 516 (2018). URL [https://doi.org/10.1038/s41467-018-02830-y](https://doi.org/10.1038/s41467-018-02830-y). * (28) Klimmer, S. _et al._ All-optical polarization and amplitude modulation of second-harmonic generation in atomically thin semiconductors. _Nature Photonics_**15**, 837-842 (2021). URL [https://doi.org/10.1038/s41566-021-00859-y](https://doi.org/10.1038/s41566-021-00859-y).
* (29) Dresselhaus, M., Dresselhaus, G. & Jorio, A. _Group Theory: Application to the Physics of Condensed Matter_ (Springer Berlin Heidelberg, 2007). URL [https://doi.org/10.1007/978-3-540-32899-5](https://doi.org/10.1007/978-3-540-32899-5). * (30) Fajardo, E. & Winkler, R. Effective dynamics of two-dimensional Bloch electrons in external fields derived from symmetry. _Physical Review B_**100**, 125301 (2019). * (31) Xiao, D., Liu, G.-B., Feng, W., Xu, X. & Yao, W. Coupled spin and valley physics in monolayers of MoS\({}_{2}\) and other group-VI dichalcogenides. _Phys. Rev. Lett._**108**, 196802 (2012). URL [https://link.aps.org/doi/10.1103/PhysRevLett.108.196802](https://link.aps.org/doi/10.1103/PhysRevLett.108.196802). * (32) Saynatjoki, A. _et al._ Ultra-strong nonlinear optical processes and trigonal warping in MoS\({}_{2}\) layers. _Nature Communications_**8**, 893 (2017). URL [https://doi.org/10.1038/s41467-017-00749-4](https://doi.org/10.1038/s41467-017-00749-4). * (33) Bloembergen, N. Conservation laws in nonlinear optics\(*\). _J. Opt. Soc. Am._**70**, 1429-1436 (1980). URL [https://opg.optica.org/abstract.cfm?URI=josa-70-12-1429](https://opg.optica.org/abstract.cfm?URI=josa-70-12-1429). * (34) Seyler, K. L. _et al._ Electrical control of second-harmonic generation in a WSe\({}_{2}\) monolayer transistor. _Nature Nanotechnology_**10**, 407-411 (2015). URL [https://doi.org/10.1038/nnano.2015.73](https://doi.org/10.1038/nnano.2015.73). * (35) Boyd, R. W. _Nonlinear optics_ (Academic Press, 2020). * (36) Chernikov, A. _et al._ Exciton binding energy and nonhydrogenic Rydberg series in monolayer WS\({}_{2}\). _Phys. Rev. Lett._**113**, 076802 (2014). URL [https://link.aps.org/doi/10.1103/PhysRevLett.113.076802](https://link.aps.org/doi/10.1103/PhysRevLett.113.076802). * (37) Wang, G. _et al._ Colloquium: Excitons in atomically thin transition metal dichalcogenides. _Rev. Mod. Phys._**90**, 021001 (2018). URL [https://link.aps.org/doi/10.1103/RevModPhys.90.021001](https://link.aps.org/doi/10.1103/RevModPhys.90.021001). * (38) Lorentz, H. A. _Versuch Einer Theorie der Electrischen und Optischen Erscheinungen in Bewegten Korpern_, 1-138 (Springer Netherlands, Dordrecht, 1937). URL [https://doi.org/10.1007/978-94-015-3445-1_1](https://doi.org/10.1007/978-94-015-3445-1_1). **SUPPLEMENTARY INFORMATION** Valley Polarization-Electric Dipole Interference and Nonlinear Chiral Selection Rules in Monolayer WSe\({}_{2}\) **AUTHOR LIST** Paul Herrmann\({}^{1}\), Sebastian Klimmer\({}^{1,2}\), Till Weickhardt\({}^{1}\), Anastasios Papavasileiou\({}^{3}\), Kseniia Mosina\({}^{3}\), Zdeněk Sofer\({}^{3}\), Ioannis Paradisanos\({}^{4}\), Daniil Kartashov\({}^{5,6}\) and Giancarlo Soavi\({}^{1,6,\star}\) **AFFILIATIONS** \({}^{1}\)Institute of Solid State Physics, Friedrich Schiller University Jena, Helmholtzweg 5, 07743 Jena, Germany \({}^{2}\)ARC Centre of Excellence for Transformative Meta-Optical Systems, Department of Electronic Materials Engineering, Research School of Physics, The Australian National University, Canberra, ACT, 2601, Australia \({}^{3}\)Department of Inorganic Chemistry, University of Chemistry and Technology, Technicka 5, Prague, 166 28 Czech Republic \({}^{4}\)Institute of Electronic Structure and Laser, Foundation for Research and Technology, N.
Plastira 100, Vassilika Vouton, 70013 Heraklion, Crete, Greece \({}^{5}\)Institute of Optics and Quantum Electronics, Friedrich Schiller University Jena, Max-Wien-Platz 1, 07743 Jena, Germany \({}^{6}\)Abbe Center of Photonics, Friedrich Schiller University Jena, Albert-Einstein-Strasse 6, 07745 Jena, Germany \({}^{\star}\) [email protected] ## S1 Synthesis of bulk tungsten diselenide crystals The synthesis of WSe\({}_{2}\) was performed by chemical vapor transport in a quartz glass ampoule from tungsten (99.999 %, -100 mesh, China Rhenium Co., Ltd, China) and selenium (99.9999 % granules 1-6 mm, Wuhan Xinrong New Material Co., Ltd., China) in the stoichiometric amount corresponding to 100 g of WSe\({}_{2}\). In addition, an excess of 2 at.% of selenium, 0.5 g of SeCl\({}_{4}\) (99.9 %, rough crystalline powder, Strem, USA) and 0.5 g of iodine (99.9 %, granules, Fisher Scientific, USA) were added in a glovebox to the ampoule (50x250 mm, wall thickness 3 mm), and the ampoule was sealed by an oxygen-hydrogen welding torch under high vacuum (below \(1\times 10^{-3}\) Pa) using a diffusion pump with a liquid nitrogen trap. The sealed ampoule was first placed in a muffle furnace and heated to 500 \({}^{\circ}\)C for 25 hours, 600 \({}^{\circ}\)C for 50 hours and finally 800 \({}^{\circ}\)C for 50 hours. The heating and cooling rate was 1 \({}^{\circ}\)C min\({}^{-1}\). The ampoule with the formed WSe\({}_{2}\) powder was placed in a two-zone horizontal furnace. First, the growth zone was heated to 1000 \({}^{\circ}\)C and the source zone to 800 \({}^{\circ}\)C. After 2 days the thermal gradient was reversed, as the source zone was kept at 1000 \({}^{\circ}\)C while the growth zone was kept at 900 \({}^{\circ}\)C for 10 days. During the cooling the thermal gradient was reversed for 2 hours, in order to remove the transport medium and volatile compounds. The ampoule was opened in an argon-filled glovebox. ## S2 Sample fabrication, characterization and setup The sample was prepared by mechanical exfoliation of a bulk crystal produced with the procedure described in Section S1. To confirm the monolayer nature of our sample, we measured the PL (Fig. S1a) under excitation with a CW laser (\(\lambda_{exc}=532\) nm). The strong PL is characteristic of monolayers, as the bandgap shifts from direct in the monolayer case to indirect in the bilayer-to-bulk case [1]. We observe the PL maximum at \(\sim\) 743 nm, corresponding to the A exciton in WSe\({}_{2}\), in agreement with the literature [2]. In addition, we measure the Raman spectrum of our sample (Fig. S1b), which shows the degenerate E' and A\({}_{1}\)' modes characteristic of monolayer WSe\({}_{2}\) at \(\sim\) 249 cm\({}^{-1}\) and a second-order 2LA mode at \(\sim\) 260 cm\({}^{-1}\)[3]. Finally, we performed power dependent SHG measurements (Fig. S1c). The data show a power scaling with an exponent of nearly 2, which is characteristic of second-order nonlinear processes. Due to crystal symmetry, SHG in TMDs is possible only for odd numbers of layers [4]. Since we can safely rule out a trilayer from the PL spectroscopy and optical contrast, we can further confirm the monolayer nature of our WSe\({}_{2}\) sample. For the polarization-resolved SHG measurements we used a home-made multiphoton microscope (Fig. S2). A combination of a half- and quarter-wave plate gives us complete control over the polarization state of the incident fundamental beam (FB).
The generated SH signal is collected in transmission geometry, spectrally filtered and subsequently detected with a silicon avalanche photodiode. ## S3 Second Harmonic Generation in TMD Monolayers In this section we first review the intrinsic second order nonlinear response, _i.e._, in the absence of a valley polarization (VP), of a TMD monolayer. Following that, we generalize the response to include the valley susceptibility, carefully analyzing the resulting modified second order nonlinear polarization. Finally, we derive the ratio \(\eta\) of circular to linear SHG depending on the valley susceptibility. ### S3.1 Intrinsic response As mentioned in the main text, TMD monolayers in the absence of a VP belong to the \(D_{3h}\) point group. This point group has four non-independent elements of the second order nonlinear susceptibility, namely \(\chi^{(2)}_{ED}=\chi^{(2)}_{xxx}=-\chi^{(2)}_{xyy}=-\chi^{(2)}_{yyx}=-\chi^{(2)}_{yxy}\), where \(x\) (\(y\)) refers to the armchair (zig-zag) axis of the crystal. We label this the ED (electric dipole or intrinsic) response of a TMD monolayer, as it describes its crystal (broken space-inversion) symmetry. In the contracted notation the second order nonlinear polarization is then given by: \[\mathbf{P^{(2)}}=\begin{pmatrix}P^{(2)}_{x}\\ P^{(2)}_{y}\end{pmatrix}=\epsilon_{0}\begin{pmatrix}\chi^{(2)}_{ED}&-\chi^{(2)}_{ED}&0\\ 0&0&-\chi^{(2)}_{ED}\end{pmatrix}\cdot\begin{pmatrix}E^{2}_{x}\\ E^{2}_{y}\\ 2E_{x}E_{y}\end{pmatrix}=\epsilon_{0}\begin{pmatrix}\chi^{(2)}_{ED}\,(E^{2}_{x}-E^{2}_{y})\\ -2\chi^{(2)}_{ED}\,E_{x}E_{y}\end{pmatrix} \tag{1}\] where we neglect the \(z\)-component, since the light propagates along the \(z\)-axis and the TMD monolayer has vanishing thickness. ### S3.2 Adding the VP-induced response In the presence of a VP, the symmetry of the TMD monolayer is reduced from \(D_{3h}\) to \(C_{3h}\) [5], adding the following four non-independent elements to the total susceptibility: \(\chi^{(2)}_{VP}=\chi^{(2)}_{yyy}=-\chi^{(2)}_{yxx}=-\chi^{(2)}_{xxy}=-\chi^{(2)}_{xyx}\), which we label as the VP susceptibility. This VP susceptibility is a sum of the contributions from the \(\pm K\) valleys, \(\chi^{(2)}_{VP}=\chi^{(2)}_{VP}(K)+\chi^{(2)}_{VP}(-K)\); however, the \(\pm K\) valleys can only be addressed by left/right circularly polarized light, respectively. Furthermore, the \(\pm K\) valleys are connected by time-reversal symmetry, resulting in valley susceptibilities of equal magnitude but opposite sign: \(\chi^{(2)}_{VP}(K)=-\chi^{(2)}_{VP}(-K)\). Therefore the total second order nonlinear polarization in the presence of a VP can be written as: \[\mathbf{P^{(2)}}=\begin{pmatrix}P_{x}^{(2)}\\ P_{y}^{(2)}\end{pmatrix}=\epsilon_{0}\begin{pmatrix}\chi_{ED}^{(2)}(E_{x}^{2}-E_{y}^{2})-2\tau\chi_{VP}^{(2)}E_{x}E_{y}\\ -2\chi_{ED}^{(2)}E_{x}E_{y}-\tau\chi_{VP}^{(2)}(E_{x}^{2}-E_{y}^{2})\end{pmatrix}. \tag{2}\] Note the pre-factor \(\tau\) in front of the VP term, due to the aforementioned selection rule and symmetry. We can gain better insight into this by switching from the linear \(\mathbf{e_{x}}/\mathbf{e_{y}}\) basis to the circular \(\mathbf{\sigma_{+}}/\mathbf{\sigma_{-}}\) basis, using the definition \(\mathbf{\sigma_{\pm}}=\frac{1}{\sqrt{2}}(\mathbf{e_{x}}\pm i\mathbf{e_{y}})\). In this circular basis the electric field can be written as \(\mathbf{E}=E_{+}\mathbf{\sigma_{+}}+E_{-}\mathbf{\sigma_{-}}\) with amplitudes \[E_{\pm}=\frac{1}{\sqrt{2}}(E_{x}\pm iE_{y}) \tag{3}\] of the left/right circular components. 
Analogously, the second order nonlinear polarization can be represented in this circular basis (\(\mathbf{P^{(2)}}=P_{+}^{(2)}\mathbf{\sigma_{+}}+P_{-}^{(2)}\mathbf{\sigma_{-}}\)) with the left/right circular amplitudes \(P_{\pm}^{(2)}=\frac{1}{\sqrt{2}}(P_{x}^{(2)}\pm iP_{y}^{(2)})\). Therefore, conversion of the polarization into the circular basis leads to: \[\mathbf{P^{(2)}}=\begin{pmatrix}P_{+}\\ P_{-}\end{pmatrix}=\epsilon_{0}\frac{1}{\sqrt{2}}\begin{pmatrix}\chi_{ED}^{(2)}(E_{x}^{2}-E_{y}^{2}-2iE_{x}E_{y})-i\tau\chi_{VP}^{(2)}(E_{x}^{2}-E_{y}^{2}-2iE_{x}E_{y})\\ \chi_{ED}^{(2)}(E_{x}^{2}-E_{y}^{2}+2iE_{x}E_{y})+i\tau\chi_{VP}^{(2)}(E_{x}^{2}-E_{y}^{2}+2iE_{x}E_{y})\end{pmatrix}. \tag{4}\] We can now substitute \(E_{\pm}^{2}=\frac{1}{2}(E_{x}^{2}-E_{y}^{2}\pm 2iE_{x}E_{y})\) from equation (3) and finally obtain: \[\mathbf{P^{(2)}}=\begin{pmatrix}P_{+}\\ P_{-}\end{pmatrix}=\epsilon_{0}\sqrt{2}\begin{pmatrix}(\chi_{ED}^{(2)}-i\tau\chi_{VP}^{(2)})E_{-}^{2}\\ (\chi_{ED}^{(2)}+i\tau\chi_{VP}^{(2)})E_{+}^{2}\end{pmatrix}=\epsilon_{0}\sqrt{2}\begin{pmatrix}(\chi_{ED}^{(2)}+i\chi_{VP}^{(2)})E_{-}^{2}\\ (\chi_{ED}^{(2)}+i\chi_{VP}^{(2)})E_{+}^{2}\end{pmatrix}. \tag{5}\] Note that in the last step we resolved \(\tau\) according to \(\tau(E_{\pm})=\pm 1\), as discussed above. ### S3.3 Measuring VP Now we focus on the total emitted SHG for linear and circular polarization. For linear excitation the electric field is given by \(\mathbf{E}=E_{0}\cdot\left(\cos\left(\theta\right)\mathbf{e_{x}}+\sin\left(\theta\right)\mathbf{e_{y}}\right)\), where \(\theta\) defines the angle between the AC-axis of the crystal and the polarization axis of the FB. The resulting SH is then proportional to the absolute square of the second order nonlinear polarization (\(I_{SHG}\propto|\mathbf{P^{(2)}}|^{2}\)). In the linear case, no VP is induced and thus the second order nonlinear polarization is given by equation (1). In this case, the total emitted SH intensity \[I_{lin}(2\omega)\propto|\chi^{(2)}_{ED}|^{2}|E_{0}|^{4} \tag{6}\] is proportional to the fourth power of the fundamental field, _i.e._, to the square of the intensity of the fundamental field (\(|E_{0}|^{4}=I(\omega)^{2}\)), and to the squared modulus of the ED susceptibility, \(|\chi^{(2)}_{ED}|^{2}\). However, as discussed in the main text, changing the excitation to circular induces a VP. We take this into account by using equation (5). For circularly polarized light the total emitted SH \[I_{circ}(2\omega)\propto 2\cdot|\chi^{(2)}_{ED}+i\chi^{(2)}_{VP}|^{2}|E_{0}|^{4} \tag{7}\] is still proportional to the square of the fundamental intensity, but with a pre-factor of 2 and with \(|\chi^{(2)}_{ED}|^{2}\) replaced by \(|\chi^{(2)}_{ED}+i\chi^{(2)}_{VP}|^{2}\). While the pre-factor 2 is not related to the VP and was observed previously in Ref. [6], the VP elements of the nonlinear susceptibility allow us to directly probe the broken TR symmetry. Taking the ratio of SH intensities with circular to linear excitation we obtain: \[\eta:=\frac{I_{circ}(2\omega)}{I_{lin}(2\omega)}=2\cdot\frac{|\chi^{(2)}_{ED}+i\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}} \tag{8}\] where we can observe that the ratio is indeed 2 in the absence of a VP (\(\chi^{(2)}_{VP}=0\)) and otherwise will differ from 2. 
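The algebra leading from Eq. (2) to Eq. (5) can be checked numerically. The short sketch below is not part of the original analysis: the susceptibility values are arbitrary illustrative numbers, \(\epsilon_{0}\) is set to 1 (it cancels in the comparison), and the middle identity of Eq. (5), with \(\tau\) still explicit, is verified for random fields.

```python
import numpy as np

# Sanity check of the basis change, Eqs. (2)-(5); arbitrary (non-physical)
# susceptibility values, eps0 set to 1 since it cancels in the comparison.
chi_ed = 1.0 + 0.3j          # electric-dipole (intrinsic) element
chi_vp = 0.2 - 0.1j          # valley-polarization element
tau = 1                      # valley index entering Eq. (2)
rng = np.random.default_rng(0)

for _ in range(5):
    Ex, Ey = rng.normal(size=2) + 1j * rng.normal(size=2)
    # Cartesian components of the nonlinear polarization, Eq. (2).
    Px = chi_ed * (Ex**2 - Ey**2) - 2 * tau * chi_vp * Ex * Ey
    Py = -2 * chi_ed * Ex * Ey - tau * chi_vp * (Ex**2 - Ey**2)
    # Circular amplitudes of polarization and field, Eqs. (3)-(4).
    Pp, Pm = (Px + 1j * Py) / np.sqrt(2), (Px - 1j * Py) / np.sqrt(2)
    Ep, Em = (Ex + 1j * Ey) / np.sqrt(2), (Ex - 1j * Ey) / np.sqrt(2)
    # Eq. (5) with tau explicit: P+ = sqrt(2)(chi_ED - i tau chi_VP) E-^2, etc.
    assert np.allclose(Pp, np.sqrt(2) * (chi_ed - 1j * tau * chi_vp) * Em**2)
    assert np.allclose(Pm, np.sqrt(2) * (chi_ed + 1j * tau * chi_vp) * Ep**2)
print("Eq. (5) reproduced for random fields")
```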
Carefully resolving the fraction of the absolute squares and considering the complex nature of the elements of the nonlinear susceptibilities, we end up with: \[\eta=2\cdot\Bigg{[}1+\frac{|\chi^{(2)}_{VP}|^{2}}{|\chi^{(2)}_{ED}|^{2}}+\frac{1}{|\chi^{(2)}_{ED}|^{2}}\Big{\{}\Re\big{(}\chi^{(2)}_{VP}\big{)}\cdot\Im\big{(}\chi^{(2)}_{ED}\big{)}-\Im\big{(}\chi^{(2)}_{VP}\big{)}\cdot\Re\big{(}\chi^{(2)}_{ED}\big{)}\Big{\}}\Bigg{]} \tag{9}\] where the symbols \(\Re\) and \(\Im\) represent real and imaginary parts, respectively. Finally, we can rewrite the last term by representing the susceptibilities as \(\chi^{(2)}_{ED/VP}=|\chi^{(2)}_{ED/VP}|\cdot e^{i\varphi_{ED/VP}}\), _i.e._, as a product of their amplitude \(|\chi^{(2)}_{ED/VP}|\) and phase \(\varphi_{ED/VP}\): \[\frac{1}{|\chi^{(2)}_{ED}|^{2}}\Big{\{}\Re\big{(}\chi^{(2)}_{VP}\big{)}\cdot\Im\big{(}\chi^{(2)}_{ED}\big{)}-\Im\big{(}\chi^{(2)}_{VP}\big{)}\cdot\Re\big{(}\chi^{(2)}_{ED}\big{)}\Big{\}}=\frac{|\chi^{(2)}_{VP}|}{|\chi^{(2)}_{ED}|}\cdot\sin\Delta\varphi \tag{10}\] where \(\Delta\varphi=\varphi_{ED}-\varphi_{VP}\). ## S4 Removal of TP-PL from SHG When scanning the SH emission wavelength across the exciton \(A:1s\) resonance, the signal may overlap with two-photon photoluminescence (TP-PL, see Fig. S3a). Thus, in our resonant SHG measurements the TP-PL must be properly subtracted. Since the valley coherence for two-photon excitation at room temperature in monolayer WSe\({}_{2}\) is expected to be 0 (see _e.g._ Ref. [5]), TP-PL is unpolarized and does not depend on the polarization of the excitation laser. Therefore we can assume that the emitted TP-PL is equal in the cases of linear and circular excitation. The SH emitted for linear excitation is linearly polarized (see Fig. S3b), providing a pathway to suppress the SH by means of a linear polarizer oriented perpendicular to it and to measure the residual TP-PL. Hence, after recording the total SH for circular and linear excitation, we set the excitation to linear and insert a linear polarizer in the correct orientation to suppress the SH and record the TP-PL. We follow this procedure for all excitation wavelengths and powers. Finally, in the data analysis we subtract the TP-PL from the SH. ## S5 Calculation of Second Order Nonlinear Susceptibility To calculate the intrinsic second order nonlinear optical susceptibility we use the following formula from Ref. [7] for SHG in transmission geometry: \[|\chi_{S}^{(2)}|=\sqrt{\frac{P_{SH}}{P_{FF}^{2}}\cdot\frac{c\epsilon_{0}fr^{2}t_{FWHM}\lambda^{2}}{64\sqrt{2}\pi S}\cdot\frac{(1+n)^{6}}{n^{3}}} \tag{11}\] with the powers \(P_{FF}\) and \(P_{SH}\) of the fundamental and emitted SH, respectively, the speed of light \(c\) and the vacuum electric permittivity \(\epsilon_{0}\). \(f=76\,\)MHz is the repetition rate of the pump laser, \(r\approx 1.85\,\mu\)m is the radius of the focal spot, \(t_{FWHM}\approx 200\,\)fs is the FWHM of the pulse and \(\lambda\) (1380 nm - 1650 nm) is the wavelength of the FB. \(S=0.94\) is the shape factor of a Gaussian pulse and \(n\approx 1.44\) is the refractive index of the fused silica substrate in the wavelength range of the FB. In order to calculate the VP susceptibility, we analyze equations (8) and (9). As mentioned in the main text, we assume perfect constructive and destructive interference between the ED and VP contributions for SH wavelengths of 750 nm and 730 nm, respectively. 
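For orientation, Eq. (11) can be evaluated directly. The snippet below uses the parameter values quoted above; the fundamental and SH powers are hypothetical placeholders, since the measured values are not listed in this section.

```python
import numpy as np

# Sheet susceptibility |chi_S^(2)| from Eq. (11); experimental parameters as
# quoted in the text. P_FF and P_SH are placeholder (hypothetical) values.
c    = 2.998e8        # speed of light (m/s)
eps0 = 8.854e-12      # vacuum permittivity (F/m)
f    = 76e6           # repetition rate (Hz)
r    = 1.85e-6        # focal-spot radius (m)
t    = 200e-15        # pulse FWHM (s)
lam  = 1500e-9        # FB wavelength (m), within the 1380-1650 nm range
S    = 0.94           # Gaussian shape factor
n    = 1.44           # substrate refractive index

P_FF = 1e-3           # hypothetical fundamental power (W)
P_SH = 1e-12          # hypothetical SH power (W)

chi2 = np.sqrt(P_SH / P_FF**2
               * c * eps0 * f * r**2 * t * lam**2 / (64 * np.sqrt(2) * np.pi * S)
               * (1 + n)**6 / n**3)
print(f"|chi_S^(2)| ~ {chi2:.2e} m^2/V")
```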
The ratio is then given by: \[\eta=2\cdot\left[1+\frac{|\chi_{VP}^{(2)}|^{2}}{|\chi_{ED}^{(2)}|^{2}}\pm\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}\right] \tag{12}\] where the \(\pm\) corresponds to constructive/destructive interference. Solving this equation we retrieve the ratio of the absolute values of the susceptibilities as: \[\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}=\mp\frac{1}{2}\pm\sqrt{\frac{2\cdot\eta-3}{4}} \tag{13}\] with the \(\mp\) coming from the constructive/destructive interference, while the \(\pm\) appears since equation (12) is quadratic. Two points can be noted: (1) \(\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}\) has to be positive; (2) in general there are two solutions. For a SH wavelength of 750 nm (constructive interference), with \(\eta=2.43\) we obtain \(\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}=0.18\) and \(\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}=-1.18\). However, the second solution is unphysical, as it is negative. For a SH wavelength of 730 nm (destructive interference), with \(\eta=1.59\) we obtain two possible solutions: \(\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}=0.71\) and \(\frac{|\chi_{VP}^{(2)}|}{|\chi_{ED}^{(2)}|}=0.29\). In the main text we present only the latter as the lower limit. ## S6 Definition of error bars The raw data for each measurement are taken as follows: for each data point we average three times, and define the mean as the measured value and the standard deviation as the error. This way we account for the optical noise, _i.e._, fluctuations in either the fundamental or the SH. Note that we do not plot the error bars corresponding to this standard deviation in Fig. 2a of the main text, because the error bars are smaller than the symbols. However, as we detect the signal electrically, we also need to account for the electrical noise of the lock-in setup, including APD, optical chopper and lock-in detector. We retrieve the electrical noise _via_ a dark measurement, where we block the detector and record the signal. The average of this dark signal is then the electrical noise. Every SH data point where the signal is smaller than the electrical noise is disregarded, as it cannot be differentiated from the electrical noise. For the wavelength dependence of the ratio of circular to linear SHG (see Fig. 2b in the main text), we fit the ellipticity-dependent SHG data (including the error bars given by the standard deviation) with the following function: \[I(\epsilon)=A\sin^{2}(\pi\epsilon)+I_{lin} \tag{14}\] _i.e._, the square of a sinusoidal function with the amplitude \(A\) plus the offset \(I_{lin}\), which corresponds to the intensity for linear excitation. The intensity for circular excitation is then given by the sum of the amplitude and the offset (\(I_{circ}=A+I_{lin}\)). By fitting the data we retrieve the parameters (\(A\), \(I_{lin}\)) as well as their standard errors (\(\Delta A\), \(\Delta I_{lin}\)). Although this directly gives us the error of the SHG intensity for linear excitation (\(\Delta I_{lin}\)), for the ratio we need to use the formula for error propagation: \[\Delta\eta\approx\frac{\partial\eta}{\partial A}\cdot\Delta A+\frac{\partial\eta}{\partial I_{lin}}\cdot\Delta I_{lin}=\frac{1}{I_{lin}}\cdot\Delta A+\frac{A}{I_{lin}^{2}}\cdot\Delta I_{lin} \tag{15}\] to calculate the error bars in Fig. 2b of the main text. 
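The two roots quoted above follow from Eq. (13) by simple arithmetic; a minimal check, reproducing the numbers exactly as given in the text:

```python
import numpy as np

# Solve Eq. (12), eta = 2*(1 + x^2 +/- x) with x = |chi_VP|/|chi_ED|,
# for the two measured ratios quoted in the text; cf. Eq. (13).
def roots(eta, constructive):
    s = 1.0 if constructive else -1.0      # sign of the linear term in Eq. (12)
    disc = np.sqrt((2 * eta - 3) / 4)
    return -s / 2 + disc, -s / 2 - disc

print(roots(2.43, constructive=True))    # ~ (0.18, -1.18): keep 0.18
print(roots(1.59, constructive=False))   # ~ (0.71, 0.29): text keeps 0.29
```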
We measure the SH power dependencies four times, then take the average \[SH=\frac{1}{4}\sum_{i=1}^{4}SH_{i} \tag{16}\] and propagate the error according to: \[\Delta SH\approx\frac{1}{4}\sum_{i=1}^{4}\Delta SH_{i} \tag{17}\] derived from the formula for first-order error propagation (see equation (15)). Especially for the power-dependent measurements, taking the electrical noise into account is of great significance, as for certain wavelengths the signal can be very low, so that a big portion of the data lies within the electrical noise and needs to be disregarded. After averaging, propagating the error and disregarding the data smaller than the electrical noise, we plot the power dependencies in Fig. 3a of the main text. Note that the error bars are smaller than the symbols and therefore not plotted. We fit the power dependencies (including errors) with a linear function, and obtain the slopes together with their standard errors, which we set as the error bars; see Fig. 3b of the main text.
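As an illustration of Eqs. (15)-(17), the sketch below propagates errors on purely hypothetical numbers; it is not the measured data.

```python
import numpy as np

# Linear (first-order) error propagation, Eqs. (15)-(17), on made-up numbers.
A,  dA  = 1.20, 0.05     # fit amplitude and its standard error (illustrative)
I0, dI0 = 1.00, 0.03     # I_lin and its standard error (illustrative)

eta  = (A + I0) / I0                     # eta = I_circ / I_lin = (A + I_lin)/I_lin
deta = dA / I0 + A / I0**2 * dI0         # Eq. (15)
print(f"eta = {eta:.2f} +/- {deta:.2f}")

# Averaging four SH readings, Eq. (16), with linear propagation, Eq. (17).
sh  = np.array([0.98, 1.02, 1.01, 0.99])
dsh = np.array([0.04, 0.05, 0.04, 0.05])
print(f"SH = {sh.mean():.3f} +/- {dsh.sum() / 4:.3f}")
```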
2302.01224
Propositional Logics for the Lawvere Quantale
Lawvere showed that generalised metric spaces are categories enriched over $[0, \infty]$, the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three $[0,\infty]$-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant $1$ to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Quantitative equational logic can be interpreted in the third logic if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even (as is known) for theories over finitely-many propositional variables; indeed even an approximate form of strong completeness in the sense of Pavelka or Ben Yaacov -- provability up to arbitrary precision -- does not hold. However, we can show it for theories axiomatized by a (not necessarily finite) set of judgements in normal form over a finite set of propositional variables when we restrict to models that do not map variables to $\infty$; the proof uses Hurwicz's general form of the Farkas' Lemma.
Giorgio Bacci, Radu Mardare, Prakash Panangaden, Gordon Plotkin
2023-02-02T17:04:28Z
http://arxiv.org/abs/2302.01224v4
# Propositional Logics for the Lawvere Quantale ###### Abstract Lawvere showed that generalised metric spaces are categories enriched over \([0,\infty]\), the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three (closely related) many-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction); to these we add, in turn, the constant \(1\) to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Propositional Boolean logic can already be interpreted in the first of these logics; Lukasiewicz logic can be interpreted in the second; Ben Yaacov's continuous propositional logic can be interpreted in the third; and quantitative equational logic can be interpreted in the third if we allow inference systems instead of axiomatic systems. For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities. Strong completeness does not hold in general, even for theories over finitely-many propositional variables; indeed even an approximate form of strong completeness in the sense of Ben Yaacov--provability up to arbitrary precision--does not hold. However, we can show it for such theories having only models never mapping variables to \(\infty\); the proof uses Hurwicz's general form of the Farkas lemma. ## I Introduction Real-valued logics have been experiencing a recent resurgence of interest because of probabilistic and metric reasoning and applications such as neurosymbolic reasoning in machine learning [1, 2, 3]. They are generally fuzzy logics [4] often interpreted over \([0,1]\)--for example Lukasiewicz logic ([5, 6, 7]). In [8], Lawvere showed that generalised metric spaces are categories enriched over \([0,\infty]\), the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. One can view Lukasiewicz logic as the propositional logic of a quantale over \([0,1]\), see [9]. Also there are interpretations of linear logic in quantales [10]. In this paper we propose studying logic over Lawvere's quantale, and we begin such a study with propositional logic. An argument that points towards this quantale comes also from the literature on quantitative algebras [11], where the development of quantitative equational logic is based on a family of binary predicates "\(=_{\varepsilon}\)" for every \(\varepsilon\geq 0\). These are used in quantitative equations, such as \(s=_{\varepsilon}t\), to encode the fact that the distance between the interpretation of the terms \(s\) and \(t\) in a metric space is at most \(\varepsilon\). Once this quantitative information is embedded in the predicate "\(=_{\varepsilon}\)", quantitative equations become Boolean statements, i.e., _true_ or _false_ in a model. 
An alternative way to tackle this issue is to consider only one "\(=\)" predicate that is valued in the Lawvere quantale. This requires exchanging the classical "Boolean core" of quantitative equational logic with a many-valued logic interpreted over Lawvere's quantale. In this paper we investigate basic concepts and proof systems for such propositional logics.

We consider three closely related logics, built up in stages. The basic logical connectives, shared by all three logics, are those that can be interpreted in any quantale, viz finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale) and linear implication (here a truncated subtraction). To these we add, in turn, the constant \(1\) to express integer values, and scalar multiplication by a non-negative real to express general non-negative affine combinations. Propositional Boolean logic can already be interpreted in the first of these logics; Lukasiewicz logic can be interpreted in the second; Ben Yaacov's continuous propositional logic can be interpreted in the third; and quantitative equational logic can be interpreted in the third once we extend provability from theories defined by axiomatic systems to theories closed under systems of inferences.

For each of these logics we develop a natural deduction system, and prove it complete relative to interpretations in the Lawvere quantale. The proof of completeness uses a normalisation technique which replaces a sequent \(\varphi\vdash\psi\) that is not in normal form with a finite set of sequents in normal form. The main normal form is a sequent where the formulas \(\varphi\), \(\psi\) are tensors \(r_{1}*p_{1}\otimes\ldots\otimes r_{n}*p_{n}\otimes r\) of propositional variables multiplied by positive reals, and a scalar. Semantically, these are exactly affine linear combinations of the variables. Leaving aside the details, this reduces the problem of proving that a given sequent follows from a given finite set of sequents to the problem of proving that a given affine inequality is a consequence of a given finite set of affine inequalities. This is exactly the province of (variants of) the Farkas Lemma [12] and the Motzkin transposition theorem [13]. These show that when such a consequence holds, the given affine inequality is a linear combination of the given set of inequalities (an integer variant of Motzkin [14] is used when the reals are integers). It has long been noted in the literature [15] that the Farkas Lemma, and related results, can be thought of as completeness theorems; here they are literally seen as such, and used to prove general completeness for our propositional logics.

As the reduction to sets of normal forms and the Farkas Lemma (and related results) are effective, it follows that satisfiability is decidable. We can also decide consistency via the reduction to normal form, now making use of Fourier-Motzkin elimination [16, 17]. We conjecture that consequence is co-NP complete and that consistency is NP-complete.

Strong completeness does not hold in general, even for theories over finitely-many propositional variables; indeed even an approximate form of strong completeness in the sense of Ben Yaacov [6, 7]--provability up to arbitrary precision--does not hold. However, we can show it does hold for such theories having only models never mapping variables to \(\infty\); the proof uses Hurwicz's general form [18] of the Farkas lemma. 
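To make the connection to the Farkas Lemma concrete: once judgements are reduced to affine inequalities, semantic consequence over finite models becomes a linear-programming question. The sketch below is our illustration, not the paper's algorithm; it ignores the value \(\infty\) and unsatisfiable premise sets, and checks whether \(c\cdot x\leq d\) follows from \(Ax\leq b\) over \(x\geq 0\) by maximising \(c\cdot x\).

```python
import numpy as np
from scipy.optimize import linprog

# Decide whether c.x <= d follows from A x <= b over x >= 0 (finite models
# only): it fails exactly when some feasible x attains c.x > d. linprog
# minimizes, so we maximize c.x by minimizing -c.x. Infeasible premises
# (from which everything follows) are not handled in this toy sketch.
def is_consequence(A, b, c, d):
    res = linprog(-np.asarray(c, float), A_ub=A, b_ub=b, bounds=(0, None))
    if res.status == 3:          # unbounded: c.x can be made arbitrarily large
        return False
    return bool(res.success and -res.fun <= d + 1e-9)

# Example: from x1 + x2 <= 3 and x1 - x2 <= 1 infer 2*x1 <= 4.
A = [[1, 1], [1, -1]]
b = [3, 1]
print(is_consequence(A, b, [2, 0], 4))   # True: 2x1 = (x1+x2)+(x1-x2) <= 4
print(is_consequence(A, b, [0, 2], 4))   # False: x2 can reach 3, so 2x2 = 6 > 4
```

When the answer is positive, LP duality yields the nonnegative multipliers rewriting the target as a combination of the premises, which is exactly the certificate that Farkas-style results provide.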
## II Preliminaries and notation A _quantale_ is a complete lattice with a binary, associative operation \(\otimes\) (the _tensor_), such that for every element \(a\), both \(a\otimes-\) and \(-\otimes a\) have right adjoints (equivalently, \(\otimes\) preserves all joins in each argument). A quantale is called _commutative_ whenever its tensor is; it is called _unital_ if there is an element \(1\), the unit, such that \(1\otimes a=a=a\otimes 1\), for all \(a\); and it is called _integral_ if the unit is the top element. For commutative quantales we denote the right adjoint to \(a\otimes-\) by \(a\multimap-\), which is characterised by \[a\otimes b\leq c\Longleftrightarrow b\leq a\multimap c\,.\] Examples of quantales are (i) the Boolean quantale \(\{0,1\}\), ordered by \(0\leq 1\), with logical conjunction as tensor; (ii) the complete lattice \([0,1]\), ordered by the "greater or equal" relation \(\geq\), with truncated addition as tensor, known as the _Lukasiewicz quantale_; and (iii) the complete lattice \([0,\infty]\), ordered by the "greater or equal" relation \(\geq\), with extended sum as tensor, known as the _Lawvere quantale_ (or metric quantale). Note that all of the above are examples of commutative integral quantales. In this paper we mainly work with the Lawvere quantale, so it is convenient to have an explicit characterisation of its basic operations. Join and meet are \(\inf\) and \(\sup\), respectively; \(\infty\) is the bottom element and \(0\) the top. For \(r,s\in[0,\infty]\), we define truncated subtraction as \[r\mathbin{\dot{-}}s=\begin{cases}0&\text{if }r\leq s\\ r-s&\text{if }r>s\text{ and }r\neq\infty\\ \infty&\text{if }r=\infty\text{ and }s\neq\infty\,.\end{cases}\] Then, the right adjoint \(s\multimap r\) is just \(r\mathbin{\dot{-}}s\) (note that the order of the terms is inverted). ## III Logics for the Lawvere Quantale In this section, we present three propositional logics interpreted over the Lawvere quantale, which we will collectively refer to as _logics for the Lawvere quantale_ (LLQ). ### _Syntax of logical formulas_ Formulas are freely generated from a set \(\mathbb{P}=\{p_{1},p_{2},\dots\}\) of atomic propositions over logical connectives that can be interpreted in the Lawvere quantale: \[\begin{array}{ll}\bot\mid\top\mid\phi\wedge\psi\mid\phi\vee\psi\mid\phi\otimes\psi\mid\phi\multimap\psi&\text{(quantale connectives)}\\ \mathbb{1}&\text{(constant)}\\ r\ast\phi\quad(\text{for }r\in[0,\infty))&\text{(scalar multiplication)}\end{array}\] The first logic, \(\mathbb{L}\), uses only the basic logical connectives that can be interpreted in any commutative quantale, viz, the constants bottom (\(\bot\)) and top (\(\top\)), binary conjunction (\(\wedge\)) and disjunction (\(\vee\)), tensor (\(\otimes\)), and linear implication (\(\multimap\)). The second logic, \(\mathbb{L}_{1}\), additionally allows the use of the constant \(\mathbb{1}\). The third logic, \(\mathbb{L}_{1}^{*}\), further extends the syntax with scalar multiplication by a positive real (\(r\ast-\)). 
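Concretely, the Lawvere quantale operations can be coded up and the adjunction checked by brute force. A minimal sketch (ours, not the authors'), keeping in mind that the lattice order is the reversed numerical order:

```python
import itertools, math

INF = math.inf

# Lawvere-quantale operations on [0, inf]: the lattice order is the reversed
# numerical order, tensor is extended addition, and the linear implication
# a -o c is truncated subtraction c - a (dotted minus).
def leq(a, b):          # a <= b in the quantale  iff  a >= b as reals
    return a >= b

def tensor(a, b):
    return a + b        # math.inf propagates as expected

def implies(a, c):      # right adjoint to (a tensor -)
    if c <= a:          # includes inf <= inf: inf dotminus inf = 0
        return 0.0
    return c - a        # here c > a, so a is finite; gives inf when c = inf

# Check the adjunction  a (x) b <= c  iff  b <= a -o c  on a small grid.
vals = [0.0, 0.5, 1.0, 2.0, INF]
for a, b, c in itertools.product(vals, repeat=3):
    assert leq(tensor(a, b), c) == leq(b, implies(a, c)), (a, b, c)
print("adjunction holds on the sample grid")
```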
It will be useful to define, in all LLQ, the following derived connectives: \[\begin{array}{ll}\neg\phi:=\phi\multimap\bot\,,&\text{(Negation)}\\ \phi\multimap\multimap\psi:=(\phi\multimap\psi)\wedge(\psi\multimap\phi)\,.&\text{(Double implication)}\end{array}\] Moreover, for any \(n\in\mathbb{N}\), the derived connective \(n\phi\) is inductively defined as follows \[0\phi:=\top\qquad\text{ and }\qquad(n+1)\phi:=\phi\otimes n\phi\,.\] Similarly, for any \(r\in[0,\infty)\), we write simply \(r\) to denote the formula \(r\ast\mathbb{1}\). **Notation 1**.: _To simplify the presentation, we assume an operator precedence rule so that \(\ast\) binds strongest, followed by \(\otimes\); next are \(\wedge\) and \(\vee\), and the weakest are \(\multimap\), \(\multimap\multimap\) and \(\neg\). Thus, the formula \(r\ast\phi\otimes\psi\wedge s\ast\psi\multimap\theta\) is interpreted as \((((r\ast\phi)\otimes\psi)\wedge(s\ast\psi))\multimap\theta\)._ ### _Semantics of logical formulas_ The models of LLQ are maps \(m\colon\mathbb{P}\to[0,\infty]\) interpreting the propositional symbols in the Lawvere quantale, which can be extended uniquely to formulas by setting \[\begin{array}{ll}m(\bot):=\infty\,,&m(\phi\wedge\psi):=\max\{m(\phi),m(\psi)\}\,,\\ m(\top):=0\,,&m(\phi\vee\psi):=\min\{m(\phi),m(\psi)\}\,,\\ m(\mathbb{1}):=1\,,&m(\phi\otimes\psi):=m(\phi)+m(\psi)\,,\\ m(r\ast\phi):=r\,m(\phi)\,,&m(\phi\multimap\psi):=m(\psi)\mathbin{\dot{-}}m(\phi)\,,\end{array}\] with the derived connectives \(\neg\) and \(\multimap\multimap\) interpreted as \[m(\neg\phi)=\infty\mathbin{\dot{-}}m(\phi)\,,\qquad m(\phi\multimap\multimap\psi)=|m(\psi)-m(\phi)|\,.\] ## IV Natural Deduction Systems We present natural deduction systems for the three logics \(\mathbb{L}\subseteq\mathbb{L}_{1}\subseteq\mathbb{L}_{1}^{*}\). As each logic is intended to be a conservative extension of its sub-logics, we present their deduction systems incrementally. Let \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\). A _judgement_ in \(\mathcal{L}\) is a syntactic construct of the form \[\phi_{1},\ldots,\phi_{n}\vdash\psi\,,\] (Judgement) where \(\phi_{i}\) and \(\psi\) are logical formulas of \(\mathcal{L}\), respectively called _antecedents_ and _consequent_ of the judgement. Note that the antecedent \(\Gamma=(\phi_{1},\ldots,\phi_{n})\) of a judgement is a finite ordered list, possibly with repetitions. As customary, for \(\Gamma\) and \(\Delta\) lists of formulas, their comma-separated juxtaposition \(\Gamma,\Delta\) denotes concatenation; and \(\vdash\phi\) is the notation for a judgement with an empty list of antecedents. A judgement \(\gamma=(\Gamma\vdash\psi)\) in \(\mathcal{L}\) _is satisfied by_ a model \(m\), in symbols \(m\models_{\mathcal{L}}\gamma\), whenever \[\sum_{\phi\in\Gamma}m(\phi)\geq m(\psi)\,.\qquad\text{(Semantics of judgements)}\] When the logic \(\mathcal{L}\) is clear from the context, or the satisfiability holds in all LLQ, we simply write \(m\models\gamma\). Observe that, in our context, \(\models\) is not the semantic counterpart of \(\vdash\), as it often happens in the literature! 
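The semantic clauses translate directly into an evaluator. The sketch below is an illustration: the tuple encoding of formulas is our own, and for \(r\ast\phi\) we adopt the convention \(0\cdot\infty=0\), consistent with \(0\phi:=\top\). It also checks satisfaction of judgements as defined above.

```python
import math
INF = math.inf

# A minimal evaluator for LLQ formulas over a model m: P -> [0, inf].
# Formulas are nested tuples; requires Python 3.10+ for match/case.
def ev(phi, m):
    match phi:
        case ('bot',):       return INF
        case ('top',):       return 0.0
        case ('one',):       return 1.0
        case ('var', p):     return m[p]
        case ('and', a, b):  return max(ev(a, m), ev(b, m))
        case ('or',  a, b):  return min(ev(a, m), ev(b, m))
        case ('ten', a, b):  return ev(a, m) + ev(b, m)
        case ('imp', a, b):  # a -o b: truncated subtraction m(b) dotminus m(a)
            x, y = ev(a, m), ev(b, m)
            return 0.0 if y <= x else y - x
        case ('mul', r, a):  # r * a, with the convention 0 * inf = 0
            x = ev(a, m)
            return 0.0 if r == 0 and x == INF else r * x

# A judgement Gamma |- psi is satisfied when the sum of the antecedents'
# values is >= the consequent's value.
def sat(gamma, psi, m):
    return sum(ev(g, m) for g in gamma) >= ev(psi, m)

m = {'p': 0.5, 'q': INF}
print(ev(('imp', ('var', 'q'), ('var', 'p')), m))          # q -o p: 0.5 - inf = 0
print(sat([('var', 'p')], ('mul', 0.5, ('var', 'p')), m))  # p |- 0.5*p: True
```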
A judgement is _satisfiable_ if it is satisfied by a model; _unsatisfiable_ if it is not satisfiable; and a _tautology_ if it is satisfied by all models. Note that, for any model \(m\) in LLQ, \[\begin{array}{ll}m\models(\vdash\phi)&\text{iff}\ \ m(\phi)=0\\ m\models(\vdash\neg\phi)&\text{iff}\ \ m(\phi)=\infty\ \ (\textit{i.e., }\phi\text{ is infinite})\\ m\models(\vdash\neg\neg\phi)&\text{iff}\ \ m(\phi)<\infty\ \ (\textit{i.e., }\phi\text{ is finite})\\ m\models(\phi\vdash\psi)&\text{iff}\ \ m(\phi)\geq m(\psi)\,.\end{array}\] In particular, \(\vdash\phi\multimap\phi\), \(\vdash\top\), and \(\vdash\neg\bot\) are examples of tautologies, while \(\vdash\phi\multimap\multimap\neg\neg\phi\) is not. Moreover, by using negation we can express whether the interpretation of a formula is either finite or infinite. An _inference (rule)_ is a syntactic construct of the form \[\frac{S}{\gamma}\] for \(S\) a set of judgements and \(\gamma\) a judgement. The judgements in \(S\) are the _hypotheses of the inference_ and \(\gamma\) is the _conclusion of the inference_. When \(S=\{\gamma^{\prime}\}\) is a singleton, we sometimes draw the inference line doubled, \[\frac{\gamma^{\prime}}{\gamma}\ \text{(double line)}\quad\text{ to denote both }\quad\frac{\gamma^{\prime}}{\gamma}\quad\text{and}\quad\frac{\gamma}{\gamma^{\prime}}\,.\] A judgement \(\gamma\) is a _semantic consequence_ of a set \(S\) of judgements, in symbols \(S\models\gamma\), if every model that satisfies all the judgements in \(S\) also satisfies \(\gamma\). Thus, \(\emptyset\models\gamma\) (or more simply \(\models\gamma\)) means that \(\gamma\) is a tautology. For a model \(m\), we will also use the notation \(S\models_{m}\gamma\) to mean that, whenever \(m\) satisfies all the judgements of \(S\), it also satisfies \(\gamma\). The natural deduction system of \(\mathbb{L}\) consists of the inference rules in Tables I, II, and III. Table I contains the basic rules of logical deduction (id) and (cut), and the structural rules of weakening (weak) and permutation (perm). Note that there is no cancellation rule. Table II provides the rules for the lattice operations of the Lawvere quantale1. Table III collects the rules that are specific to the Lawvere quantale. (wem) is the weak excluded middle; (tot) states that the quantale is totally ordered; the other rules explain the actions of \(\otimes\) and its adjoint in the Lawvere quantale. (\(\otimes_{1}\)) says that \(\otimes\) behaves as an additive conjunction; (\(\otimes_{2}\)) is the adjunction rule for \(\otimes\) and \(\multimap\); (\(\otimes_{3}\)) is a simplification rule for \(\otimes\); (\(\multimap_{1}\)), (\(\multimap_{2}\)), and (\(\multimap_{3}\)) complement the adjunction rule by expressing the interactions between the connectives \(\otimes\) and \(\multimap\) on opposite sides of the turnstile \(\vdash\). Note that (\(\multimap_{2}\)), (\(\multimap_{3}\)) are conditional on the finiteness of specific formulas. Footnote 1: Recall that the order on \([0,\infty]\) is reversed! The natural deduction system of \(\mathbb{L}_{1}\) includes all the rules in Tables I, II, and III, and, in addition, \[\frac{\vdash\mathbb{1}\vee\neg\mathbb{1}}{\vdash\bot}\quad\text{(one)}\] expressing that \(0\geq 1\) and \(1\geq\infty\) are inconsistencies. Thus, with \(\mathbb{L}_{1}\) we exit the universe of classical logic, as \(\mathbb{1}\) is provably equivalent neither to \(\top\) nor to \(\bot\). The natural deduction system of \(\mathbb{L}_{1}^{*}\) extends the deduction system of \(\mathbb{L}_{1}\) with the rules for scalar multiplication in Table IV. 
In (\(S_{4}\)), \(\bowtie\) can be either of \(\wedge,\vee,\otimes,\multimap\), meaning that we have one version of (\(S_{4}\)) for each of these operators. **Definition 2** (Provability).: _Let \(S\) be a set of judgements in \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\). We say that a judgement \(\gamma\) is provable (or deducible) from \(S\) in \(\mathcal{L}\) (in symbols \(S\Vdash_{\mathcal{L}}\gamma\)) if there exists a sequence \(\gamma_{1},\ldots,\gamma_{n}\) of judgements ending in \(\gamma\) whose members are either axioms of \(\mathcal{L}\), or members of \(S\), or follow from some preceding members of the sequence by using the inference rules of \(\mathcal{L}\). A sequence \(\gamma_{1},\ldots,\gamma_{n}\) as above is called a_ proof_. A judgement \(\gamma\) is a_ theorem _of \(\mathcal{L}\) if it is provable in \(\mathcal{L}\) from the empty set (in symbols \(\emptyset\Vdash_{\mathcal{L}}\gamma\), or simply \(\Vdash_{\mathcal{L}}\gamma\))._ **Theorem 3** (Soundness of LLQ).: _Let \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\). If a judgement \(\gamma\) is provable from \(S\) in \(\mathcal{L}\), then \(\gamma\) is a semantic consequence of \(S\) in \(\mathcal{L}\) (in symbols, \(S\Vdash_{\mathcal{L}}\gamma\) implies \(S\models_{\mathcal{L}}\gamma\))._ ### _Theorems in LLQ_ In this subsection, we state and prove a series of theorems of LLQ as well as some useful derived rules. For \(\gamma_{i}\), \(\gamma\) judgements, a _derived rule_ \[\frac{\gamma_{1}\quad\cdots\quad\gamma_{n}}{\gamma}\] denotes that \(\gamma\) is provable from \(\{\gamma_{1},\ldots,\gamma_{n}\}\) in LLQ. **Lemma 4**.: _The following are provable in all LLQ:_ 1. \(\frac{\Gamma,\phi\wedge\psi\vdash\theta\quad\phi\vdash\psi}{\Gamma,\phi\vdash\theta}\) 2. \(\frac{\Gamma\vdash\psi}{\Gamma\vdash\phi\vee\psi}\) 3. \(\frac{\Gamma\vdash\phi\quad\Delta\vdash\psi}{\Gamma,\Delta\vdash\phi\wedge\psi}\) 4. \(\frac{\Gamma,\top\vdash\phi}{\Gamma\vdash\phi}\) 5. \(\phi\vdash\phi\vee\psi\) 6. \(\phi\wedge\psi\vdash\phi\) 7. \(\vdash\phi\wedge\psi\multimap\multimap\psi\wedge\phi\) 8. \(\vdash\phi\vee\psi\multimap\multimap\psi\vee\phi\) 9. \(\vdash(\phi\wedge\psi)\vee\phi\multimap\multimap\phi\) 10. \(\vdash(\phi\vee\psi)\wedge\phi\multimap\multimap\phi\) 11. \(\vdash\phi\wedge\top\multimap\multimap\phi\) 12. \(\vdash\phi\vee\top\multimap\multimap\top\) 13. \(\vdash\phi\wedge\bot\multimap\multimap\bot\) 14. \(\vdash\phi\vee\bot\multimap\multimap\phi\) Proof.: We prove a few cases; all these proofs are routine. 1. From \(\phi\vdash\phi\) and the hypothesis \(\phi\vdash\psi\), rule (\(\wedge_{2}\)) gives \(\phi\vdash\phi\wedge\psi\); a (cut) with the hypothesis \(\Gamma,\phi\wedge\psi\vdash\theta\) then yields \(\Gamma,\phi\vdash\theta\). 3. From \(\Gamma\vdash\phi\) and \(\Delta\vdash\psi\), by (weak) and (perm) we get \(\Gamma,\Delta\vdash\phi\) and \(\Gamma,\Delta\vdash\psi\), and then \(\Gamma,\Delta\vdash\phi\wedge\psi\) by (\(\wedge_{2}\)). 4. By (top) we have \(\vdash\top\); a (cut) with \(\Gamma,\top\vdash\phi\) yields \(\Gamma\vdash\phi\). **Lemma 5**.: _The following are provable in all LLQ:_ 1. \(\vdash\phi\otimes(\psi\otimes\rho)\multimap\multimap(\phi\otimes\psi)\otimes\rho\) 2. \(\frac{\Gamma\vdash\phi\otimes\psi}{\Gamma\vdash\phi}\) 3. \(\frac{\Gamma\vdash\phi\quad\Delta\vdash\psi}{\Gamma,\Delta\vdash\phi\otimes\psi}\) 4. \(\phi\otimes\psi\vdash\phi\) 5. \(\vdash\phi\otimes\psi\multimap\phi\) 6. \(\vdash\phi\otimes\top\multimap\multimap\phi\) 7. \(\vdash\phi\otimes\bot\multimap\multimap\bot\) 8. \(\frac{\phi\vdash\psi}{\vdash\phi\multimap\psi}\) 9. \(\frac{\vdash\phi\multimap\psi}{\phi\vdash\psi}\) 10. 
\(\vdash\phi\multimap\top\) 11. \(\vdash\phi\multimap\phi\) 12. \(\phi\vdash\psi\multimap\phi\) 13. \(\phi\otimes(\phi\multimap\psi)\vdash\psi\) 14. \(\vdash\phi\vee\bot\multimap\phi\) Proof.: We prove a few cases; the other cases are as easy. 1. We show one of the two linear implications. Using case (3) twice, \(\phi,\psi\vdash\phi\otimes\psi\) and then \(\phi,\psi,\rho\vdash(\phi\otimes\psi)\otimes\rho\); two applications of (\(\otimes_{1}\)) give \(\phi,\psi\otimes\rho\vdash(\phi\otimes\psi)\otimes\rho\) and then \(\phi\otimes(\psi\otimes\rho)\vdash(\phi\otimes\psi)\otimes\rho\); finally, (\(\otimes_{2}\)) yields \(\vdash\phi\otimes(\psi\otimes\rho)\multimap(\phi\otimes\psi)\otimes\rho\). The other implication follows similarly, and we conclude by (\(\wedge_{2}\)). 2. From \(\phi\vdash\phi\), by (weak), \(\phi,\psi\vdash\phi\), hence \(\phi\otimes\psi\vdash\phi\) by (\(\otimes_{1}\)); a (cut) with \(\Gamma\vdash\phi\otimes\psi\) gives \(\Gamma\vdash\phi\). 3. Let \(\theta\) be the result of connecting all the elements of \(\Gamma\) by \(\otimes\) (recall that \(\otimes\) is associative). From \(\Gamma\vdash\phi\), repeated applications of (\(\otimes_{1}\)) give \(\theta\vdash\phi\); combining this with \(\Delta\vdash\psi\), using (weak) and (\(\otimes_{2}\)), we obtain \(\theta,\Delta\vdash\phi\otimes\psi\), and repeated applications of (\(\otimes_{1}\)) give \(\Gamma,\Delta\vdash\phi\otimes\psi\). **Lemma 6**.: _The following are provable in \(\mathbb{L}_{1}\) and \(\mathbb{L}_{1}^{*}\)._ 1. \(\frac{\vdash\mathbb{1}}{\top\vdash\bot}\) 2. \(\frac{\mathbb{1}\vdash\bot}{\top\vdash\bot}\) 3. \(\frac{\phi\vdash\phi\otimes\mathbb{1}}{\phi\vdash\bot}\) 4. \(\frac{\mathbb{1}\multimap\phi\vdash\phi}{\phi\vdash\bot}\) **Lemma 7**.: _The following are provable in \(\mathbb{L}_{1}^{*}\):_ 1. \((r+s)\ast\phi\vdash r\ast\phi\) 2. \(\vdash r\ast\top\multimap\multimap\top\) 3. \(\frac{r\ast\mathbb{1}\vdash(r+s)\ast\mathbb{1}}{\top\vdash\bot}\ \ (s>0)\) Proof.: The key observation for proving this lemma is that all these statements are provable in \(\mathbb{L}\) and \(\mathbb{L}_{1}\) for \(r,s\in\mathbb{N}\), interpreting \(r\ast\phi\) as \(r\phi\), without involving the rules regarding the scalar product. **Lemma 8**.: _The following is provable in LLQ._ \[\frac{\Gamma,\phi\multimap\psi\vdash\theta\qquad\Gamma,\psi\multimap\phi\vdash\theta}{\Gamma\vdash\theta}\] Proof.: From the two hypotheses we obtain \(\Gamma,(\phi\multimap\psi)\vee(\psi\multimap\phi)\vdash\theta\); since \(\vdash(\phi\multimap\psi)\vee(\psi\multimap\phi)\) holds by (tot), a (cut) yields \(\Gamma\vdash\theta\). **Lemma 9**.: _In all LLQ,_ \[\text{If}\quad\frac{\vdash\phi}{\vdash\psi}\quad\text{then}\quad\left[\frac{\vdash\phi\wedge\theta}{\vdash\psi\wedge\theta}\quad\text{and}\quad\frac{\vdash\phi\vee\theta}{\vdash\psi\vee\theta}\right].\] Proof.: Induction on the proof rules. We show one case: from \(\vdash\phi\wedge\theta\), by (\(\wedge_{3}^{*}\)) we get \(\vdash\phi\), hence \(\vdash\psi\) by the hypothesis; together with \(\vdash\theta\), obtained by (\(\wedge_{3}\)), rule (\(\wedge_{2}\)) yields \(\vdash\psi\wedge\theta\). Here (\(\wedge_{3}^{*}\)) is the symmetric rule of (\(\wedge_{3}\)), which is derivable. The other case follows similarly by using (\(\vee_{1}\)), (\(\vee_{2}\)) and their symmetric versions. 
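By soundness (Theorem 3), every judgement derived in Lemmas 4-7 must hold in every model, so random sampling of models gives a cheap sanity check. A sketch reusing `INF`, `ev` and `sat`, together with the tuple encoding, from the evaluator snippet above:

```python
import random

# Sanity-check semantic soundness of judgements from Lemmas 5 and 7 by
# sampling random models; reuses ev/sat from the evaluator sketch above.
def rnd():
    return random.choice([0.0, 1.0, INF, random.uniform(0, 10)])

for _ in range(1000):
    m = {'p': rnd(), 'q': rnd(), 'r': rnd()}
    p, q, r = ('var', 'p'), ('var', 'q'), ('var', 'r')
    # Lemma 5.(1): associativity of tensor, both directions.
    assert sat([('ten', p, ('ten', q, r))], ('ten', ('ten', p, q), r), m)
    assert sat([('ten', ('ten', p, q), r)], ('ten', p, ('ten', q, r)), m)
    # Lemma 7.(1): (r+s)*p |- r*p, here with r = 2 and s = 1.
    assert sat([('mul', 3.0, p)], ('mul', 2.0, p), m)
print("sampled models satisfy the derived judgements")
```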
**Lemma 10**.: _In all LLQ,_ \[\frac{\vdash\theta}{\vdash\phi}\quad\text{iff}\quad\frac{\vdash\theta}{\vdash\phi\wedge\theta}\] Proof.: (\(\Leftarrow\)): from \(\vdash\theta\) we get \(\vdash\phi\wedge\theta\), and hence \(\vdash\phi\) by (\(\wedge_{3}\)). (\(\Rightarrow\)): \(\frac{\vdash\theta}{\vdash\phi}\) implies, using Lemma 9, \(\frac{\vdash\theta\wedge\theta}{\vdash\phi\wedge\theta}\), which is equivalent to \(\frac{\vdash\theta}{\vdash\phi\wedge\theta}\). **Lemma 11**.: _In all LLQ,_ \[\left[\frac{\vdash\theta}{\vdash\phi}\quad\text{and}\quad\frac{\vdash\theta}{\vdash\psi}\right]\quad\text{iff}\quad\frac{\vdash\theta}{\vdash\phi\wedge\psi}.\] Proof.: (\(\Leftarrow\)): from \(\vdash\theta\) we get \(\vdash\phi\wedge\psi\), and hence both \(\vdash\phi\) and \(\vdash\psi\) by (\(\wedge_{3}\)) and its symmetric version. (\(\Rightarrow\)): Using Lemma 9, from \(\frac{\vdash\theta}{\vdash\phi}\) we infer \(\frac{\vdash\theta\wedge\psi}{\vdash\phi\wedge\psi}\). Similarly, applying Lemma 10, from \(\frac{\vdash\theta}{\vdash\psi}\) we derive \(\frac{\vdash\theta}{\vdash\theta\wedge\psi}\). Chaining the two rules completes the proof. **Lemma 12**.: _In all LLQ,_ \[\text{If}\quad\left[\frac{\vdash\phi}{\vdash\theta}\quad\text{and}\quad\frac{\vdash\psi}{\vdash\theta}\right]\quad\text{then}\quad\frac{\vdash\phi\vee\psi}{\vdash\theta}.\] Proof.: Applying Lemma 9 to \(\frac{\vdash\phi}{\vdash\theta}\) and to \(\frac{\vdash\psi}{\vdash\theta}\) we obtain \(\frac{\vdash\phi\vee\psi}{\vdash\theta\vee\psi}\) and \(\frac{\vdash\phi\vee\psi}{\vdash\theta\vee\phi}\) respectively. Using them in the context of Lemma 11 we get \[\frac{\vdash\phi\vee\psi}{\vdash(\theta\vee\phi)\wedge(\theta\vee\psi)},\] which implies further \[\frac{\vdash\phi\vee\psi}{\vdash\theta\wedge(\phi\vee\psi)}.\] Now, applying Lemma 11, we get \[\frac{\vdash\phi\vee\psi}{\vdash\theta}\,.\qed\] ### _Totality Lemmas and Decision Trees_ Next, we state the Totality Lemmas for the logics for Lawvere's quantale, which are results we will use abundantly in what follows. In order to state them properly, we define the concept of a decision tree. We call pairs of judgements of the following form \[(\vdash\phi\multimap\psi\,,\ \vdash\psi\multimap\phi)\qquad\text{or}\qquad(\vdash\neg\phi\,,\ \vdash\neg\neg\phi)\] _supplementary judgements_. These judgements, when used in pairs as above, explore different alternatives for the interpretations of LLQ-formulas. The first pair of judgements explores different ordering alternatives; the second pair examines the alternatives between a finite and an infinite interpretation of a formula. Supplementary judgements play a special role in reasoning, as stated in the following lemma. **Lemma 13** (First Totality Lemma).: _The following statements are provable in all LLQ._ 1. _If_ \(\frac{\vdash\phi\multimap\psi}{\vdash\theta}\) _and_ \(\frac{\vdash\psi\multimap\phi}{\vdash\theta}\)_, then_ \(\vdash\theta\) _is provable._ 2. _If_ \(\frac{S\ \ \vdash\phi\multimap\psi}{\vdash\theta}\) _and_ \(\frac{S\ \ \vdash\psi\multimap\phi}{\vdash\theta}\)_, then_ \(\frac{S}{\vdash\theta}\)_._ 3. _If_ \(\frac{\vdash\neg\phi}{\vdash\theta}\) _and_ \(\frac{\vdash\neg\neg\phi}{\vdash\theta}\)_, then_ \(\vdash\theta\) _is provable._ 4. _If_ \(\frac{S\ \ \vdash\neg\phi}{\vdash\theta}\) _and_ \(\frac{S\ \ \vdash\neg\neg\phi}{\vdash\theta}\)_, then_ \(\frac{S}{\vdash\theta}\)_._ 
**Fact 16** (Failure of the deduction theorem).: _There are formulas \(\phi,\psi\) such that \(\vdash\psi\) is provable from \(\{\vdash\phi\}\), while \(\vdash\phi\multimap\psi\) is not provable: otherwise, by soundness, \(\vdash\phi\multimap\psi\) would be a tautology, and it is not; consider the model \(m\) such that \(m(\varphi)=m(\rho)=\frac{1}{2}\) and \(m(\theta)=1\)._ For substructural logics like linear or Lukasiewicz logics, a weaker form of the deduction theorem holds: \(\vdash\psi\) is provable from \(S\cup\{\vdash\phi\}\) in \(\mathbb{L}\) iff \(\vdash n\phi\multimap\psi\) is provable from \(S\) in \(\mathbb{L}\) for some \(n\in\mathbb{N}\). However, this weaker version does not hold in \(\mathbb{L}_{1}\) and \(\mathbb{L}_{1}^{*}\). **Fact 17** (Failure of the weak deduction theorem).: _Consider the formulas \(\phi:=\mathbb{1}\vee\neg\mathbb{1}\) and \(\psi:=\bot\). Then \(\vdash\psi\) is provable from \(\{\vdash\phi\}\) using (one)._ _But \(\vdash n\phi\multimap\psi\) is not provable for any \(n\in\mathbb{N}\), since otherwise, using the soundness, there should exist an \(n\) such that \(\vdash n\phi\multimap\psi\) is a tautology. However, no model satisfies this judgement. Indeed, for any model \(m\),_ \[m(n\phi\multimap\psi)=m(\bot)\mathbin{\dot{-}}n\min\{m(\mathbb{1}),m(\neg\mathbb{1})\}=\infty\mathbin{\dot{-}}n=\infty\,.\] **Fact 18**.: _In LLQ, a rule of type_ \[\frac{\vdash\phi_{1}\ \ldots\ \vdash\phi_{n}}{\vdash\psi}\] _can be internalized neither as_ \[\phi_{1}\wedge\ldots\wedge\phi_{n}\vdash\psi\quad\text{nor as}\quad\phi_{1},\ldots,\phi_{n}\vdash\psi.\] _The first statement is a consequence of Fact 16, and the second derives from Fact 17._ **Remark 19**.: _Note that in all LLQ, any judgement of type \(\phi_{1},\ldots,\phi_{n}\vdash\psi\) is equivalent to \(\phi_{1}\otimes(\ldots\otimes\phi_{n})\vdash\psi\), and any judgement of type \(\phi\vdash\psi\) is equivalent to \(\vdash\phi\multimap\psi\). Consequently, we can work exclusively with judgements of type \(\vdash\theta\)._ _Similarly, any rule with a finite set of hypotheses, of type_ \[\frac{\gamma_{1}\vdash\phi_{1}\ \ldots\ \gamma_{n}\vdash\phi_{n}}{\delta\vdash\theta},\] _is equivalent to_ \[\frac{\vdash(\gamma_{1}\multimap\phi_{1})\wedge\ldots\wedge(\gamma_{n}\multimap\phi_{n})}{\delta\vdash\theta}.\] _Consequently, in such cases, we can safely assume that we work with a rule of type_ \[\frac{\vdash\phi}{\vdash\psi}.\] Now we prove a second, somewhat stronger version of the totality lemma, which plays a key role in the completeness proof. 
Hereafter, if \(S,R,V\) are sets of judgements, we write \[\frac{S}{V}\] to denote the fact that for all the judgements \(\gamma\in V\), we have \[\frac{S}{\gamma}\] **Definition 20** (Decision tree).: _A decision tree in \(\mathcal{L}\), where \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\), is a finite binary tree with the nodes labelled by finite sets of judgements in \(\mathcal{L}\) such that_ * _when a node labelled by_ \(R\) _has only one child, labelled by_ \(V\)_, then the following rules are provable in_ \(\mathcal{L}\)__ \[\frac{R}{V}\quad\text{ and }\quad\frac{V}{R};\] * _when a node labelled by_ \(R\) _has two children, the labels of the children are of type_ \(V_{1}\cup\{\vdash\phi\}\) _and_ \(V_{2}\cup\{\vdash\psi\}\) _respectively, where_ \((\vdash\phi,\vdash\psi)\) _is a pair of supplementary judgements, and the following rules are provable in_ \(\mathcal{L}\)__ \[\frac{V_{1}\ \ \vdash\phi}{R}\quad\frac{V_{2}\ \ \vdash\psi}{R}\quad\frac{R\ \ \vdash\phi}{V_{1}}\quad\frac{R\ \ \vdash\psi}{V_{2}}.\] The _signature_ of a decision tree is the set of labels of its leaves; the _trace_ of a path from the root to a leaf is the set of supplementary judgements guarding the edges along that path. **Lemma 22** (Second Totality Lemma).: _Let \(T\) be a decision tree in \(\mathcal{L}\) with root \(S\) and signature \(\{F_{1},\ldots,F_{k}\}\), and let \(L_{i}\) be the trace of the path from \(S\) to \(F_{i}\). Then:_ 1. _if for every_ \(i\leq k\) _we have_ \(\frac{S\ \ L_{i}}{\vdash\theta}\)_, then_ \(\frac{S}{\vdash\theta}\)_;_ 2. _if for every_ \(i\leq k\) _we have_ \(\frac{F_{i}}{\vdash\theta}\)_, then_ \(\frac{S}{\vdash\theta}\)_._ Proof.: 1. We reason by induction on the depth of the tree. If the tree has depth 1, the leaves \(F_{i}\) are children of the root \(S\), and the trace of the path from \(S\) to \(F_{i}\) is \(\{\vdash\phi_{i}\}\). In this case the implication is one of the cases 2. or 4. of the First Totality Lemma 13. For the inductive step, suppose the statement is true for trees of maximum depth \(n\), and we prove it for trees of depth \(n+1\). Let \(S\) be the root of a tree of depth \(n+1\). If \(S\) has a unique child \(F\), the statement is trivially true since the tree with the root \(F\) has depth \(n\) and the trace of the path from \(S\) to \(F\) is empty. If \(S\) has two children, \(S_{1}\) and \(S_{2}\), these are the roots of two trees of depth at most \(n\). Suppose that the first one has the root \(S_{1}\), the signature \(\{F_{1}^{1},\ldots,F_{k_{1}}^{1}\}\), and that the trace of the path from \(S_{1}\) to \(F_{i}^{1}\) is \(L_{i}^{1}\); and that the second has the root \(S_{2}\), the signature \(\{F_{1}^{2},\ldots,F_{k_{2}}^{2}\}\), and that the trace of the path from \(S_{2}\) to \(F_{i}^{2}\) is \(L_{i}^{2}\). Suppose also that the split in \(S\) is marked by two supplementary judgements, \(\vdash\phi_{1}\) leading to \(S_{1}\) and \(\vdash\phi_{2}\) leading to \(S_{2}\). Consider now two new decision trees: \(T_{1}\), obtained by adding \(\vdash\phi_{1}\) to all the nodes of the tree rooted at \(S_{1}\) except its root, and having \(S\cup\{\vdash\phi_{1}\}\) as root; and \(T_{2}\), obtained by adding \(\vdash\phi_{2}\) to all the nodes of the tree rooted at \(S_{2}\) except its root, and having \(S\cup\{\vdash\phi_{2}\}\) as root. Because \(S_{1}\) and \(S_{2}\) are children of \(S\) in the initial tree, guarded by \(\vdash\phi_{1}\) and \(\vdash\phi_{2}\) respectively, we get that indeed \(T_{1}\) and \(T_{2}\) are decision trees. Moreover, the trace of the path from the root of \(T_{i}\) to \(F_{j}^{i}\) is still \(L_{j}^{i}\), as in the case of \(S_{i}\). Moreover, \(T_{i}\) has depth at most \(n\). From the working hypothesis we know that for each \(i\leq 2\) and each \(j\leq k_{i}\), \[\frac{S\ \ \vdash\phi_{i}\ \ L_{j}^{i}}{\vdash\theta}.\] Hence, for each \(i\leq 2\) and each \(j\leq k_{i}\), \[\frac{S\cup\{\vdash\phi_{i}\}\ \ L_{j}^{i}}{\vdash\theta}.\] Applying the inductive hypothesis in the tree \(T_{i}\), we get from here that \[\frac{S\ \ \vdash\phi_{1}}{\vdash\theta}\quad\text{and}\quad\frac{S\ \ \vdash\phi_{2}}{\vdash\theta}.\] Next, by the cases 2. or 4. of the First Totality Lemma 13, we get the proof. 2. We prove by induction on the structure of the decision tree, starting from the leaves upwards, that \(\vdash\theta\) is provable from each set of judgements that labels a node of the tree. 
It implies that it is provable from the root \(S\). If \(F\) is a leaf that has no siblings and \(S\) is its parent, then from \(\frac{F}{\vdash\theta}\) we derive \(\frac{S}{\vdash\theta}\), since we have \(\frac{S}{F}\). If \(F_{1},F_{2}\) are two sibling leaves with parent \(S\), then there exists a pair of supplementary judgements \(\vdash\phi\) and \(\vdash\psi\) such that \(F_{1}=V_{1}\cup\{\vdash\phi\}\) and \(F_{2}=V_{2}\cup\{\vdash\psi\}\). From the definition of decision trees, we know that \[\frac{S\ \ \vdash\phi}{V_{1}}\quad\text{ and }\quad\frac{S\ \ \vdash\psi}{V_{2}},\] implying \[\frac{S\ \ \vdash\phi}{V_{1}\ \ \vdash\phi}\quad\text{ and }\quad\frac{S\ \ \vdash\psi}{V_{2}\ \ \vdash\psi}.\] Using the hypothesis, we have \[\frac{V_{1}\ \ \vdash\phi}{\vdash\theta}\quad\text{ and }\quad\frac{V_{2}\ \ \vdash\psi}{\vdash\theta}.\] From these we get \[\frac{S\ \ \vdash\phi}{\vdash\theta}\quad\text{ and }\quad\frac{S\ \ \vdash\psi}{\vdash\theta}.\] Since \(\vdash\phi\) and \(\vdash\psi\) are supplementary judgements, using the First Totality Lemma 13, we get \(\frac{S}{\vdash\theta}\). ## V Theories and Models In what follows we use \(\mathcal{L}\) to range over \(\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\), as the following definitions are uniform for all LLQ. A _theory_ \(\mathbb{T}\) in \(\mathcal{L}\) is a set of judgements that is deductively closed (in symbols, \(\mathbb{T}\Vdash_{\mathcal{L}}\gamma\) implies \(\gamma\in\mathbb{T}\)). An _axiomatic theory_ in \(\mathcal{L}\) is a theory for which there exists a set of judgements, called _axioms_, such that all the judgements in the theory can be proven in \(\mathcal{L}\) from the axioms; it is _finitely axiomatized_ if it admits a finite set of axioms. If \(\mathbb{T}\) and \(\mathbb{T}^{\prime}\) are two theories in \(\mathcal{L}\) such that \(\mathbb{T}\subseteq\mathbb{T}^{\prime}\), we say that \(\mathbb{T}^{\prime}\) is an _extension_ of \(\mathbb{T}\); it is a _proper extension_ if \(\mathbb{T}\subsetneq\mathbb{T}^{\prime}\). A theory \(\mathbb{T}\) in \(\mathcal{L}\) is _disjunctive_ if for any formulas \(\phi,\psi\in\mathcal{L}\), \(\vdash\phi\vee\psi\in\mathbb{T}\) implies that either \(\vdash\phi\in\mathbb{T}\) or \(\vdash\psi\in\mathbb{T}\). It is not difficult to observe that if \(\mathbb{T}\) is a disjunctive theory then, because of (tot) and (wem), for any pair of supplementary judgements in \(\mathcal{L}\) at least one of the judgements belongs to \(\mathbb{T}\). A theory in \(\mathcal{L}\) is _inconsistent_ if it contains \(\top\vdash\bot\), otherwise it is _consistent_; it is _maximal consistent_ if it is consistent and all its proper extensions are inconsistent. A _model of a theory_ \(\mathbb{T}\) is a model \(m\) that satisfies all the judgements of the theory. If the theory is axiomatized, \(m\) is a model for all the axioms iff it is a model of the theory. **Lemma 23**.: _In all LLQ the following statements are true._ 1. _If a theory has a model, then it is consistent._ 2. _Any model satisfies a disjunctive consistent theory._ Proof.: 1. This is a consequence of soundness. 2. For a model \(m\), define \(\mathbb{T}_{m}=\{\Gamma\vdash\phi\mid\Gamma\models_{m}\phi\}\). Suppose \(\vdash\phi\vee\psi\in\mathbb{T}_{m}\). Then either \(\phi\vdash\psi\) or \(\psi\vdash\phi\) is in \(\mathbb{T}_{m}\). 
Assume \(\phi\vdash\psi\in\mathbb{T}_{m}\); then \(\vdash\psi\multimap\multimap\phi\vee\psi\in\mathbb{T}_{m}\). Since \(\vdash\phi\vee\psi\in\mathbb{T}_{m}\), we get \(\vdash\psi\in\mathbb{T}_{m}\). In the case of \(\mathbb{L}_{1}^{*}\) we can identify a special class of disjunctive consistent theories. **Definition 24**.: \(A\) diagrammatic theory _is a consistent theory \(\mathbb{T}\) of \(\mathbb{L}_{1}^{*}\) such that for any atomic proposition \(p\in\mathbb{P}\),_ * _either_ \(p\vdash\bot\in\mathbb{T}\)_,_ * _or there exists_ \(\varepsilon\in[0,\infty)\) _such that_ \(\varepsilon\vdash p\) _and_ \(p\vdash\varepsilon\in\mathbb{T}\)_._ To simplify the notation, when we have \(\phi\vdash\psi,\ \ \psi\vdash\phi\in\mathbb{T}\) we simply write \[\phi\dashv\vdash\psi\in\mathbb{T}.\] Observe that this is equivalent in LLQ to \(\vdash\phi\multimap\multimap\psi\in\mathbb{T}\). A simple induction on the structure of formulas proves that in a diagrammatic theory, for any \(\phi\in\mathbb{L}_{1}^{*}\), either \(\phi\vdash\bot\in\mathbb{T}\), or there exists \(\varepsilon\in[0,\infty)\) such that \(\varepsilon\dashv\vdash\phi\in\mathbb{T}\). **Lemma 25**.: _In \(\mathbb{L}_{1}^{*}\) we have that_ 1. _Every diagrammatic theory has a unique model._ 2. _Every model satisfies a unique diagrammatic theory._ 3. _A theory is diagrammatic iff it is maximal consistent._ 4. _Every disjunctive consistent theory has a unique diagrammatic extension, and a unique model._ Proof.: 1. Let \(\mathbb{T}\) be a diagrammatic theory. For each \(p\in\mathbb{P}\), there exists \(v_{p}\in[0,\infty]\) such that \(p\dashv\vdash v_{p}\in\mathbb{T}\), where if \(v_{p}=\infty\) the symbol \(v_{p}\) is replaced by \(\bot\) in the respective judgement. We construct the model \(m_{\mathbb{T}}\colon\mathbb{P}\to[0,\infty]\) by \(m_{\mathbb{T}}(p)=v_{p}\) for any \(p\in\mathbb{P}\). A simple induction on the structure of formulas in \(\mathbb{T}\) proves that \(m_{\mathbb{T}}\) is a model of \(\mathbb{T}\). Let \(m^{\prime}\) be an arbitrary model of \(\mathbb{T}\). We can easily prove that for each \(p\in\mathbb{P}\), \(m^{\prime}(p)=v_{p}\), hence \(m^{\prime}=m_{\mathbb{T}}\). 2. Let \(m\) be a model of \(\mathbb{L}_{1}^{*}\). We showed in Lemma 23 that \(\mathbb{T}_{m}=\{\Gamma\vdash\phi\mid\Gamma\models_{m}\phi\}\) is a consistent theory admitting \(m\) as a model. We prove it is diagrammatic. Let \(\phi\in\mathbb{L}_{1}^{*}\) and \(v_{\phi}=m(\phi)\in[0,\infty]\). If \(v_{\phi}=\infty\), then \(\phi\models_{m}\bot\), hence \(\phi\vdash\bot\in\mathbb{T}_{m}\). Otherwise, \(v_{\phi}\models_{m}\phi\) and \(\phi\models_{m}v_{\phi}\), implying \(\phi\dashv\vdash v_{\phi}\in\mathbb{T}_{m}\). We prove that it is unique: suppose there exists a different diagrammatic theory \(\mathbb{T}\neq\mathbb{T}_{m}\) that is satisfied by \(m\). Let \(\phi\vdash\psi\) be a judgement that is present in only one of the theories. Suppose \(m(\phi)=r\) and \(m(\psi)=s\). Since both these theories are diagrammatic, \(\phi\dashv\vdash r,\ \psi\dashv\vdash s\in\mathbb{T}\cap\mathbb{T}_{m}\). Since \(\phi\vdash\psi\) belongs to one of these theories, in this one we also have \(r\vdash s\). Since this theory must be consistent, we obtain that \(r\geq s\). But then \(r\vdash s\) is a theorem in \(\mathbb{L}_{1}^{*}\), hence is present in both theories. 
But then, using the fact that \(\phi\dashv\vdash r,\ \psi\dashv\vdash s\in\mathbb{T}\cap\mathbb{T}_{m}\), we get \(\phi\vdash\psi\in\mathbb{T}\cap\mathbb{T}_{m}\). 3. (\(\Rightarrow\)) Suppose \(\mathbb{T}\) is diagrammatic and let \(\phi\vdash\psi\not\in\mathbb{T}\). Then \(m_{\mathbb{T}}(\phi)<m_{\mathbb{T}}(\psi)\). Hence, there exist \(r,s\in[0,\infty)\) such that \(m_{\mathbb{T}}(\phi)<r<s<m_{\mathbb{T}}(\psi)\). As \(\mathbb{T}\) is diagrammatic and disjunctive, \(r\vdash\phi,\ \psi\vdash s\in\mathbb{T}\). From here we get that in the theory generated by \(\mathbb{T}\cup\{\phi\vdash\psi\}\) we can prove \(r\vdash s\) for \(r<s\), which in \(\mathbb{L}_{1}^{*}\) is sufficient to prove inconsistency. (\(\Leftarrow\)) Suppose \(\mathbb{T}\) is maximal consistent. We first prove that it is disjunctive. Suppose \(\vdash\phi\vee\psi\in\mathbb{T}\) but \(\vdash\phi\not\in\mathbb{T}\). Then, from the maximality, there must exist a proof of \(\vdash\bot\) in \(\mathbb{T}\cup\{\vdash\phi\}\). Similarly, if \(\vdash\psi\not\in\mathbb{T}\), there must exist a proof of \(\vdash\bot\) in \(\mathbb{T}\cup\{\vdash\psi\}\). Combining these two proofs, one can get a proof of \(\vdash\bot\) from \(\mathbb{T}\cup\{\vdash\phi\vee\psi\}=\mathbb{T}\), contradicting the consistency of \(\mathbb{T}\). Hence \(\mathbb{T}\) is disjunctive and consistent, hence diagrammatic. 4. Consider a disjunctive consistent theory \(\mathbb{T}\) and let \(p\in\mathbb{P}\). As it is disjunctive, for any \(r\geq 0\) we can either prove \(p\vdash r\) or \(r\vdash p\). This determines a unique value \(v_{p}\) such that if \(p\vdash r\in\mathbb{T}\) then \(v_{p}\geq r\), and similarly, if \(r\vdash p\in\mathbb{T}\) then \(v_{p}\leq r\). Thus, we have a model of \(\mathbb{T}\). The remaining claims follow from the previous cases. ## VI Normal Forms In this section, we prove that any finitely axiomatized theory can be presented in a normal form, where all the axioms have a specific syntactic format. There are some important classes of judgements that play a crucial role in our development: \[\bot\vdash\phi\ \ \mid\ \ \phi\vdash\top\qquad\text{(tautological)}\] \[\top\vdash\bot\ \ \mid\ \ \top\vdash\mathbb{1}\ \ \mid\ \ \mathbb{1}\vdash\bot\qquad\text{(inconsistent)}\] \[\underbrace{\top\vdash p\ \ \mid\ \ p\vdash\bot}_{\text{alethic}}\ \ \mid\ \ \underbrace{\vdash\neg\neg p}_{\text{finitist}}\qquad\text{(assertive)}\] \[(\bigotimes_{i\leq n}r_{i}\ast p_{i})\otimes r\ast\mathbb{1}\vdash(\bigotimes_{j\leq m}s_{j}\ast q_{j})\otimes s\ast\mathbb{1}\qquad\text{(affine)}\] where \(p,p_{i},q_{j}\in\mathbb{P}\) are atomic propositions. In the case of \(\mathbb{L}\) and \(\mathbb{L}_{1}\) the coefficients in an affine judgement are positive integers, and for \(\mathbb{L}\) the term involving \(\mathbb{1}\) is not present. 
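Over models that never take the value \(\infty\), an affine judgement is exactly the linear inequality \(\sum_{i}r_{i}x_{i}+r\geq\sum_{j}s_{j}x_{j}+s\) in the values \(x_{p}=m(p)\). A toy translation (our sketch, not the normalization algorithm presented below):

```python
# Translate an affine judgement
#   (r1*p1 (x) ... (x) rn*pn) (x) r*1  |-  (s1*q1 (x) ... (x) sm*qm) (x) s*1
# into the inequality  sum_i r_i*x_i + r >= sum_j s_j*x_j + s  over finite
# models; coefficients are collected per variable on the left-hand side.
def affine_to_inequality(lhs, r, rhs, s):
    coeff = {}
    for ri, pi in lhs:
        coeff[pi] = coeff.get(pi, 0.0) + ri
    for sj, qj in rhs:
        coeff[qj] = coeff.get(qj, 0.0) - sj
    return coeff, s - r        # means: sum_p coeff[p] * x_p >= s - r

# Example: 2*p (x) 1*q (x) 1*1 |- 3*q (x) 2*1  becomes  2x_p - 2x_q >= 1.
print(affine_to_inequality([(2, 'p'), (1, 'q')], 1.0, [(3, 'q')], 2.0))
```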
## VI Normal Forms

In this section, we prove that any finitely axiomatized theory can be presented in a normal form, where all the axioms have a specific syntactic format. There are some important classes of judgements that play a crucial role in our development:

\[\bot\vdash\phi\ \ |\ \ \phi\vdash\top\] (tautological)
\[\top\vdash\bot\ \ |\ \ \top\vdash\mathbb{1}\ \ |\ \ \mathbb{1}\vdash\bot\] (inconsistent)
\[\underbrace{\top\vdash p\ \ |\ \ p\vdash\bot}_{\text{alethic}}\ \ |\ \ \underbrace{\vdash\neg\neg p}_{\text{finitist}}\] (assertive)
\[(\bigotimes_{i\leq n}r_{i}*p_{i})\otimes r*\mathbb{1}\vdash(\bigotimes_{j\leq m}s_{j}*q_{j})\otimes s*\mathbb{1}\] (affine)

where \(p,p_{i},q_{j}\in\mathbb{P}\) are atomic propositions. In the case of \(\mathbb{L}\) and \(\mathbb{L}_{1}\) the coefficients in an affine judgement are positive integers, and for \(\mathbb{L}\) the term involving \(\mathbb{1}\) is not present.

**Definition 26** (Normal form).: _A judgement is in normal form if it is either tautological, inconsistent, assertive (finitist or alethic), or affine._

**Notation 27**.: _Since in \(\mathbb{L}_{1}^{*}\) the scalar product commutes with all the other logical connectives, we assume hereafter that in all the formulas the scalar products guard the atomic propositions or the constants, and no other scalar products appear in a formula._

**Definition 28**.: _A theory in \(\operatorname{\text{LLQ}}\) is normal, or it has a normal axiomatization, if it admits a finite axiomatization such that_

* _every axiom is in normal form;_
* _no atomic proposition that occurs in an alethic axiom appears in any other axiom;_
* _there is an assertive judgement for each atomic proposition that appears in the axioms._

For \(\operatorname{\text{LLQ}}\) it is not possible, in general, to convert a judgement into a model-theoretically equivalent judgement in normal form. It is however possible to associate with any judgement \(\gamma\) a finite (possibly empty) set of normal theories \(\mathbb{T}_{1},\ldots,\mathbb{T}_{n}\), such that:

\[m\models\gamma\qquad\qquad\text{iff}\qquad\text{for some }\ i\leq n,\ m\models\mathbb{T}_{i}\.\]

In such a case, we call the set \(\{\mathbb{T}_{1},\ldots,\mathbb{T}_{n}\}\) of theories a _normal representation of the judgement \(\gamma\)_. Similarly, a _normal representation of a finite set of judgements_ \(V\) (or of a finitely axiomatized theory) is a finite (possibly empty) set of normal theories \(\mathbb{T}_{1},\ldots,\mathbb{T}_{n}\), such that \(m\) is a model of \(V\) iff it is a model for at least one of the theories \(\mathbb{T}_{1},\ldots,\mathbb{T}_{n}\).

### _Normalization Algorithm_

There exists a simple algorithm that allows us to compute, for any given finite set of judgements \(V\), its normal representation \(\mathcal{N}(V)\). To present the algorithm, we first need to introduce a couple of constructions.

_The discrimination function \(\mathcal{D}\)_ is a nondeterministic function that takes a judgement \(\gamma\) as input and returns a finite set of finite sets of judgements as output, so that \(m\) is a model of \(\gamma\) iff it is a model for at least one of the sets of judgements in \(\mathcal{D}(\gamma)\). It is defined inductively on the syntax of the judgement. To simplify the future development, we present the cases as decision trees (recall Definition 28); it is not difficult to verify that each of the graphs involved is indeed a decision tree. The function \(\mathcal{D}\) takes the label of the root as input and returns the signature of the tree as output.

The function \(\mathcal{D}\) applies uniformly to \(\mathbb{L}\), \(\mathbb{L}_{1}\) and \(\mathbb{L}_{1}^{*}\). The only difference is that for \(\mathbb{L}\) we don't use Rule 12, and for \(\mathbb{L}\) and \(\mathbb{L}_{1}\) the value of \(r>0\) in Rules 4, 8, 10, 11, and 12 will be an integer and \(r*\phi\) is in fact \(r\phi\). The following results do not depend on the nondeterministic choices used in computing \(\mathcal{D}\).

**Lemma 29**.: _Let \(\gamma\) be a judgement in \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\)._

1. _Any disjunctive theory in_ \(\mathcal{L}\) _that contains_ \(\gamma\) _also contains all the judgements of at least one of the elements in_ \(\mathcal{D}(\gamma)\)_._
2. _Any disjunctive theory in_ \(\mathcal{L}\) _that contains the judgements of at least one element in_ \(\mathcal{D}(\gamma)\) _also contains_ \(\gamma\)_._
3. \(m\) _is a model for_ \(\gamma\) _iff_ \(m\) _is a model for at least one of the elements in_ \(\mathcal{D}(\gamma)\)_._

Proof.: Suppose that the signature of \(\mathcal{D}(\gamma)\) is \(\{F_{1},\ldots,F_{n}\}\).

1. Walking from the root to the leaves of the decision tree \(\mathcal{D}(\gamma)\), one needs to choose between supplementary judgements every time a node splits. Given a disjunctive theory \(\mathbb{T}\) that contains \(\gamma\), it must contain a set of choices of these supplementary judgements; let \(S\) be such a set. Consequently, \(S\) is (or contains) the trace of the path from the root to a particular leaf \(F\). Hence, there exists a set \(V\) of judgements so that \(F=V\cup S\). Moreover, \(S\subseteq\mathbb{T}\) and \(\gamma\in\mathbb{T}\), meaning

\[\frac{\mathbb{T}}{S\ \ \gamma}\,.\]

From Corollary 21 we know that

\[\frac{S\ \ \gamma}{F}\,,\qquad\text{hence,}\qquad\frac{\mathbb{T}}{F}\,.\]

2. Let \(F\) be the label of a leaf, and suppose that \(\mathbb{T}\) is a disjunctive theory such that \(F\subseteq\mathbb{T}\). From the definition of decision tree we know that \(\frac{F}{\gamma}\). Hence \(\gamma\in\mathbb{T}\).

3. (\(\Rightarrow\)) Suppose \(\gamma\) has a model \(m\). From Lemma 23, there exists a disjunctive consistent theory \(\mathbb{T}_{m}\) containing \(\gamma\), and \(m\) is a model of \(\mathbb{T}_{m}\). Applying item 1. of this lemma, at least one of the elements of \(\mathcal{D}(\gamma)\) is a subset of \(\mathbb{T}_{m}\). Hence, this element has \(m\) as a model. (\(\Leftarrow\)) Suppose that an element \(V\in\mathcal{D}(\gamma)\) admits a model \(m\). From Lemma 23, there exists a disjunctive consistent theory \(\mathbb{T}_{m}\) that contains all the judgements satisfied by \(m\); hence, \(V\subseteq\mathbb{T}_{m}\). Applying item 2. of this lemma, \(\gamma\in\mathbb{T}_{m}\), hence \(m\) is a model of \(\gamma\).

Next, we extend \(\mathcal{D}\) to take as inputs finite sets of judgements. We do it as follows, where \(V\) is an arbitrary set of judgements:

\[\mathcal{D}(\emptyset):=\emptyset\]
\[\mathcal{D}(V):=\left\{(V\setminus\{\gamma\})\cup T\mid\gamma\in V\text{ and }T\in\mathcal{D}(\gamma)\right\}.\]

Observe that \(\mathcal{D}\) remains a nondeterministic function and its action can still be presented as a decision tree. Moreover, if we apply \(\mathcal{D}\) to a finite set of judgements \(V\) we get a decision tree; if we apply \(\mathcal{D}\) again to the labels of its leaves we get a new decision tree; and we can continue applying \(\mathcal{D}\) to the labels of the leaves of the previous tree, obtaining larger and larger trees. However, this process eventually ends when one cannot apply \(\mathcal{D}\) any more. This happens when all the judgements of the leaves are in normal form. Let us denote by \(\mathcal{D}^{*}(V)\) this decision tree. Of course \(\mathcal{D}^{*}(V)\) is not uniquely defined and one can get differently shaped trees depending on the nondeterministic choices in applying \(\mathcal{D}\). However, for the results we present hereafter it is not important which of these trees we choose.

Given a finite set \(V\) of judgements and an atomic proposition \(p\in\mathbb{P}\), the _\(p\)-saturation of \(V\)_ is defined as follows:

\[sat_{p}(V)=\{V\cup\{\vdash\neg p\},V\cup\{\vdash\neg\neg p\}\}.\]

This function can be represented by a decision tree with root \(V\) and two leaves, \(V\cup\{\vdash\neg p\}\) and \(V\cup\{\vdash\neg\neg p\}\). Let \(sat(V)\) be the composition of the decision trees of \(sat_{p}(V)\) for all atomic propositions \(p\) that appear in at least one judgement in \(V\). Their order is not important. If \(P\) is the set of atomic propositions that appear in the judgements of \(V\), then

\[sat(V)=\{V\cup\{\vdash\neg x\mid x\in W\}\cup\{\vdash\neg\neg y\mid y\in P\setminus W\}\mid W\subseteq P\}.\]
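The saturation step is straightforward to implement. The following is a small sketch, reusing the tuple encoding of formulas from the evaluator above and representing a judgement as a pair `(lhs, rhs)`, with \(\top\) standing for an empty left-hand side; the helper names are ours.

```python
from itertools import chain, combinations

def neg(f):
    return ("neg", f)

def sat(V, atoms):
    """Compose the p-saturations over all atoms: each leaf decides, for every atom,
    whether it is asserted infinite (|- not p) or finite (|- not not p)."""
    atoms = sorted(atoms)
    powerset = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
    leaves = []
    for W in powerset:
        extra = {(("top",), neg(("atom", x))) for x in W}
        extra |= {(("top",), neg(neg(("atom", y)))) for y in atoms if y not in W}
        leaves.append(frozenset(V) | extra)
    return leaves

# One leaf per subset W of the atoms, matching the closed formula for sat(V) above:
assert len(sat(set(), {"p", "q"})) == 4
```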
If \(V\) is a finite set of judgements, let the _refinement of \(V\)_ be the set \(\mathit{ref}(V)\) of judgements obtained as follows:

1. if either \(\vdash\bot\in V\), or for some formula \(\phi\) both \(\vdash\neg\phi\), \(\vdash\neg\neg\phi\in V\), replace \(V\) with \(\{\vdash\bot\}\);
2. identify all alethic judgements in \(V\) and do simultaneously
   * if \(\top\vdash p\in V\), replace all the occurrences of \(p\) in all the other judgements in \(V\) with \(\top\);
   * if \(p\vdash\bot\in V\), replace all the occurrences of \(p\) in all the other judgements in \(V\) with \(\bot\);
3. let \(\mathit{ref}(V)\) be the result of these replacements.

These replacements produce equivalent results in every Lawvere logic, and for this reason the effect of refinement can also be presented as a decision tree.

Next, we use these functions to present an algorithm that computes the normal representation of a finitely axiomatized theory (identified by its finite axiomatization). Moreover, since these functions are presented as decision trees, the algorithm itself is represented by a decision tree.

### The Normalisation Algorithm

1. **Input:** a finite set \(R\) of judgements;
2. **Let** \(\mathcal{X}:=sat(R)\);
3. **For each** leaf label \(F\) of \(\mathcal{X}\) **compute** \(\mathcal{D}^{*}(F)\); **let** \(\mathcal{Y}\) be the decision tree obtained by composing \(\mathcal{X}\) with these trees;
4. **For each** leaf label \(W\) of \(\mathcal{Y}\) **compute** \(\mathit{ref}(W)\); **let** \(\mathcal{Z}\) be the decision tree obtained by composing \(\mathcal{Y}\) with these trees;
5. **If** \(\mathcal{Z}\neq\mathcal{X}\), **let** \(\mathcal{X}:=\mathcal{Z}\) and **go back to 3**;
6. **Else, output** \(\mathcal{Z}\).

Observe that the algorithm always terminates on finite inputs. Since it uses \(\mathcal{D}\), it is nondeterministic; however, the results presented hereafter remain true independently of the nondeterministic choices. Also, the computation of the algorithm can be represented as a decision tree with the root indexed by \(R\), and the structure given by the composition of the decision trees used in the steps of the algorithm.
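As a rough executable outline of the loop above, consider the following sketch; `discriminate_star` and `refine` stand for \(\mathcal{D}^{*}\) and \(\mathit{ref}\) and are deliberately left as stubs, while `sat` is the saturation sketched earlier.

```python
def discriminate_star(F):   # stub for D*: assumed to return normal-form leaf labels
    return [F]

def refine(W):              # stub for ref: assumed to perform the replacements above
    return W

def normalize(V, atoms):
    """Sketch of the normalisation algorithm: saturate once, then alternate
    discrimination and refinement on the leaf labels until a full pass is stable.
    The output is the list of leaf labels, i.e. the axiom sets of N(V)."""
    X = sat(V, atoms)                                        # step 2
    while True:
        Y = [F2 for F in X for F2 in discriminate_star(F)]   # step 3
        Z = [refine(W) for W in Y]                           # step 4
        if Z == X:                                           # steps 5-6
            return Z
        X = Z
```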
In what follows, we sketch how this algorithm works on a couple of examples that cover its main subtleties. Suppose we have a finite set \(V\) of judgements. If \(V\) is not already in normal form, we can use the theorems of LLQ to simplify the judgements and eventually convert them to a normal form. In doing this, in some cases, we will have to use pairs \((\gamma_{1},\gamma_{2})\) of supplementary judgements (see Section III) that will be treated as new axioms. This is done when the conversion cannot progress without extra assumptions. One can think of it as "a proof by cases" resulting in two new separate sets of judgements \(V_{1}\) and \(V_{2}\), each containing one of the supplementary judgements. The invariant preserved in each reduction step is that, for \(i\in\{1,2\}\),

* all the judgements in \(V_{i}\) are provable from \(V\cup\{\gamma_{i}\}\);
* all the judgements in \(V\) are provable from \(V_{i}\).

**Example 30** (Rule 3).: _Let \(\gamma=\theta\vdash(\phi\vee\psi)\otimes\rho\) be the judgement we would like to reduce to normal form. The disjunction occurring in \(\gamma\) is problematic, as it prevents \(\gamma\) from being provably equivalent to another (single) judgement in normal form. However, by using the supplementary hypotheses \(\psi\vdash\phi\) and \(\phi\vdash\psi\), we can split the reduction by cases and obtain \(V_{1}=\{\phi\vdash\psi,\theta\vdash\psi\otimes\rho\}\) and \(V_{2}=\{\psi\vdash\phi,\theta\vdash\phi\otimes\rho\}\), two sets of judgements in which each element is (at least) one step closer to being in normal form (see Fig. 1); this is Rule 3 in the definition of \(\mathcal{D}\). Note that the invariant described above is preserved._

**Example 31** (Rule 5).: _Let \(\gamma=\theta\vdash(\phi\multimap\psi)\otimes\rho\) be the judgement to be converted into normal form. In this specific case, the problematic connective is \(\multimap\). By adding appropriate pairs of supplementary judgements, in sequence, we split the reduction into four cases and obtain \(W_{1},\ldots,W_{4}\) as new sets of judgements (see Fig. 2); this is Rule 5 in the definition of \(\mathcal{D}\)._

_Of interest in this particular case is that, in order to guarantee that the new sets of judgements have strictly reduced complexity (interpreted as the number of sub-formulas not in normal form), we need to take several reduction steps._

Fig. 1: Conversion into normal representation (Rule 3)

Fig. 2: Conversion into normal representation (Rule 5)

Starting from a finite set of judgements \(V\), the normalization algorithm works essentially by repeatedly applying conversion rules to the judgements that are not in normal form, inspecting the structure of the formulas in the judgements. Note that Examples 30 and 31 describe actual conversion rules of the algorithm. As each conversion rule guarantees that the number of sub-formulas not in normal form is strictly reduced, the algorithm eventually terminates.

The output \(\mathcal{N}(V)\) of the algorithm is a set of theories (technically, only their axioms). The next theorem states the correctness of this conversion.

**Theorem 32** (Normal representation).: _Given a finite set \(V\) of judgements in \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\), the set of the theories axiomatized by the elements in \(\mathcal{N}(V)\) is a normal representation of the theory axiomatized by \(V\). Consequently, any model of \(V\) is a model for at least one of the elements in \(\mathcal{N}(V)\); and any model of an element in \(\mathcal{N}(V)\) is a model of \(V\)._

Proof.: The proof is a trivial induction on the structure of \(V\), relying on Lemma 29 and on the fact that all the constructions used in the normalization algorithm are decision trees. 

The normalization algorithm also allows us to prove the decidability of satisfiability in LLQ.

**Theorem 33** (Decidability of satisfiability in LLQ).: _Given a finite set \(V\) of judgements in \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\), \(V\) is satisfiable iff there exists \(S\in\mathcal{N}(V)\) s.t. \(\vdash\bot\not\in S\). Consequently, the satisfiability of judgements in LLQ is decidable._
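In terms of the sketches above, the decision procedure behind Theorem 33 is a one-liner (the pair encoding of judgements and the stubbed `normalize` are the ones introduced earlier):

```python
FALSUM = (("top",), ("bot",))   # the judgement  |- bot

def satisfiable(V, atoms):
    """Theorem 33: V is satisfiable iff some axiom set in N(V) is free of |- bot."""
    return any(FALSUM not in S for S in normalize(V, atoms))
```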
## VII Completeness and incompleteness

In this section we demonstrate firstly that all the logics for the Lawvere quantale are incomplete in general, even for theories over finitely many propositional symbols. Secondly, we prove that all LLQ are complete if we focus on the finitely-axiomatized theories only. Finally, we prove an approximate form of strong completeness over a well-behaved class of theories, not necessarily finitely-axiomatizable.

**Theorem 34** (Incompleteness for arbitrary theories).: _LLQ are incomplete: for any \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\), there exist theories \(\mathbb{T}\) and judgements \(\gamma\) in \(\mathcal{L}\) so that all the models of \(\mathbb{T}\) are models of \(\gamma\) but \(\gamma\) is not provable from \(\mathbb{T}\) in \(\mathcal{L}\)._

_Moreover, the result is independent of the particular proof systems that one can choose for LLQ, in the sense that any finite set of finitary proof rules that can be proposed (as an alternative to the rules presented in this paper) still produces an incomplete deductive system for each \(\mathcal{L}\)._

Proof.: Consider \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\) with their proof systems presented in Section IV, or any alternative finite set of _finitary_ rules that can describe \(\mathcal{L}\). Let \(p,q\in\mathbb{P}\) be two atomic propositions and \(\mathbb{T}\) a theory in \(\mathcal{L}\) axiomatized by all the judgements of type

\[(n+1)p\vdash nq\quad\text{for all }n\in\mathbb{N}\,.\]

Observe that in all models \(m\) of \(\mathbb{T}\) we have that \(m(p)\geq m(q)\), hence all the models of \(\mathbb{T}\) are also models of \(p\vdash q\). We claim that in none of the LLQ, nor in any alternative version of their deductive systems that is governed by a finite set of finitary rules, is \(p\vdash q\) provable from the axioms of \(\mathbb{T}\).

Assume there exists a finite proof of \(p\vdash q\) in \(\mathcal{L}\) from the set \(\{(n+1)p\vdash nq\mid n\geq 0\}\) of axioms of \(\mathbb{T}\). Since this proof is finite and uses a finite set of finitary rules, there must exist \(k\geq 0\) so that the only judgements used in the proof of \(p\vdash q\) are from the set \(V=\{(n+1)p\vdash nq\mid 0\leq n\leq k\}\). If that is the case, then, by Theorem 3 (soundness), any model of \(V\) is a model for \(p\vdash q\). But this is obviously false: consider, for instance, the model \(m\) such that \(m(p)=\frac{k}{k+1}\) and \(m(q)=1\). This is a model of \(V\), but not a model of \(p\vdash q\). 

A consequence of Theorem 34 is that not all consistent theories have models. For instance in \(\mathbb{L}_{1}\), the theory axiomatized by the following set of axioms

\[\{p\vdash n\mid n\in\mathbb{N}\}\cup\{\vdash\neg\neg p\}\,,\]

for some atomic proposition \(p\in\mathbb{P}\), is consistent, because any proof will use a finite subset of axioms from the first set and possibly \(\vdash\neg\neg p\), and these are not sufficient to prove \(\vdash\bot\). However, this theory has no model, because in any model \(m\) the axioms in the first set guarantee that \(m(p)\geq n\) for all \(n\in\mathbb{N}\), while the axiom in the singleton requires that \(m(p)\) be finite --contradiction.
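The counter-model used in the proof of Theorem 34 is easy to check numerically; the following sketch verifies, with exact rational arithmetic, that for every \(k\) it satisfies each axiom with index \(n\leq k\) while falsifying \(p\vdash q\):

```python
from fractions import Fraction

def check_theorem_34(k):
    """The model m(p) = k/(k+1), m(q) = 1 satisfies every axiom (n+1)p |- nq with
    n <= k, yet falsifies p |- q; a judgement a |- b holds iff a >= b."""
    p, q = Fraction(k, k + 1), Fraction(1)
    finite_axioms_hold = all((n + 1) * p >= n * q for n in range(k + 1))
    goal_fails = not (p >= q)
    return finite_axioms_hold and goal_fails

assert all(check_theorem_34(k) for k in range(1, 50))
```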
However, the deduction systems of LLQ are complete w.r.t. the corresponding semantics if we only consider the finitely-axiomatized theories.

**Theorem 35** (Completeness for finitely-axiomatized theories).: _Let \(\mathcal{L}\in\{\mathbb{L},\mathbb{L}_{1},\mathbb{L}_{1}^{*}\}\) and \(\mathbb{T}\) a finitely-axiomatized theory in \(\mathcal{L}\). If a judgement \(\gamma\) is a semantic consequence of \(\mathbb{T}\) in \(\mathcal{L}\), then \(\gamma\) is provable from \(\mathbb{T}\) in \(\mathcal{L}\) (in symbols, \(\mathbb{T}\models_{\mathcal{L}}\gamma\) implies \(\mathbb{T}\Vdash_{\mathcal{L}}\gamma\))._

Proof.: The proof proceeds similarly for \(\mathbb{L},\mathbb{L}_{1}\) and \(\mathbb{L}_{1}^{*}\), adapting only the arguments to the appropriate context. Hereafter we present the proof for \(\mathbb{L}_{1}^{*}\), as it is the most complex of all. We do the proof in three steps.

I. Firstly, we prove the statement under the assumption that \(\gamma\) is a normal judgement and \(\mathbb{T}\) a normal theory. There are a few cases to consider.

1. \(\gamma\) is the alethic judgement \(\vdash\neg p\) for some atomic proposition \(p\in\mathbb{P}\). In any model \(m\) of this judgement we have \(m(p)=\infty\). Hence, the hypothesis guarantees that in all models of \(\mathbb{T}\), the value of \(p\) is \(\infty\). This means that \(p\) must be present in the axiomatization, because otherwise it could have any value. Since \(\mathbb{T}\) has a normal axiomatization, it must have at least one assertive judgement for \(p\). If it is \(\top\vdash p\), then all the models of \(\mathbb{T}\) will evaluate \(p\) to \(0\) --impossible. If it is the finitist judgement \(\vdash\neg\neg p\), all the models of \(\mathbb{T}\) will evaluate \(p\) to a finite value --impossible. The only possibility left is \(\vdash\neg p\), and in this case our judgement is indeed present in \(\mathbb{T}\).

2. \(\gamma\) is the finitist judgement \(\vdash\neg\neg p\) for some atomic proposition \(p\in\mathbb{P}\). In any model \(m\) of this judgement, the value of \(p\) is finite. Hence, the hypothesis guarantees that in all models of \(\mathbb{T}\), the value of \(p\) is finite as well. This means firstly that \(p\) must be present in the axiomatization, because otherwise it could have \(\infty\) as value. Since \(\mathbb{T}\) has a normal axiomatization, it must have at least one assertive judgement for \(p\). If it is \(\top\vdash p\), then it also contains \(\vdash\neg\neg p\), since in all LLQ we can prove

\[\frac{\vdash\phi}{\vdash\neg\neg\phi}\,.\]

If it is the finitist judgement \(\vdash\neg\neg p\) itself, then \(\gamma\) is in \(\mathbb{T}\). If it is \(\vdash\neg p\), then all the models of \(\mathbb{T}\) assign to \(p\) the value \(\infty\) --impossible.

3. The last case is when \(\gamma\) is affine:

\[(\bigotimes_{i\leq n}r_{i}*p_{i})\otimes r*\mathbb{1}\vdash(\bigotimes_{j\leq m}s_{j}*q_{j})\otimes s*\mathbb{1}\]

for \(r,s,r_{i},s_{j}\in[0,\infty)\), \(p_{i},q_{j}\in\mathbb{P}\), with \(m\) and/or \(n\) possibly having value \(0\). We need to prove that our judgement is provable from the axioms of \(\mathbb{T}\). Consider a normal axiomatization of \(\mathbb{T}\). From the definition of normal axiomatization, some of these axioms are alethic, and none of the atomic propositions present in these alethic axioms are present in any other axioms. Suppose first that some of the atomic propositions \(p_{i},q_{j}\) appear in the alethic axioms of \(\mathbb{T}\). There are a few cases to consider:

* If the alethic axioms where these appear are of type \(\top\vdash p\) for some atomic proposition \(p\), then it is sufficient to cancel all instances of \(p\) from our judgement and focus on proving that the new judgement can be proven from the axioms of \(\mathbb{T}\).
This is sufficient to claim that our initial judgement can be proven as well, since the following rules are provable in all LLQ:

\[\frac{\top\vdash p\quad\phi\vdash\psi}{\phi\otimes r*p\vdash\psi}\quad\text{and}\quad\frac{\top\vdash p\quad\phi\vdash\psi}{\phi\vdash\psi\otimes r*p}\,.\]

* If an alethic axiom of type \(\vdash\neg p_{i}\) is in \(\mathbb{T}\), then

\[(\bigotimes_{i\leq n}r_{i}*p_{i})\otimes r*\mathbb{1}\vdash(\bigotimes_{j\leq m}s_{j}*q_{j})\otimes s*\mathbb{1}\]

is provable in \(\mathbb{T}\) because in any LLQ the following rule is provable:

\[\frac{p\vdash\bot}{r*p\otimes\phi\vdash\psi}\,.\]

* If an alethic axiom of type \(\vdash\neg q_{j}\) is in \(\mathbb{T}\) for some \(q_{j}\), then every model of \(\mathbb{T}\) evaluates the right-hand side of our judgement to \(\infty\); if no axiom \(\vdash\neg p_{i}\) were present, \(\mathbb{T}\) would have a model in which the left-hand side is finite and the judgement fails --impossible. Hence, at least one axiom \(\vdash\neg p_{i}\) must be in the axiomatization of \(\mathbb{T}\), and we are in the previous case.

It remains to prove the case when none of the atomic propositions \(p_{1},\ldots,p_{n},q_{1},\ldots,q_{m}\) appear in the alethic axioms of \(\mathbb{T}\). Consequently, all these atomic propositions appear in finitist axioms of \(\mathbb{T}\). Hence, we are looking for real values for these atomic propositions. In this case, we will prove that our judgement

\[(\bigotimes_{i\leq n}r_{i}*p_{i})\otimes r*\mathbb{1}\vdash(\bigotimes_{j\leq m}s_{j}*q_{j})\otimes s*\mathbb{1}\]

can be proven from the non-assertive axioms of \(\mathbb{T}\) only. Using the commutativity and associativity of the tensor product, we will reorganize both our judgement and the non-assertive axioms of \(\mathbb{T}\) so that we put together different copies of the same atomic proposition in a tensorial product, and use the facts that \(0*p=\top\) and \(r*\top\otimes\phi\dashv\vdash\phi\). So, without losing generality, we can assume that our judgement \(\gamma\) is

\[(\bigotimes_{i\leq k}a_{i}*x_{i})\otimes r*\mathbb{1}\vdash(\bigotimes_{i\leq k}b_{i}*x_{i})\otimes s*\mathbb{1}\]

and the non-assertive axioms of \(\mathbb{T}\) are

\[\left\{\begin{aligned} &(\bigotimes_{i\leq k}a_{i}^{1}*x_{i})\otimes r^{1}*\mathbb{1}\vdash(\bigotimes_{i\leq k}b_{i}^{1}*x_{i})\otimes s^{1}*\mathbb{1}\\ &\cdots\\ &(\bigotimes_{i\leq k}a_{i}^{l}*x_{i})\otimes r^{l}*\mathbb{1}\vdash(\bigotimes_{i\leq k}b_{i}^{l}*x_{i})\otimes s^{l}*\mathbb{1}\end{aligned}\right.\]

for some positive reals \(a_{i},b_{i},a_{i}^{j},b_{i}^{j},r,s,r^{j},s^{j}\) and atomic propositions \(x_{1},\ldots,x_{k}\). Consider the matrix \(A\in\mathbb{R}^{l\times k}\), the row vector \(C\in\mathbb{R}^{1\times k}\) and the vector \(\beta\in\mathbb{R}^{l}\),

\[A=\begin{pmatrix}a_{1}^{1}-b_{1}^{1}&\ldots&a_{k}^{1}-b_{k}^{1}\\ \ldots&\ldots&\ldots\\ a_{1}^{l}-b_{1}^{l}&\ldots&a_{k}^{l}-b_{k}^{l}\end{pmatrix}\qquad\beta=\begin{pmatrix}r^{1}-s^{1}\\ \ldots\\ r^{l}-s^{l}\end{pmatrix}\]
\[C=(a_{1}-b_{1},\ldots,a_{k}-b_{k})\]

and let \(\delta=r-s\). Since, from the hypothesis, any model of \(\mathbb{T}\) is a model of \(\gamma\) (this being also our working hypothesis in this case), there exists no \(x=(x_{1},\ldots,x_{k})\in\mathbb{R}^{k}\) such that

\[Ax+\beta\geq 0\,,\quad Cx+\delta<0\,.\]
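This reformulation is exactly a linear feasibility problem and can be checked mechanically: the semantic consequence holds iff the minimum of \(Cx\) over the polyhedron \(\{x\mid Ax+\beta\geq 0\}\) is at least \(-\delta\). Below is a small sketch using scipy's linear-programming routine; the toy instance at the end is ours.

```python
import numpy as np
from scipy.optimize import linprog

def affine_consequence(A, beta, C, delta):
    """True iff there is no x with A@x + beta >= 0 and C@x + delta < 0, i.e. the
    affine judgement encoded by (C, delta) holds in every model of the axioms
    encoded by (A, beta)."""
    res = linprog(c=C, A_ub=-A, b_ub=beta, bounds=(None, None))
    if res.status == 2:   # axioms unsatisfiable: consequence holds vacuously
        return True
    if res.status == 3:   # C@x unbounded below: a falsifying x exists
        return False
    return res.fun + delta >= 0

# One atom x: the axiom 2x |- x (x) 1 says x >= 1 (row a - b = 1, beta = 0 - 1 = -1).
A, beta = np.array([[1.0]]), np.array([-1.0])
assert affine_consequence(A, beta, C=np.array([2.0]), delta=-2.0)       # 3x |- x (x) 2*1
assert not affine_consequence(A, beta, C=np.array([2.0]), delta=-3.0)   # 3x |- x (x) 3*1
```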
(Integer case) If we are in the case of \(\mathbb{L}\) or \(\mathbb{L}_{1}\), then \(a_{i},b_{i},a_{i}^{j},b_{i}^{j}\) are all positive integers, and by applying Motzkin's rational transposition theorem [13] (here adapted already for the affine case) there exist \(t_{0}\in\mathbb{Z}\) and \(t=(t_{1},\ldots,t_{l})\in\mathbb{Z}^{1\times l}\), such that

\[Cx+\delta=t(Ax+\beta)+t_{0}\,,\quad t\geq 0\,,\quad t_{0}\geq 0\,.\]

But this means that by considering \(t_{i}\) copies of the \(i\)-th non-assertive axiom of \(\mathbb{T}\), and \(t_{0}\) copies of \(\mathbb{1}\), and repeatedly applying the rule

\[\frac{\phi_{1}\vdash\psi_{1}\quad\phi_{2}\vdash\psi_{2}}{\phi_{1}\otimes\phi_{2}\vdash\psi_{1}\otimes\psi_{2}}\]

we obtain a proof in \(\mathbb{T}\) for

\[\left((\bigotimes_{i\leq k}a_{i}*x_{i})\otimes r*\mathbb{1}\right)\vdash\left((\bigotimes_{i\leq k}b_{i}*x_{i})\otimes s*\mathbb{1}\right).\]

(Real case) If we are working in \(\mathbb{L}_{1}^{*}\), by the general Motzkin transposition theorem there exist \(t_{0}\in\mathbb{R}\) and \(t=(t_{1},\ldots,t_{l})\in\mathbb{R}^{1\times l}\), such that

\[Cx+\delta=t(Ax+\beta)+t_{0}\,,\quad t\geq 0\,,\quad t_{0}\geq 0\,.\]

In \(\mathbb{L}_{1}^{*}\) we can repeatedly apply the derived rule

\[\frac{\phi_{1}\vdash\psi_{1}\quad\phi_{2}\vdash\psi_{2}}{r*\phi_{1}\otimes s*\phi_{2}\vdash r*\psi_{1}\otimes s*\psi_{2}}\]

and, by following the pattern from \(t\) and \(t_{0}\), obtain a proof from \(\mathbb{T}\) for

\[\left((\bigotimes_{i\leq k}a_{i}*x_{i})\otimes r*\mathbb{1}\right)\vdash\left((\bigotimes_{i\leq k}b_{i}*x_{i})\otimes s*\mathbb{1}\right).\]

II. The second step in the proof is to assume that \(\mathbb{T}\) is finitely-axiomatized but not necessarily normal, and \(\gamma\) is in normal form. Let \(V\) be a finite axiomatization of \(\mathbb{T}\) and let \(\mathcal{N}(V)=\{V_{1},\ldots,V_{n}\}\). Hence, by applying Theorem 32, any model of \(\mathbb{T}\) is a model for at least one theory axiomatized by some \(V_{i}\); and conversely, any model of any theory axiomatized by some \(V_{i}\) is a model of \(\mathbb{T}\). Since any model of \(\mathbb{T}\) is a model of \(\gamma\), we get that for any \(i\leq n\), any model of \(V_{i}\) is a model of \(\gamma\). But \(V_{i}\) is a normal axiomatization, so we can apply step I. and obtain that

\[\text{for all}\quad i\leq n,\quad\frac{V_{i}}{\gamma}\,.\]

Observe that \(\{V_{1},\ldots,V_{n}\}\) is the signature of a decision tree with the root labelled by \(V\). Applying the second totality Lemma 22, we get

\[\frac{V}{\gamma}\,,\]

hence \(\gamma\) is provable in \(\mathbb{T}\).

III. Consider now the case when neither \(\mathbb{T}\) nor \(\gamma\) is in normal form. Suppose that \(V\) is a finite axiomatization for \(\mathbb{T}\), that \(\mathcal{N}(V)=\{V_{1},\ldots,V_{n}\}\), that \(\mathcal{N}(\gamma)=\{U_{1},\ldots,U_{m}\}\), and assume that the trace of the path from the root to \(U_{i}\) in the decision tree \(\mathcal{N}(\gamma)\) is \(L_{i}\). Let \(i\leq m\). If \(w\) is a model for \(V\cup L_{i}\), then, since all the models of \(V\) are models of \(\gamma\), we get that \(w\) is a model for \(L_{i}\cup\{\gamma\}\). But because \(\mathcal{N}(\gamma)\) is a decision tree we know that

\[\frac{\gamma\ \ L_{i}}{U_{i}}.\]

Hence, \(w\) is a model of \(U_{i}\).
Consider an arbitrary judgement \(\Delta\vdash\delta\in U_{i}\). It is in normal form and, moreover, we have just proven that any model of \(V\cup L_{i}\) is a model of \(\Delta\vdash\delta\). We can apply case II. and get that

\[\frac{V\ \ L_{i}}{\Delta\vdash\delta}\quad\text{implying further that}\quad\frac{V\ \ L_{i}}{U_{i}}.\]

But from the construction of the decision tree \(\mathcal{N}(\gamma)\) we know that \(\frac{U_{i}}{\gamma}\). Hence, for each \(i\leq m\),

\[\frac{V\ \ L_{i}}{\gamma}.\]

And applying the second totality Lemma 22, we get \(\frac{V}{\gamma}\). 

A consequence of this completeness result is the following.

**Corollary 36**.: _For any finitely axiomatized theory of \(\mathbb{L}_{1}^{*}\), the set of its diagrammatic extensions coincides with the set of the diagrammatic theories of its models._

### _Approximate Completeness of LLQ_

Incompleteness over general theories (Theorem 34) is a common trait of several many-valued logics [6, 7, 19], especially if they are interpreted over the reals. A weaker form of completeness, firstly proposed by Ben Yaacov [7], is _approximate completeness_. Rather than compromising on the theories, one asks instead whether a judgement can be "proven up to arbitrary precision". In \(\mathbb{L}_{1}^{*}\), approximate completeness can be formally stated as follows: whenever all the models of a set of judgements \(S\) are also models of \(\vdash\psi\), the judgement \(\varepsilon\vdash\psi\) is provable from \(S\) in \(\mathbb{L}_{1}^{*}\), for any \(\varepsilon>0\).

It is not difficult to see that the above statement is still too strong to hold in \(\mathbb{L}_{1}^{*}\) for general sets of judgements \(S\). Actually, already theories using only finitely-many atomic propositions (one is enough) can falsify the statement.

**Fact 37** (Failure of approximate completeness).: _Consider the set of judgements \(S=\{p\vdash n\mid n\in\mathbb{N}\}\) in \(\mathbb{L}_{1}^{*}\), for \(p\in\mathbb{P}\) a fixed atomic proposition, and take \(\psi=\neg p\)._

_The only model \(m\) of \(S\) is such that \(m(p)=\infty\), because satisfying all the judgements of the form \(p\vdash n\), for \(n\in\mathbb{N}\), is equivalent to saying that the interpretation of \(p\) is \(\infty\). Thus, it is also a model for \(\vdash\neg p\)._

_Now, for the sake of finding a contradiction, assume that approximate completeness holds in \(\mathbb{L}_{1}^{*}\) and let \(\varepsilon<\infty\). Then the judgement \(\varepsilon\vdash\neg p\) should be provable from \(S\). As any proof is finite, there must exist a finite subset \(S^{\prime}\subseteq S\) such that \(\varepsilon\vdash\neg p\) is provable from \(S^{\prime}\)._

_Define \(N:=\max\{n\mid p\vdash n\in S^{\prime}\}\). Then \(m^{\prime}(p):=N\) gives a model of \(S^{\prime}\) and, by Theorem 3 (soundness), \(m^{\prime}\models(\varepsilon\vdash\neg p)\). That is, \(\varepsilon=m^{\prime}(\varepsilon)\geq m^{\prime}(\neg p)\). However, \(m^{\prime}(\neg p)=\infty\), thus \(m^{\prime}(\neg p)>\varepsilon\) --contradiction._

Although we cannot hope for a general form of approximate strong completeness, we can still recover a restricted version of it by focusing on a suitable well-behaved class of judgements and theories over finitely-many atomic propositions. Let \(\mathbb{P}_{n}=\{p_{1},\ldots,p_{n}\}\) be a finite set of atomic propositions, and denote by \(\mathbb{L}_{1}^{*}(n)\) the logic \(\mathbb{L}_{1}^{*}\) restricted over \(\mathbb{P}_{n}\).
Then, our approximate completeness result is as follows:

**Theorem 38** (Restricted approximate completeness).: _Let \(S\) be a set of normal judgements in \(\mathbb{L}_{1}^{*}(n)\) such that it has only models valued over \([0,\infty)\). If a normal judgement\({}^{2}\) \(\vdash\psi\) is a semantic consequence of \(S\) in \(\mathbb{L}_{1}^{*}(n)\), then \(\varepsilon\vdash\psi\) is provable from \(S\), for any \(\varepsilon>0\)._

Footnote 2: By an abuse of notation, here we actually mean that \(\vdash\psi\) is provably equivalent to a judgement in normal form.

Proof.: We may assume that \(S\) is not finite, as the finite case is already covered (even in more generality) by Theorem 35. By hypothesis, \(\vdash\psi\) is provably equivalent to a judgement \(\gamma\) in normal form. The cases where \(\gamma\) is either tautological, inconsistent, or alethic are trivial. For the same reason, we also assume \(S\) not to contain tautological or inconsistent judgements, as the former can be derived already in \(\mathbb{L}_{1}^{*}(n)\), and the latter would allow us to prove any judgement. Moreover, by the hypothesis on the models of \(S\), we know that the only alethic judgements in \(S\) are of type \(\top\vdash p\). We can assume these not to be present in \(S\) either, as by replacing all the occurrences of \(p\) with \(\top\) in \(S\) we obtain a provably equivalent set of judgements over a reduced set of atomic propositions.

For the sake of keeping the argument of the proof simpler, we further assume that the constant \(\mathbb{1}\) is used neither in \(S\) nor in \(\psi\). This guarantees that we work with linear maps, rather than affine ones. The generalisation to the case of affine maps can be done by invoking the Fundamental Theorem of Linear Inequalities (for details see _e.g._[20, Corollary 7.1h]).

So, as we have done already in the completeness proof (Theorem 35), without loss of generality we can assume that our judgement \(\gamma\) is

\[(\bigotimes_{i\leq n}a_{i}*p_{i})\vdash(\bigotimes_{i\leq n}b_{i}*p_{i})\]

and the non-assertive judgements in \(S\) are

\[\begin{cases}(\bigotimes_{i\leq n}a_{i}^{1}*p_{i})\vdash(\bigotimes_{i\leq n}b_{i}^{1}*p_{i})\\ (\bigotimes_{i\leq n}a_{i}^{2}*p_{i})\vdash(\bigotimes_{i\leq n}b_{i}^{2}*p_{i})\\ \ldots\end{cases}\]

Let \(\mathcal{X}=\mathbb{R}^{n}\) and \(\mathcal{Y}=\mathbb{R}^{\mathbb{N}}\), and denote by \(\mathcal{X}^{*}\), \(\mathcal{Y}^{*}\) their dual spaces, respectively (_i.e._, \(\mathcal{X}^{*}\) is the set of all continuous linear functionals \(\mathcal{X}\to\mathbb{R}\)). Define the maps \(T_{S}\colon\mathcal{X}\to\mathcal{Y}\) and \(t_{\gamma}\colon\mathcal{X}\to\mathbb{R}\), for \(x_{i}\in\mathbb{R}\), \(j\in\mathbb{N}\), as follows:

\[T_{S}(x_{1},\ldots,x_{n})(j):=\sum_{i\leq n}(a_{i}^{j}-b_{i}^{j})x_{i}\,,\qquad t_{\gamma}(x_{1},\ldots,x_{n}):=\sum_{i\leq n}(a_{i}-b_{i})x_{i}\,.\]

In abstract terms, the set of judgements in \(S\) can be thought of as the inequality \(T_{S}(x_{1},\ldots,x_{n})\geq 0\) and, similarly, \(\gamma\) as the inequality \(t_{\gamma}(x_{1},\ldots,x_{n})\geq 0\). Define the sets

\[V_{S}=\left\{x^{*}\in\mathcal{X}^{*}\mid\forall x\in\mathcal{X}.\,T_{S}(x)\geq 0\text{ implies }x^{*}(x)\geq 0\right\},\]
\[Z_{S}=\left\{x^{*}\in\mathcal{X}^{*}\mid\exists y^{*}\in\mathcal{Y}^{*}.\,x^{*}=T_{S}^{*}(y^{*})\text{ and }y^{*}\geq 0\right\},\]

where \(T_{S}^{*}\colon\mathcal{Y}^{*}\to\mathcal{X}^{*}\) is the adjoint of \(T_{S}\), uniquely defined by the adjoint property as \(T_{S}^{*}(y^{*}):=y^{*}\circ T_{S}\).
By Hurwicz's general form of the Farkas lemma [18, Theorem 3.3], we know that \(V_{S}\) is the _regularly convex envelope_ of \(Z_{S}\), which corresponds to the topological closure \(\overline{Z_{S}}\) of \(Z_{S}\) for finite-dimensional vector spaces such as \(\mathcal{X}^{*}\). Let \(\pi_{k}\colon\mathcal{Y}\to\mathbb{R}\) denote the \(k^{\text{th}}\) projection function defined as \(\pi_{k}((x_{i})_{i\in\mathbb{N}})=x_{k}\). Clearly \(\pi_{k}\in\mathcal{Y}^{*}\). Moreover, the finite positive linear combinations of these projections form a dense subset of \(\mathcal{Y}_{+}^{*}=\{y^{*}\mid y^{*}\geq 0\}\); denote this subset by \(P\). Notice that \(Z_{S}=T_{S}^{*}(\mathcal{Y}_{+}^{*})\). Since \(T_{S}^{*}\) is a continuous function and \(P\subseteq\mathcal{Y}_{+}^{*}\) is dense, \(\overline{Z_{S}}=\overline{T_{S}^{*}(P)}\).

As we have established that \(V_{S}=\overline{Z_{S}}\), any element of \(V_{S}\) can be approached arbitrarily closely by a map of the form \(T_{S}^{*}(p)\), for \(p\in P\). In simpler terms, as \(T_{S}^{*}(p)=p\circ T_{S}\), any element of \(V_{S}\) is arbitrarily close to a finite positive linear combination of judgements in \(S\). Notice that, by hypothesis, \(\gamma\) is a semantic consequence of \(S\), which, in our more abstract terms, means that \(t_{\gamma}\in V_{S}\). From this, to prove \(\varepsilon\vdash\psi\) from \(S\) in \(\mathbb{L}_{1}^{*}\) we just need to pick an appropriate \(p\in P\) (which we know exists) and replicate the finite positive linear combination it represents by using the derived rule

\[\frac{\phi_{1}\vdash\psi_{1}\quad\phi_{2}\vdash\psi_{2}}{r*\phi_{1}\otimes s*\phi_{2}\vdash r*\psi_{1}\otimes s*\psi_{2}}\]

to obtain a judgement \(\gamma^{\prime}\) that is \(\varepsilon\)-close to \(\gamma\). As we assumed \(\gamma\) to be provably equivalent to \(\vdash\psi\), by chaining the two proofs together, we are done. 

## VIII Encodings

After having developed three propositional logics for the Lawvere quantale, let us now ask how they relate to other logics, namely classical Boolean logic, Łukasiewicz logic [5], and Ben Yaacov's continuous propositional logic [6].

### _Encoding of Boolean Propositional Logic_

**Theorem 39** (Encoding Boolean Logic).: _The theory \(\mathbb{T}\) in \(\mathbb{L}\) axiomatized by the only axiom_

\[\begin{array}{ll}\mathrm{(\textsc{tnd})}&\vdash\phi\vee(\neg\phi)\qquad\text{(tertium non datur)}\end{array}\]

_coincides with classical Boolean logic (\(\mathbb{P}\) is the set of atomic propositions, and \(\vee,\wedge,\neg\) are the classic disjunction, conjunction and negation respectively). More exactly, a judgement of propositional logic (note that it also belongs to \(\mathbb{L}\)) is provable in classic Boolean logic iff it is provable in \(\mathbb{L}\) from \(\mathbb{T}\)._

Proof.: Consider a judgement \(\gamma\) in Boolean logic with atomic propositions in \(\mathbb{P}\). This is, obviously, also a judgement in \(\mathbb{L}\). Note that a model of Boolean logic is a map \(m\colon\mathbb{P}\to\{\textit{true},\textit{false}\}\) that extends, as standard, to all logical symbols. If we consider the bijection between \(\{\textit{true},\textit{false}\}\) and \(\{0,\infty\}\) that maps _true_ to \(0\) and _false_ to \(\infty\), we discover that \(m\) is also a model of \(\mathbb{L}\) that satisfies the axiom (tnd), hence it is a model of \(\mathbb{T}\).
Similarly, any model of \(\mathbb{T}\), because it is a model of (tnd), interprets all the formulas as \(0\) or \(\infty\), and it is not difficult to verify that it is a model of Boolean logic.

From the soundness of Boolean logic we know that if \(\gamma\) is provable in Boolean logic, it is satisfied by all its models. But since the models of Boolean logic are the models of \(\mathbb{T}\) and vice versa, we obtain that all the models of \(\mathbb{T}\) are models of \(\gamma\). Because \(\mathbb{T}\) is finitely axiomatized, we can apply Theorem 35 and get that \(\gamma\) is provable in \(\mathbb{T}\). Similarly, if \(\gamma\) is provable from \(\mathbb{T}\), then \(\gamma\) is satisfied in all the models of \(\mathbb{T}\). But the models of \(\mathbb{T}\) are exactly the models of Boolean logic. Hence, in Boolean logic all the models satisfy \(\gamma\). Further, the completeness of Boolean logic guarantees that \(\gamma\) is provable in Boolean logic. 

### _Encoding of Łukasiewicz propositional logic_

Next we show that Łukasiewicz logic (Ł) can be encoded in \(\mathbb{L}_{1}\). Before doing that, we briefly recall the syntax and semantics of Ł-formulas and refer to [4] for further details. The formulas of Ł are freely generated from the atomic propositions by three logical connectives:

\[\top\mid\neg\phi\mid\phi\to\psi\,.\]

The models of Ł are assignments \(w\colon\mathbb{P}\to[0,1]\) of the propositional symbols to the unit interval, which are uniquely extended to the formulas as shown below\({}^{3}\):

Footnote 3: We consider the semantics used, _e.g._, in [6], where \(0\) corresponds to truth and any \(r\in(0,1]\) to a degree of truth/falsity, where \(1\) is absolute falsity.

\[w(\top):=0\,,\qquad w(\neg\phi):=1-w(\phi)\,,\qquad w(\phi\to\psi):=\max\{w(\phi)-w(\psi),0\}\,.\]

A formula \(\phi\) is satisfied by a model \(w\) whenever \(w(\phi)=0\). The following is a Hilbert-style axiomatisation for Ł:

\[\begin{array}{ll}\mathrm{(A1)}&(\phi\to\psi)\to\phi\\ \mathrm{(A2)}&((\theta\to\phi)\to(\theta\to\psi))\to(\psi\to\phi)\\ \mathrm{(A3)}&(\phi\to(\phi\to\psi))\to(\psi\to(\psi\to\phi))\\ \mathrm{(A4)}&(\phi\to\psi)\to(\neg\psi\to\neg\phi)\end{array}\]

Deduction is defined in the natural way, with _modus ponens_ being the only rule of inference. This axiomatization is (weakly) complete w.r.t. the semantics above [21, 22].
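As a quick sanity check of this reversed reading of the semantics (0 is truth, 1 is absolute falsity), here is a minimal evaluator in the same tuple style as before, verifying that axiom (A1) is a tautology; the grid of test values is ours.

```python
def w_eval(f, w):
    """Evaluate a Lukasiewicz formula in [0,1] under the reversed reading recalled
    above (0 = truth, 1 = absolute falsity). Formulas are nested tuples."""
    tag = f[0]
    if tag == "top":  return 0.0
    if tag == "atom": return w[f[1]]
    if tag == "neg":  return 1.0 - w_eval(f[1], w)
    if tag == "imp":  return max(w_eval(f[1], w) - w_eval(f[2], w), 0.0)
    raise ValueError(tag)

# (A1): (phi -> psi) -> phi evaluates to 0 under every assignment:
phi, psi = ("atom", "p"), ("atom", "q")
a1 = ("imp", ("imp", phi, psi), phi)
assert all(w_eval(a1, {"p": p / 10, "q": q / 10}) == 0.0
           for p in range(11) for q in range(11))
```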
The encoding function \(e\) mapping Ł-formulas to \(\mathbb{L}_{1}\)-formulas is defined as follows, for \(p\in\mathbb{P}\) and Ł-formulas \(\phi,\psi\):

\[\begin{array}{ll}e(p):=p\vee\mathbb{1}&\qquad e(\neg\phi):=e(\phi)\multimap\mathbb{1}\\ e(\top):=\top&\qquad e(\phi\to\psi):=e(\psi)\multimap e(\phi)\,.\end{array}\]

**Theorem 40** (Encoding Łukasiewicz Logic).: _Let \(\mathbb{T}\) be the theory in \(\mathbb{L}_{1}\) axiomatized by the only axiom_

\[\begin{array}{ll}\mathrm{(\textsc{1-bdd})}&\neg\neg\psi\vdash\mathbb{1}\multimap\psi\,.\end{array}\]

_Then, a formula \(\phi\) of Łukasiewicz logic is provable in Ł iff the judgement \(\vdash e(\phi)\) is provable in \(\mathbb{L}_{1}\) from \(\mathbb{T}\)._

Proof.: Note that a model \(m\) in \(\mathbb{L}_{1}\) satisfies (1-bdd) iff

\[\text{for all }\phi\in\mathbb{L}_{1}\,,\quad m(\phi)<\infty\implies m(\phi)\leq 1\,. \tag{1}\]

(\(\Leftarrow\)) Let \(\phi\) be an Ł-formula and assume \(\vdash e(\phi)\) is provable in \(\mathbb{L}_{1}\) from \(\mathbb{T}\). Let \(w\colon\mathbb{P}\to[0,1]\) be a model of Ł. Obviously, it is also a model for \(\mathbb{L}_{1}\) that moreover satisfies (1), hence \(w\models_{\mathbb{L}_{1}}\mathbb{T}\). By Theorem 3, \(w\) satisfies the judgement \(\vdash e(\phi)\), or equivalently \(w(e(\phi))=0\). By an easy induction on \(\phi\), one can show that for any Ł-model, \(w(\phi)=w(e(\phi))\). Thus \(w(\phi)=0\). As the Ł-model \(w\) in the argument above was generic, we have just proved that \(\phi\) is a tautology in Ł. By Chang completeness [21] of Ł, we then have that \(\phi\) is provable in Ł.

(\(\Rightarrow\)) Assume \(\phi\) is provable in Ł. Let \(m\colon\mathbb{P}\to[0,\infty]\) be a model of \(\mathbb{T}\). By an easy induction on \(\phi\), one can prove that, for any model \(m\) of \(\mathbb{T}\), \(m(e(\phi))\leq 1\); this follows because for propositional variables \(p\in\mathbb{P}\), \(m(e(p))=\min\{m(p),1\}\leq 1\), and the semantical interpretation of the other logical connectives remains well-defined within \([0,1]\). Thus \(m\) induces a proper model for Ł. Since we assumed \(\phi\) to be provable in Ł, by the soundness of the axiomatization for Ł, we have \(m(\phi)=0\). As before, \(m(\phi)=m(e(\phi))\), thus \(m(e(\phi))=0\). As the theory \(\mathbb{T}\) is finitely axiomatizable, by Theorem 35, \(\vdash e(\phi)\) is provable from \(\mathbb{T}\).
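Continuing the running sketches (the \(\mathbb{L}_{1}\) evaluator `ev` and the Ł evaluator `w_eval` above), the encoding and the induction invariant \(w(\phi)=w(e(\phi))\) used in the proof can be checked mechanically on sample formulas:

```python
def encode(f):
    """The encoding e of Theorem 40 over the tuple syntax:
    e(p) = p or 1, e(top) = top, e(not phi) = e(phi) -o 1, e(phi -> psi) = e(psi) -o e(phi)."""
    tag = f[0]
    if tag == "top":  return ("top",)
    if tag == "atom": return ("or", f, ("one",))
    if tag == "neg":  return ("lolli", encode(f[1]), ("one",))
    if tag == "imp":  return ("lolli", encode(f[2]), encode(f[1]))
    raise ValueError(tag)

# Any Lukasiewicz model w is also an L1 model, and w(phi) = w(e(phi)):
w = {"p": 0.3, "q": 0.8}
samples = [("neg", ("atom", "p")), ("imp", ("atom", "p"), ("atom", "q"))]
assert all(abs(w_eval(f, w) - ev(encode(f), w)) < 1e-12 for f in samples)
```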
### _Encoding of continuous propositional logic_

At last, we consider the encoding in \(\mathbb{L}_{1}^{*}\) of _continuous propositional logic_ (\(\mathsf{CL}\)), which is a conservative extension of Łukasiewicz logic proposed by Ben Yaacov [6] to reason about Banach spaces. The syntax and semantics of \(\mathsf{CL}\) are those of Ł, to which we add one single extra unary logical connective \(\frac{1}{2}\phi\) with the following semantical interpretation in \([0,1]\):

\[w(\tfrac{1}{2}\phi):=\tfrac{1}{2}w(\phi)\,.\]

\(\mathsf{CL}\) has a (weakly) complete axiomatization, consisting of the axioms (A1)-(A4) of Ł to which we add

\[\begin{array}{ll}\mathsf{(A5)}&\frac{1}{2}\phi\to(\phi\to\frac{1}{2}\phi)\\ \mathsf{(A6)}&(\phi\to\frac{1}{2}\phi)\to\frac{1}{2}\phi\end{array}\]

For the encoding of \(\mathsf{CL}\) into \(\mathbb{L}_{1}^{*}\), we consider as encoding function \(e\colon\mathsf{CL}\to\mathbb{L}_{1}^{*}\) the obvious extension of the one used for Łukasiewicz logic, such that:

\[e(\tfrac{1}{2}\phi):=\tfrac{1}{2}*e(\phi)\,.\]

**Theorem 41** (Encoding Continuous Logic).: _Let \(\mathbb{T}\) be the theory in \(\mathbb{L}_{1}^{*}\) axiomatized by the only axiom_

\[\begin{array}{ll}\mathrm{(\textsc{1-bdd})}&\neg\neg\psi\vdash\mathbb{1}\multimap\psi\,.\end{array}\]

_Then, a formula \(\phi\) of continuous propositional logic is provable in \(\mathsf{CL}\) iff the judgement \(\vdash e(\phi)\) is provable in \(\mathbb{L}_{1}^{*}\) from \(\mathbb{T}\)._

Proof.: Similar to the proof of Theorem 40. 

## IX Inference systems for the Lawvere quantale

In this section, we extend further the concept of proof systems in LLQ and discuss _inference systems_. These are obtained by requiring that a theory in \(\mathbb{L}_{1}^{*}\) obeys extra inferences (proof rules), in addition to its axioms and to the inferences of \(\mathbb{L}_{1}^{*}\). Because in LLQ inferences cannot be internalized as judgements (see Fact 17), closing a theory by a rule does not always produce another theory, as happens in classical logics, but often a set of theories.

Note that all the inferences of LLQ have finite sets of hypotheses and so, when one works with theories of LLQ, one will only have derived rules that contain a finite set of hypotheses. But if we now allow ourselves to work with inferences that might not be derived from the proof rules, we might have to handle inferences with a countable set of hypotheses. In fact, for our purpose, we are interested in only one type of such inferences, which we shall call _inductive inferences_.

**Definition 42** (Inductive inferences).: _An inductive inference in LLQ is an inference of type_

\[\frac{\{\vdash\phi_{i}\mid i\in\mathbb{N}\}}{\vdash\psi}\]

_such that for any \(i,j\in\mathbb{N}\) with \(i<j\), \(\phi_{j}\vdash\phi_{i}\)._

Observe that the inferences with a finite number of hypotheses are all particular cases of inductive inferences, since they can all be equivalently represented as inferences of type

\[\frac{\vdash\phi}{\vdash\psi}\]

by taking \(\phi\) to be the conjunction of all the hypotheses; this can be further seen as a particular inductive inference with \(\phi_{i}=\phi\), for all \(i\in\mathbb{N}\). Hence, all the proof rules of LLQ and all the inferences that can be derived from them are inductive inferences. Last but not least, observe that any axiom \(\vdash\psi\) can be seen as an inductive inference with an empty set of hypotheses:

\[\overline{\ \vdash\psi\ }\,.\]
Last but not least, observe that any axiom \(\vdash\psi\) can be seen as an inductive inference with an empty set of hypotheses: \[\overline{\vdash\phi}\,.\] **Definition 43**.: _A theory \(\mathbb{T}\) of \(\mathbb{L}_{\mathsf{1}}^{*}\) is closed under the inductive inference_ \[\frac{\{\vdash\phi_{i}\mid i\in\mathbb{N}\}}{\vdash\psi}\ \mathrm{(I)}\] _if for any diagrammatic extension \(\mathbb{T}^{+}\) of \(\mathbb{T}\) we have that_ * _either_ \(\vdash\psi\in\mathbb{T}^{+}\)_,_ * _or there exists_ \(\varepsilon\in[0,\infty)\)_,_ \(\varepsilon>0\) _and_ \(i\in\mathbb{N}\) _such that_ \(\phi_{i}\vdash\varepsilon\in\mathbb{T}^{+}\)_._ When a model \(m\) is such that \(\{\vdash\phi_{i}\mid i\in I\}\models_{m}\vdash\psi\), we say that it is a model of the inference \(I\). Observe that the previous definition makes sense semantically, since it implies that a theory \(\mathbb{T}\) of \(\mathbb{L}_{\mathsf{1}}^{*}\) is closed under the inference \(I\) iff any model \(m\) of \(\mathbb{T}\) is a model of \(I\). **Definition 44** (Inference System).: _An inference system in \(\mathbb{L}_{\mathsf{1}}^{*}\) is a set \(\mathcal{R}\) of inductive inferences in \(\mathbb{L}_{\mathsf{1}}^{*}\)._ _We say that a model satisfies an inference system if it is a model for each inference in the system._ _We say that a theory is closed with respect to an inference system, if it is closed under each inference in the system._ Since we can see any axiom of a theory as a particular inference with an empty set of hypotheses, we see that any finite axiomatic system of LLQ is, in fact, a particular case of an inference system. There exists, however, interesting mathematical theories, such as the quantitative equational logic, that cannot be presented by using an axiomatic system in LLQ, but only by using an inference system. Because the finite axiomatic systems in LLQ are particular type of inference systems, we can read the completeness theorem for finitely-axiomatized theories proven before, as a particular case of completeness for inference systems. In what follows, we will enforce these results and prove completeness results directly for inference systems. **Theorem 45** (Completeness for inference systems).: _Let \(\mathcal{R}\) be an inference system of \(\mathbb{L}_{1}^{*}\) and_ \[\frac{\{\vdash\ \phi_{i}\ |\ i\in\mathbb{N}\}}{\vdash\ \psi}\ (\mathrm{I})\] _an inductive inference. If \(I\) is satisfied by all the models of \(\mathcal{R}\), then all the finitely axiomatized theories closed under \(\mathcal{R}\) are also closed under \(I\)._ Proof.: Let \(\mathbb{T}\) be a finitely axiomatized theory closed under \(\mathcal{R}\). Then any model \(m\) of \(\mathbb{T}\) satisfies \(I\), hence * either \(m\models(\vdash\psi)\) * or there exist \(i\in\mathbb{N}\) and \(\varepsilon>0\) such that \(m\models(\phi_{i}\vdash\varepsilon)\). Applying Corollary 36, to \(m\) corresponds a diagrammatic theory \(\mathbb{T}_{m}\) such that \(m\models(\phi\vdash\phi^{\prime})\) implies \(\phi\vdash\phi^{\prime}\in\mathbb{T}_{m}\); and to each diagrammatic extension \(\mathbb{T}^{+}\) of \(\mathbb{T}\) corresponds a model \(m_{\mathbb{T}^{+}}\) such that \(m_{\mathbb{T}^{+}}\models(\phi\vdash\phi^{\prime})\) implies \(\phi\vdash\phi^{\prime}\in\mathbb{T}^{+}\). Consider now an arbitrary diagrammatic extension \(\mathbb{T}^{+}\) of \(\mathbb{T}\). 
Since \(m_{\mathbb{T}^{+}}\) is a model of \(\mathbb{T}\), we have that

* either \(m_{\mathbb{T}^{+}}\models(\vdash\psi)\), implying \(\vdash\psi\in\mathbb{T}^{+}\),
* or there exist \(i\in\mathbb{N}\) and \(\varepsilon>0\) such that \(m_{\mathbb{T}^{+}}\models(\phi_{i}\vdash\varepsilon)\), implying \(\phi_{i}\vdash\varepsilon\in\mathbb{T}^{+}\).

Hence, \(\mathbb{T}\) is closed under \(I\). 

### _The inference system of Quantitative Algebra_

In this section, we show how we can use LLQ as a support for quantitative equational reasoning [11]. Quantitative algebras [11] have been introduced as a generalization of universal algebras, meant to axiomatize not only congruences, but algebraic structures over extended metric spaces. Given an algebraic similarity type \(\Omega\), a quantitative algebra is an \(\Omega\)-algebra supported by an extended metric space, so that all the algebraic operators are nonexpansive. Such a structure can be axiomatized using an extension of equational logic that uses, instead of equations of type \(s=t\) for some terms \(s,t\), quantitative equations of type \(s=_{\varepsilon}t\) for some \(\varepsilon\in[0,\infty)\). Such a quantitative equation is interpreted as "the distance between the interpretations of \(s\) and \(t\) is less than or equal to \(\varepsilon\)".

In the theory of quantitative algebras, the predicates \(=_{\varepsilon}\) are treated as classic Boolean predicates, so in any model, \(s=_{\varepsilon}t\) is either _true_ or _false_. However, a different way to look at this is to think that we only have one equality predicate, and \(s=t\) is interpreted in the Lawvere quantale, thus allowing us to reason about the distance between \(s\) and \(t\). For instance, instead of \(s=_{\varepsilon}t\), we could use the syntax of \(\mathbb{L}_{1}^{*}\), create \(s=t\) as an atomic proposition, and write \(\varepsilon\vdash s=t\). This allows us to properly reason about extended metric spaces and encode, in our logic, the entire theory of quantitative equational reasoning. In this section we show how such an encoding is defined.

\(\mathbb{L}_{1}^{*}\) already has all the necessary ingredients to do this work. However, since \(\mathbb{L}_{1}^{*}\) is only propositional, the way to do this is to treat all the equations as atomic propositions. This is exactly how we encode classic equational logic into Boolean propositional logic. And, as in the classic case, while this is sufficient, it unfortunately requires an infinite set of axioms.

As we have already anticipated, the theory of quantitative equational logic requires an inference system in \(\mathbb{L}_{1}^{*}\); it cannot be encoded using an axiomatic system only. This is because a judgement of type \(s=_{\varepsilon}t\vdash s^{\prime}=_{\delta}t^{\prime}\) in quantitative reasoning corresponds to the inference

\[\frac{\varepsilon\vdash s=t}{\delta\vdash s^{\prime}=t^{\prime}}\]

which cannot be internalized, due to the fact that in \(\mathbb{L}_{1}^{*}\) the deduction theorem fails.

Concretely, assuming an algebraic similarity type \(\Omega\) and a set \(X\) of variables, we construct all the possible algebraic terms. Let \(\Omega X\) be the set of these terms. We define \(\mathbb{L}_{1}^{*}\) for \(\mathbb{P}=\{s=t\mid s,t\in\Omega X\}\). This gives us the syntax we need.
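To illustrate the intended reading, here is a schematic sketch of a quantitative algebra seen as an LLQ model: the value of the atomic proposition \(s=t\) is the distance between the interpretations of \(s\) and \(t\), and the judgement \(\varepsilon\vdash s=t\) holds exactly when that distance is at most \(\varepsilon\). The signature, the terms and all names are ours.

```python
class MetricModel:
    """A quantitative algebra seen as an LLQ model: atomic propositions are pairs
    of terms (s, t) and their value is the distance d(|s|, |t|) in an extended
    metric space. Terms and their interpretation are deliberately schematic."""
    def __init__(self, interp, dist):
        self.interp = interp      # term -> point of the carrier
        self.dist = dist          # point x point -> [0, inf]

    def value(self, s, t):
        return self.dist(self.interp[s], self.interp[t])

    def satisfies(self, eps, s, t):
        """eps |- (s = t)  holds iff  eps >= d(|s|, |t|)."""
        return eps >= self.value(s, t)

# Three closed terms interpreted in the real line with d(x, y) = |x - y|:
m = MetricModel({"s": 0.0, "t": 0.4, "u": 1.0}, lambda x, y: abs(x - y))
assert m.satisfies(0.5, "s", "t")        # 0.5 |- (s = t)
assert not m.satisfies(0.5, "s", "u")    # but not 0.5 |- (s = u)
```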
The original axioms of quantitative equational logic are presented in Table V, where they are stated for arbitrary terms \(s,t,u,s_{1},\ldots,s_{n},t_{1},\ldots,t_{n}\in\Omega X\), for an arbitrary \(n\)-ary operator \(f:n\in\Omega\), arbitrary positive reals \(\varepsilon,\varepsilon^{\prime}\in[0,\infty)\) and an arbitrary decreasing convergent sequence \((\varepsilon_{i})_{i\in\mathbb{N}}\) of positive reals with limit \(\varepsilon\). These axioms, together with the standard substitution, cut and assumption rules, provide the proof system of quantitative equational logic.

When translated into LLQ, the substitution, cut, and assumption rules are embedded in the way a proof operates, and the axioms of quantitative equational logic can be translated into the corresponding inferences in Table VI. However, in this translation, the set of axioms is actually infinite, because the term equalities are names for atomic propositions and, as such, we will have, for instance, a (refl) inference rule for each term \(t\), a (symm) inference rule for each pair of terms \(s\) and \(t\), etc. This is not surprising, as the same situation arises when we encode the classic equational logic developed for universal algebras into propositional logic.

Observe also that the axiom (cont) of quantitative equational logic, which is an infinitary axiom, is translated into \(\mathbb{L}_{1}^{*}\) as an inductive inference rule. Indeed, first of all, we can convert each hypothesis of type \(\varepsilon_{i}\vdash s=t\) into the equivalent one, \(\vdash\varepsilon_{i}\multimap(s=t)\). And secondly, for \(i\geq j\) we have \(\varepsilon_{i}\leq\varepsilon_{j}\), and this implies that

\[\varepsilon_{i}\multimap(s=t)\vdash\varepsilon_{j}\multimap(s=t)\]

is a theorem in \(\mathbb{L}_{1}^{*}\), so the hypotheses are ordered as Definition 42 requires.

A limitation of this encoding comes from the fact that we need an infinite set of inductive inferences to govern quantitative equational reasoning. But this is similar to what happens in classical logic when one encodes the classic equational logic into propositional logic. And, as in the classic case, this can be avoided by extending \(\mathbb{L}_{1}^{*}\) with predicates, which would allow us to present a more compact and finitary inference system. We leave this extension for future work.

## X Conclusions

In this paper we developed a class of propositional logics interpreted in the Lawvere quantale. We developed natural deduction systems for them, which collect rules similar to rules well known from other logics. We showed that, despite their natural arithmetic interpretation, these logics manifest original metatheoretical features that differentiate them from other related logics. We proved that the logics are incomplete in general, but complete if we restrict to finitely-axiomatized theories. We presented a normalization algorithm which shows that satisfiability, and hence semantic consequence from finitely-axiomatized theories, is decidable, and which suggests complexity bounds for it.

While discussing the differences between our logics and other well-known logics, we also showed that Boolean propositional logic, Łukasiewicz logic, Ben Yaacov's continuous propositional logic and quantitative equational logic can all be encoded in our new setting. Moreover, we demonstrated that for this class of logics one can use either axiomatic systems or systems of inferences, the latter providing higher expressivity from the point of view of the mathematical theories that can be developed.
2308.14114
Hybrid Transformer-RNN Architecture for Household Occupancy Detection Using Low-Resolution Smart Meter Data
Residential occupancy detection has become an enabling technology in today's urbanized world for various smart home applications, such as building automation, energy management, and improved security and comfort. Digitalization of the energy system provides smart meter data that can be used for occupancy detection in a non-intrusive manner without causing concerns regarding privacy and data security. In particular, deep learning techniques make it possible to infer occupancy from low-resolution smart meter data, such that the need for accurate occupancy detection with privacy preservation can be achieved. Our work is thus motivated to develop a privacy-aware and effective model for residential occupancy detection in contemporary living environments. Our model aims to leverage the advantages of both recurrent neural networks (RNNs), which are adept at capturing local temporal dependencies, and transformers, which are effective at handling global temporal dependencies. Our designed hybrid transformer-RNN model detects residential occupancy using hourly smart meter data, achieving an accuracy of nearly 92\% across households with diverse profiles. We validate the effectiveness of our method using a publicly accessible dataset and demonstrate its performance by comparing it with state-of-the-art models, including attention-based occupancy detection methods.
Xinyu Liang, Hao Wang
2023-08-27T14:13:29Z
http://arxiv.org/abs/2308.14114v1
Hybrid Transformer-RNN Architecture for Household Occupancy Detection Using Low-Resolution Smart Meter Data ###### Abstract Residential occupancy detection has become an enabling technology in today's urbanized world for various smart home applications, such as building automation, energy management, and improved security and comfort. Digitalization of the energy system provides smart meter data that can be used for occupancy detection in a non-intrusive manner without causing concerns regarding privacy and data security. In particular, deep learning techniques make it possible to infer occupancy from low-resolution smart meter data, such that accurate occupancy detection with privacy preservation can be achieved. Our work is thus motivated to develop a privacy-aware and effective model for residential occupancy detection in contemporary living environments. Our model aims to leverage the advantages of both recurrent neural networks (RNNs), which are adept at capturing local temporal dependencies, and transformers, which are effective at handling global temporal dependencies. Our designed hybrid transformer-RNN model detects residential occupancy using hourly smart meter data, achieving an accuracy of nearly 92% across households with diverse profiles. We validate the effectiveness of our method using a publicly accessible dataset and demonstrate its performance by comparing it with state-of-the-art models, including attention-based occupancy detection methods. Occupancy detection, smart meter data, deep learning, transformer, recurrent neural network (RNN) ## I Introduction The significance of residential occupancy detection has increased substantially, primarily driven by global urbanization and concurrent population growth in recent years [1]. Accurately determining patterns of occupancy is of utmost importance, as it can improve residents' self-awareness of occupancy patterns and enable various business opportunities for utility companies and building managers, including energy saving, thermal comfort control, and route optimization for work activities and deliveries [2, 3]. Consequently, accurate occupancy detection yields a range of benefits, including economic advantages, positive environmental impacts, and enhanced security and comfort for residents. A large body of research has been conducted on residential occupancy detection. Many existing studies adopted one common approach involving the installation of supplementary cameras or sensors, such as thermal imaging cameras or motion sensors [4, 5, 6, 7]. Though these methods can achieve high accuracy, they may cause significant concerns or pose new challenges. For example, installing sensors is intrusive, requires regular maintenance, and can be costly. Furthermore, constant monitoring can invade occupants' privacy, raising ethical concerns. Lastly, the integration of multiple sensors increases system complexity, which in turn requires sophisticated algorithms and software. Thus, these methods may face scalability issues and raise concerns regarding privacy risks. To address the aforementioned issues, researchers have been studying non-intrusive approaches. Compared to camera or motion-sensing-based methods, smart meters, widely installed as part of the utility infrastructure, can provide an alternative and cost-effective approach to occupancy detection.
The energy consumption data monitored by smart meters can be easily integrated with energy management or home automation systems, providing an inherent advantage over camera or motion-sensing methods, as there is no need for extra installation and maintenance. Recent studies using high-resolution smart meter data [8, 9, 10, 3] have achieved accuracy levels comparable to other methods using sensors. However, privacy concerns still arise because the detailed energy consumption information reveals occupants' habits. Additionally, frequent sampling also causes scalability issues in data transmission, storage, and processing. Thus, it is crucial to avoid high-resolution data and instead use low-resolution data, which preserves privacy. Occupancy detection from low-resolution smart meter data has been proposed to further resolve these privacy and scalability limitations. However, the reduced information content in low-resolution smart meter data makes it challenging for occupancy detection to achieve accuracy equivalent to methods using high-resolution data. Deep learning techniques provide promising solutions to improve the detection accuracy using low-resolution data. For example, Hisashi et al. [11] proposed a deep learning-based method to estimate residential occupancy status. Their proposed method included manual feature extraction to derive statistical features from time-series sequences and subsequently processed the extracted data through a bi-directional long short-term memory (Bi-LSTM) network, a commonly used recurrent neural network (RNN), with an attention mechanism. Moreover, their method trained separate models for individual households, leading to generalization problems. According to [12, 13], households with diverse socioeconomic characteristics exhibit different energy consumption profiles. An effective occupancy detection method should overcome generalization limitations and be applicable to a broad group of households with diverse socioeconomic backgrounds and lifestyles. To overcome the limitations of existing methods, we are motivated to design a new deep-learning-based approach for residential occupancy detection using low-resolution smart meter data while achieving high accuracy. Specifically, we employ a hybrid transformer-Bi-LSTM architecture that enables processing of raw smart meter data without the need for manual feature extraction. In addition, our model is designed to be applicable to various households rather than being trained for each individual household separately. The goal of our work is to achieve state-of-the-art performance for occupancy detection using low-resolution smart meter data. To evaluate the effectiveness of our model, we conduct experiments on the most comprehensive publicly accessible dataset [14]. The results show that our model improves occupancy detection performance across households compared to baseline methods. The contributions of our work are summarized as follows. * Our work presents a novel model by combining RNNs and transformers to effectively model temporal dependencies in low-resolution smart meter data. By leveraging RNNs' capability in sequential processing of short- to medium-term dependencies and transformers' self-attention for long-range dependencies, we enhance the performance and accuracy of occupancy detection. * Our work explores various transformer-RNN hybrid models by thoroughly examining the fusion of these architectures in different arrangements.
Through our investigation, we find an optimal combination: the concatenation of Bi-LSTM and transformers. Our design leverages the temporal modeling of Bi-LSTM and transformers' self-attention mechanism, shedding light on the effective construction of such hybrid models for similar tasks. * We compare our model to different models in residential occupancy detection using a comprehensive benchmarking framework, including various performance metrics and cross-validation, based on a real-world household dataset. Our findings demonstrate that the fusion of transformers and Bi-LSTM models through a concatenation operation consistently outperforms other baseline models in terms of a comprehensive set of performance metrics. The remainder of this paper is organized as follows. Section II presents the problem formulation and the hybrid transformer-RNN model for occupancy detection. Section III introduces the benchmark models and evaluation metrics. Section IV discusses numerical results, and Section V concludes this paper. ## II Methodology We present a novel model aiming to enhance the performance of occupancy detection by leveraging a hybrid transformer-Bi-LSTM architecture on low-resolution smart meter data. As depicted in Figure 1, the smart meter data is directly fed into both the transformer-based and Bi-LSTM-based feature extractors. (Fig. 1: Hybrid Transformer-RNN Model for occupancy detection.) Then the extracted features from these two components are concatenated to form a comprehensive feature set. This amalgamated feature set is fed into the classification layer, which discerns the presence or absence of the occupants at each time step, e.g., each hour. In the following, we will present the problem formulation and our model in detail. ### _Problem Formulation_ Given a dataset denoted by \(\mathcal{D}=\{(\mathbf{X}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) containing \(N\) samples, each sample consists of a time series of smart meter readings denoted as \(\mathbf{X}_{i}\in\mathbb{R}^{T\times F}\) with a length of \(T\) and \(F\) features. An occupancy status label is introduced as \(\mathbf{y}_{i}\in\{0,1\}^{T}\), where \(1\) indicates the presence of occupants and \(0\) indicates the absence. Each individual time-series sequence \(\mathbf{X}_{i}\) can be decomposed into \((\mathbf{x}_{i,1},\mathbf{x}_{i,2},\ldots,\mathbf{x}_{i,T})\), where \(\mathbf{x}_{i,t}\in\mathbb{R}^{F}\) serves as the \(F\)-dimensional feature vector at a given time step \(t\) for the \(i\)-th sample. A similar decomposition is applicable to the occupancy status labels, manifesting as \(\mathbf{y}_{i}=(y_{i,1},y_{i,2},\ldots,y_{i,T})\), where \(y_{i,t}\) signifies the occupancy status at time step \(t\) for the \(i\)-th sample. The time-series sequences \(\mathbf{X}_{i}\) contain valuable information regarding household energy consumption patterns and occupancy-driven behaviors. This information can be harnessed to estimate occupancy status. The aim of our work is to design a deep learning model \(f(\mathbf{X};\boldsymbol{\theta})\), parameterized by \(\boldsymbol{\theta}\), capable of predicting the occupancy status \(\hat{\mathbf{y}}\) from the input smart meter data \(\mathbf{X}\), i.e., \(\hat{\mathbf{y}}=f(\mathbf{X};\boldsymbol{\theta})\), such that \(\hat{\mathbf{y}}\) accurately predicts \(\mathbf{y}\).
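A minimal sketch of this formulation in PyTorch tensor shapes; the sizes below are illustrative assumptions, not the paper's:

```python
# Illustrative shapes for the problem formulation above.
import torch

N, T, F = 32, 24, 3                        # samples, time steps, features
X = torch.randn(N, T, F)                   # stacked X_i in R^{T x F}
y = torch.randint(0, 2, (N, T)).float()    # y_i in {0, 1}^T

# Any model f(X; theta) for this task maps (N, T, F) to per-time-step
# occupancy probabilities of shape (N, T); see the sketches below.
```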
The objective of the occupancy prediction is to minimize the loss function \(L\), which is the binary cross entropy between the predicted occupancy status \(\hat{\mathbf{y}}\) and the actual occupancy status \(\mathbf{y}\): \[L(\mathcal{D},\boldsymbol{\theta})=\frac{1}{N}\sum_{i=1}^{N}\ell(f(\mathbf{X}_{i};\boldsymbol{\theta}),\mathbf{y}_{i}), \tag{1}\] in which \(\ell\) denotes the binary cross-entropy loss for time-series classification. By optimizing the model parameters \(\boldsymbol{\theta}\), the deep learning model is primed to accurately classify the occupancy status at each time step. ### _Feature Extraction Utilizing Bi-LSTM_ Due to the inherently temporal nature of smart meter data, RNNs, such as Long Short-Term Memory (LSTM) networks [15], are a suitable architecture for feature extraction. Equipped with memory cells and gating mechanisms, LSTM networks are capable of effectively learning and retaining information over lengthy sequences. This results in an enhanced ability to extract long and intricate temporal feature dependencies, playing a crucial role in detecting occupancy accurately. In particular, the Bi-LSTM extends the capabilities of LSTM by processing the input sequence in both forward and backward directions. This allows the model to capture information from both past and future electricity consumption, providing a more comprehensive understanding of the context. A Bi-LSTM consists of two LSTM layers: a forward layer and a backward layer. For any given smart meter data \(\mathbf{X}\equiv(\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{T})\), the forward LSTM computes \(\overrightarrow{\mathbf{H}}_{rnn}=(\overrightarrow{\mathbf{h}}_{1},\,\overrightarrow{\mathbf{h}}_{2},...,\overrightarrow{\mathbf{h}}_{T})\), in which \(\overrightarrow{\mathbf{h}}_{t}\) is the hidden state at time step \(t\). Similarly, the backward LSTM computes the hidden states \(\overleftarrow{\mathbf{H}}_{rnn}=(\overleftarrow{\mathbf{h}}_{1},\,\overleftarrow{\mathbf{h}}_{2},...,\overleftarrow{\mathbf{h}}_{T})\), in which \(\overleftarrow{\mathbf{h}}_{t}\) is the corresponding backward hidden state. The final extracted temporal features at each time step are then obtained by concatenating the forward and backward hidden states as \(\mathbf{H}_{rnn}=\text{Concat}(\overrightarrow{\mathbf{H}}_{rnn};\overleftarrow{\mathbf{H}}_{rnn})\).
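A minimal PyTorch sketch of the Bi-LSTM feature extractor described above; the layer sizes are illustrative assumptions, not the paper's configuration:

```python
# Bi-LSTM feature extractor: forward and backward hidden states are
# concatenated at every time step, yielding H_rnn of width 2 * hidden.
import torch
from torch import nn

class BiLSTMExtractor(nn.Module):
    def __init__(self, num_features: int, hidden_size: int = 64):
        super().__init__()
        # bidirectional=True runs a forward and a backward LSTM and
        # concatenates their per-step hidden states automatically.
        self.lstm = nn.LSTM(num_features, hidden_size,
                            batch_first=True, bidirectional=True)

    def forward(self, x):            # x: (N, T, F)
        h_rnn, _ = self.lstm(x)      # h_rnn: (N, T, 2 * hidden_size)
        return h_rnn                 # Concat(H_fwd; H_bwd) per time step

h = BiLSTMExtractor(num_features=3)(torch.randn(8, 24, 3))
print(h.shape)  # torch.Size([8, 24, 128])
```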
### _Exploiting the Transformer Encoder for Feature Extraction_ While the Bi-LSTM can significantly contribute to feature extraction, relying solely on it may not fully encapsulate the complex dynamics of smart meter data. Specifically, one inherent constraint of LSTMs is their sequential processing nature, which assumes a certain chronological order in data. While this characteristic is advantageous in capturing local temporal dependencies, it may overlook the non-sequential patterns and long-range dependencies that could exist within household electricity consumption behavior. Hence, to improve the understanding of these dynamics, we incorporate the transformer [16] into our feature extraction process. Unlike LSTMs, transformers do not necessitate a sequential processing approach but attend to different parts of the sequence regardless of their position. This is made possible through the employment of self-attention mechanisms, making transformers highly effective across diverse domains. For the purpose of feature extraction from smart meter data for occupancy detection, we utilize solely the encoder part of the transformer architecture. Given the input time series of smart meter data \(\mathbf{X}\), we apply positional encodings to generate a modified data matrix \(\mathbf{X}^{\prime}=\mathbf{X}+\text{PE}\). The positional encodings (PE) are presented as \[\text{PE}(p,2m) =\sin\left(\frac{p}{10000^{2m/F}}\right), \tag{2}\] \[\text{PE}(p,2m+1) =\cos\left(\frac{p}{10000^{2m/F}}\right), \tag{3}\] where \(p\) is the position of the time step, and \(m\) is the index of the smart meter features (divided by 2 because the encoding function alternates between sine and cosine). This operation infuses the model with an intrinsic awareness of the temporal structure in the smart meter data, overcoming the limitation that the self-attention mechanism itself has no awareness of the electricity usage order. Following the application of positional encodings, the model proceeds with the multi-head self-attention mechanism. This component enables the model to focus on different parts of the input sequence in every attention head, thereby capturing a richer set of dependencies. With a total of \(U\) heads, for each attention head, \(\mathbf{X}^{\prime}\) is linearly transformed to generate corresponding queries (\(\mathbf{Q}\)), keys (\(\mathbf{K}\)), and values (\(\mathbf{V}\)). This is captured in the following equations for the \(u\)-th head: \[\mathbf{Q}_{u}=\mathbf{X}^{\prime}\mathbf{W}_{u}^{Q},\quad\mathbf{K}_{u}=\mathbf{X}^{\prime}\mathbf{W}_{u}^{K},\quad\mathbf{V}_{u}=\mathbf{X}^{\prime}\mathbf{W}_{u}^{V}, \tag{4}\] where \(\mathbf{W}_{u}^{Q}\), \(\mathbf{W}_{u}^{K}\), and \(\mathbf{W}_{u}^{V}\) are learned weight matrices. After obtaining the projected query, key, and value of dimension \(d_{k}\) for each head, we compute the output of the head using scaled dot-product attention as \[\text{head}_{u}=\text{Att}(\mathbf{Q}_{u},\mathbf{K}_{u},\mathbf{V}_{u})=\text{softmax}\left(\frac{\mathbf{Q}_{u}\mathbf{K}_{u}^{\top}}{\sqrt{d_{k}}}\right)\mathbf{V}_{u}. \tag{5}\] The outputs are then concatenated and linearly transformed using the parameter matrix \(\mathbf{W}^{O}\) to yield the final output of multi-head self-attention, shown as \[\text{MultiHead}(\mathbf{X}^{\prime})=\text{Concat}(\text{head}_{1},...,\text{head}_{U})\mathbf{W}^{O}. \tag{6}\] After the multi-head self-attention operation, a residual connection is implemented. This technique is prevalent in deep learning to help mitigate the problem of vanishing gradients during training. The output of the multi-head self-attention is added directly to the encoded input. Following the residual connection, layer normalization [17] is performed to stabilize the learning dynamics and expedite the training process, yielding \(\mathbf{X}^{\prime\prime}\) as \[\mathbf{X}^{\prime\prime}=\text{LayerNorm}(\mathbf{X}^{\prime}+\text{MultiHead}(\mathbf{X}^{\prime})). \tag{7}\] The output \(\mathbf{X}^{\prime\prime}\) then undergoes an additional transformation by a position-wise feed-forward network (FFN), enabling the model to further capture and learn from the relationships among the input data, transforming the abstract representations from the multi-head self-attention mechanism into higher-level features that can more effectively inform the occupancy status predictions. The FFN operates as follows: \[\text{FFN}(\mathbf{X}^{\prime\prime})=\text{ReLU}(\mathbf{X}^{\prime\prime}\mathbf{W}_{1}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2}, \tag{8}\] where \(\mathbf{W}_{1}\), \(\mathbf{b}_{1}\), \(\mathbf{W}_{2}\), and \(\mathbf{b}_{2}\) are learnable parameters of the feed-forward network, and ReLU (Rectified Linear Unit) is applied as a non-linear activation function. Following the feed-forward network, a second residual connection and layer normalization step is implemented to produce the final feature representation \(\mathbf{H}_{trans}\) of the transformer network, presented as \[\mathbf{H}_{trans}=\text{LayerNorm}(\mathbf{X}^{\prime\prime}+\text{FFN}(\mathbf{X}^{\prime\prime})). \tag{9}\]
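A hedged PyTorch sketch of this extractor: the sinusoidal positional encodings of Eqs. (2)-(3) followed by a standard encoder layer, which bundles the attention, residual, normalization, and FFN steps of Eqs. (4)-(9). Layer sizes and head count are illustrative assumptions:

```python
# Sinusoidal positional encoding plus one transformer encoder layer.
import math
import torch
from torch import nn

def positional_encoding(T: int, F: int) -> torch.Tensor:
    pe = torch.zeros(T, F)
    pos = torch.arange(T, dtype=torch.float).unsqueeze(1)       # p
    div = torch.exp(torch.arange(0, F, 2).float()
                    * (-math.log(10000.0) / F))                 # 10000^{-2m/F}
    pe[:, 0::2] = torch.sin(pos * div)                          # Eq. (2)
    pe[:, 1::2] = torch.cos(pos * div[: pe[:, 1::2].shape[1]])  # Eq. (3)
    return pe                                                   # (T, F)

class TransformerExtractor(nn.Module):
    def __init__(self, d_model: int, nhead: int = 1):
        super().__init__()
        # One encoder layer = multi-head self-attention + FFN, each
        # followed by a residual connection and layer normalization.
        self.enc = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                              dim_feedforward=64,
                                              batch_first=True)

    def forward(self, x):                                   # x: (N, T, F)
        x = x + positional_encoding(x.shape[1], x.shape[2]) # X' = X + PE
        return self.enc(x)                                  # H_trans

h = TransformerExtractor(d_model=3)(torch.randn(8, 24, 3))
print(h.shape)  # torch.Size([8, 24, 3])
```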
### _Occupancy Detection via Concatenated Feature Representation_ The strengths and limitations of Bi-LSTM and transformers show a noteworthy reciprocal relationship when handling smart meter data. The Bi-LSTM, with its strong ability to capture local temporal dependencies and its inherent temporal bias and local sensitivity, compensates well for the limitations of transformers. Conversely, transformers, equipped with their unique non-sequential architecture, can capture long-term dependencies. Therefore, a hybrid architecture that combines Bi-LSTM and transformers can harness the strengths of both while simultaneously mitigating their weaknesses, leading to a more robust and effective model. In pursuit of this goal, we generate a consolidated hidden representation \(\mathbf{H}=\text{Concat}(\mathbf{H}_{rnn},\mathbf{H}_{trans})\), which concatenates the feature representations obtained from the Bi-LSTM-based and transformer-based components, respectively. This hidden representation is then passed through a classification layer to generate estimations of occupancy status for each time step. The classification layer is a dense layer with a sigmoid activation function, which maps each time step of the sequence to a probability between 0 and 1. This probability indicates the model's confidence that the household is occupied at that particular time step. This process is represented as \[\hat{\mathbf{y}}=\sigma(\mathbf{H}\mathbf{W}_{c}+\mathbf{b}_{c}), \tag{10}\] where \(\mathbf{W}_{c}\) and \(\mathbf{b}_{c}\) are the weight matrix and bias vector of the classification layer, respectively, and \(\sigma\) denotes the sigmoid activation function. As our objective is to minimize the binary cross-entropy loss between the predicted and actual occupancy status, the loss is computed as \[\ell(\hat{\mathbf{y}},\mathbf{y})=-\frac{1}{T}\sum_{t=1}^{T}\left[y_{t}\log(\hat{y}_{t})+(1-y_{t})\log(1-\hat{y}_{t})\right]. \tag{11}\]
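An end-to-end sketch tying the pieces together, reusing the two extractor sketches above: the concatenation \(\mathbf{H}=\text{Concat}(\mathbf{H}_{rnn},\mathbf{H}_{trans})\), the sigmoid classifier of Eq. (10), and the BCE loss of Eq. (11). Sizes remain illustrative assumptions:

```python
# Hybrid Bi-LSTM + transformer model with a per-time-step classifier.
import torch
from torch import nn

class HybridOccupancyModel(nn.Module):
    def __init__(self, num_features: int, hidden_size: int = 64):
        super().__init__()
        self.rnn = BiLSTMExtractor(num_features, hidden_size)
        self.trans = TransformerExtractor(d_model=num_features)
        # classification head over the concatenated features
        self.head = nn.Linear(2 * hidden_size + num_features, 1)

    def forward(self, x):                                 # x: (N, T, F)
        h = torch.cat([self.rnn(x), self.trans(x)], dim=-1)
        return torch.sigmoid(self.head(h)).squeeze(-1)    # (N, T) in (0,1)

model = HybridOccupancyModel(num_features=3)
X = torch.randn(8, 24, 3)
y = torch.randint(0, 2, (8, 24)).float()
loss = nn.functional.binary_cross_entropy(model(X), y)    # Eq. (11)
loss.backward()
```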
## III Benchmarks and Performance Evaluation This section introduces the benchmark models we choose for comparisons. We also provide details of the metrics we will use to measure the performance of our model and the cross-validation techniques. ### _Benchmarks_ In this paper, we evaluate different variants of transformer-RNN hybrid models, as well as deep learning models with attention mechanisms from previous studies on occupancy detection. We aim to provide a robust and comprehensive evaluation of the state of the art in occupancy prediction methods, while also exploring the potential advantages of novel combinations of transformers and Bi-LSTM networks. Below, we provide a brief description of each of the examined models. * **Bi-LSTM + Transformers:** This model first processes the input data with a Bi-LSTM and then feeds the output into a transformer. This sequential combination allows the transformer to build upon the temporal dependencies recognized by the Bi-LSTM. * **Transformers + Bi-LSTM:** This model reverses the order of the previous combination. The input data is first processed by a transformer, and the output is then fed into a Bi-LSTM. This configuration enables the Bi-LSTM to refine the global patterns identified by the transformer. * **Bi-LSTM + Attention:** Proposed in a previous study [11], this model combines a Bi-LSTM with an attention mechanism, processing the input data with the Bi-LSTM and using the attention mechanism to weigh the importance of different time steps in the output. We evaluate this model using both original and feature-extracted data, with the latter reflecting techniques from the same previous study where manual feature extraction is performed to potentially enhance model performance. ### _Evaluation Metrics_ To evaluate the performance of our occupancy detection models, we utilize a diverse set of metrics, each providing a distinct perspective on the functionality of the models. The following offers a detailed description of these metrics. * **Accuracy**: The ratio of correct to total occupancy predictions. Note that imbalanced occupancy rates may distort this measure, so we also examine the other metrics below. * **Precision**: The fraction of true positive instances (correctly predicted presence) out of all instances predicted as presence. * **Recall**: The ratio of true positive instances to all truly occupied instances. * **F1-Score**: The harmonic mean of precision and recall, providing a balanced performance measure. * **ROC AUC**: It represents the trade-off between recall and the false positive rate. Scores range from 0.5 (random guessing) to 1 (perfect model). Complementing the metrics discussed above, we employ 10-fold cross-validation to enhance the robustness of our occupancy detection model. This technique partitions the original occupancy data into ten equally sized subsets. One of these subsets serves as the validation data for model testing, while the remaining subsets form the training data. This procedure is repeated ten times, each time using a different subset as the validation data. The outcomes of these iterations are then combined to yield a unified estimate of model performance. This method counters the risk of overfitting and improves the estimation of our model's performance on unseen data. ## IV Numerical Results And Analysis ### _Data Description_ The Electricity Consumption & Occupancy (ECO) dataset [14] is, to the best of our knowledge, the largest publicly available dataset for non-intrusive load monitoring and occupancy detection studies. This comprehensive dataset, gathered over eight months from six Swiss households, provides aggregate and appliance-specific power consumption data at a 1 Hz frequency. Each data point includes current, voltage, and phase shift information for each of the household's three electrical phases. Additionally, occupancy information is recorded through manual labeling via tablets or passive infrared sensors. ### _Data Preprocessing_ As our study focuses solely on smart meter data, we utilize the aggregated information on current, voltage, and phase shifts and match it with occupancy information based on date and household. Since our study aims to detect occupancy using low-resolution smart meter data, we resample both the smart meter and occupancy data to a lower resolution to obtain a dataset with one-hour intervals. For the smart meter data, we calculate the average of the 3600 measurements within each hour. For the occupancy data, we assess the 3600 occupancy statuses within each hour and select the most frequently occurring status, as sketched below.
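A minimal pandas sketch of this hourly down-sampling; the column names and the synthetic 1 Hz data are illustrative assumptions:

```python
# Hourly resampling: mean of 1 Hz meter readings, majority vote for
# the per-hour occupancy status.
import numpy as np
import pandas as pd

idx = pd.date_range("2012-06-01", periods=2 * 3600, freq="s")  # 2 h @ 1 Hz
df = pd.DataFrame({
    "power": np.random.rand(len(idx)),
    "occupied": np.random.randint(0, 2, len(idx)),
}, index=idx)

hourly_power = df["power"].resample("1h").mean()          # average per hour
hourly_occ = df["occupied"].resample("1h") \
    .apply(lambda s: s.mode().iloc[0])                    # most frequent status
print(hourly_power.shape, hourly_occ.shape)               # (2,) (2,)
```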
In the end, we compiled a dataset spanning 449 days from five distinct households, with detailed information shown in Table I. ### _Results Analysis_ This section presents the model evaluation results shown in Table II and Figure 2. In the following, we offer a comparative analysis of different occupancy detection models. #### IV-C1 Our Bi-LSTM and Transformers Concatenation Model Our model achieves an accuracy of approximately 92% in residential occupancy detection, surpassing all benchmark models. The Bi-LSTM and transformers concatenation model has shown remarkable performance in occupancy detection utilizing original smart meter data (without manual feature extraction), demonstrated by the numerical results shown in Table II and the corresponding box plot for 10-fold cross-validation shown in Figure 2. Our model achieves the highest accuracy of 0.9166, indicating an exceptional proficiency in accurately detecting occupancy status from low-resolution smart meter data. Furthermore, the model achieves the highest F1 score and ROC AUC score, showcasing its ability to strike a balance between precision and recall, effectively managing potential class imbalances in the data, and demonstrating robust discriminative power between different occupancy states. The numerical results not only establish the model's robustness and reliability but are also supported by the box plot distribution depicted in Figure 2, which shows consistent performance across all evaluation metrics. This reinforces the notion that the hybrid model effectively leverages the strengths of both the Bi-LSTM and transformers. #### IV-C2 Effectiveness of Transformer-RNN Hybrid Models and The Impact of Different Integration Approaches Among all the transformer-RNN hybrid models, besides the Bi-LSTM and transformers concatenation version, the Bi-LSTM + transformers and transformers + Bi-LSTM versions have also exhibited significant performance enhancements over previous Bi-LSTM + attention models for occupancy detection using original smart meter data. The method of integrating transformers and Bi-LSTM models seems to play a vital role in the effectiveness of these hybrid models, as demonstrated by our numerical results in Table II and the corresponding box plots in Figure 2. (Fig. 2: Box plot of the 10-fold cross-validation metric comparison across different methods.) The Bi-LSTM + transformers model, which initially processes data via the Bi-LSTM before forwarding it to the transformer, and the transformers + Bi-LSTM model, which reverses this order, both achieve better performance compared to Bi-LSTM + attention models in almost all metrics. Notwithstanding these commendable results, it is the Bi-LSTM and transformers concatenation model that outperforms all others, showing that concatenation is the most effective way of combining the transformers' long-range dependency capturing and the Bi-LSTM's local temporal dependency modeling strengths. While transformer-RNN hybrids certainly present promising improvements over earlier Bi-LSTM + attention models, the mode of integration is crucial to optimize their performance. #### IV-C3 Subtle Impact and Inferiority of Manual Feature Extraction When evaluating the Bi-LSTM + attention model on both original smart meter data and manually feature-extracted data, a nuanced impact of manual feature extraction on effectiveness emerges in the problem of residential occupancy detection.
While the model trained on original data exhibits marginally higher accuracy, precision, and ROC AUC, the recall and F1 score for the model trained on feature-extracted data are slightly higher. These metrics, along with the comparable distributions displayed by the box plots of the 10-fold cross-validation results, suggest near-parity in performance between the two models using original and feature-extracted data. This subtle variation indicates that manual feature extraction may not yield significant improvements when the data originates from diverse households. Consequently, relying on the innate feature extraction capabilities of neural networks could be an effective and more streamlined approach for occupancy detection using smart meter data. ## V Conclusion This paper presented a compelling exploration of hybrid models combining Bi-LSTM and transformer architectures for the task of residential occupancy detection using low-resolution smart meter data. By effectively addressing the complexities associated with traditional sensor-based methodologies and mitigating privacy concerns, this deep-learning-based approach underscores its suitability for large-scale deployments. By leveraging the strengths of the Bi-LSTM and the transformer in sequential and non-sequential processing and in handling temporal dependencies both locally and over long ranges, our model achieves superior performance, evidenced by a range of evaluation metrics, including accuracy, precision, recall, F1, and ROC AUC. While these findings represent significant progress, they also shed light on potential future research directions, notably in the realm of unsupervised or semi-supervised occupancy detection for better use of unlabeled smart meter data.
2305.14856
Optimization-Based Improvement of Face Image Quality Assessment Techniques
Contemporary face recognition (FR) models achieve near-ideal recognition performance in constrained settings, yet do not fully translate the performance to unconstrained (real-world) scenarios. To help improve the performance and stability of FR systems in such unconstrained settings, face image quality assessment (FIQA) techniques try to infer sample-quality information from the input face images that can aid with the recognition process. While existing FIQA techniques are able to efficiently capture the differences between high and low quality images, they typically cannot fully distinguish between images of similar quality, leading to lower performance in many scenarios. To address this issue, we present in this paper a supervised quality-label optimization approach, aimed at improving the performance of existing FIQA techniques. The developed optimization procedure infuses additional information (computed with a selected FR model) into the initial quality scores generated with a given FIQA technique to produce better estimates of the "actual" image quality. We evaluate the proposed approach in comprehensive experiments with six state-of-the-art FIQA approaches (CR-FIQA, FaceQAN, SER-FIQ, PCNet, MagFace, SDD-FIQA) on five commonly used benchmarks (LFW, CFP-FP, CPLFW, CALFW, XQLFW) using three targeted FR models (ArcFace, ElasticFace, CurricularFace) with highly encouraging results.
Žiga Babnik, Naser Damer, Vitomir Štruc
2023-05-24T08:06:12Z
http://arxiv.org/abs/2305.14856v1
# Optimization-Based Improvement of Face Image Quality Assessment Techniques ###### Abstract Contemporary face recognition (FR) models achieve near-ideal recognition performance in constrained settings, yet do not fully translate the performance to unconstrained (real-world) scenarios. To help improve the performance and stability of FR systems in such unconstrained settings, face image quality assessment (FIQA) techniques try to infer sample-quality information from the input face images that can aid with the recognition process. While existing FIQA techniques are able to efficiently capture the differences between high- and low-quality images, they typically cannot fully distinguish between images of similar quality, leading to lower performance in many scenarios. To address this issue, we present in this paper a supervised quality-label optimization approach, aimed at improving the performance of existing FIQA techniques. The developed optimization procedure infuses additional information (computed with a selected FR model) into the initial quality scores generated with a given FIQA technique to produce better estimates of the "actual" image quality. We evaluate the proposed approach in comprehensive experiments with six state-of-the-art FIQA approaches (CR-FIQA, FaceQAN, SER-FIQ, PCNet, MagFace, SDD-FIQA) on five commonly used benchmarks (LFW, CFP-FP, CPLFW, CALFW, XQLFW) using three targeted FR models (ArcFace, ElasticFace, CurricularFace) with highly encouraging results. Biometrics, Face recognition, Face image quality assessment, Optimization, Transfer learning + Footnote †: Supported by ARRS: P2-0250(B), J2-2501(A), Junior Researcher grants. ## I Introduction Modern face recognition (FR) systems achieve excellent results even with large-scale recognition problems, as long as the appearance variability of the facial images is reasonably constrained. However, the performance in constrained scenarios does not always translate to real-world scenarios, where out-of-distribution data, often of poor quality, still presents a challenge for the majority of existing FR models [1, 2]. Face image quality assessment (FIQA) techniques aim to assist FR models in such challenging scenarios by providing additional information on the quality of facial images. This quality information can then be used to either reject low-quality samples that typically lead to false match errors or design robust quality-aware face recognition techniques. Thus, different from general-purpose image quality assessment (IQA) methods [3, 4, 5] that commonly measure the perceived visual quality of images by examining explicit image characteristics, such as sharpness, lighting conditions and resolution, FIQA techniques typically try to capture the utility (or fitness) of the given face image for the recognition task [6]. In other words, they measure the usefulness of the sample for face recognition. Several groups of FIQA techniques that differ slightly in their approach have been proposed so far in the literature [7]. The majority of recent techniques learn quality-estimation networks using (reference) quality information inferred from a large database of face images [8, 9, 10, 11]. Another notable group of FIQA techniques estimates quality based only on the information present in the input image and the characteristics of the targeted FR system [12, 13].
More recently, approaches have also appeared that incorporate quality estimation directly into the FR process [14, 15], paving the way towards quality-aware face recognition. While most of the existing FIQA techniques perform well enough to distinguish between high-quality and low-quality facial images, correctly ranking face images of similar quality remains an open problem. The correct (optimal) ordering does not depend solely on the input face images, but also on the targeted FR model. Each model may, in a sense, _perceive_ the quality of individual samples differently due to different model-specific biases introduced by the learning process and the data used for training [16, 17]. This observation also suggests that FIQA techniques that are not FR-model specific cannot determine the correct order for all possible FR models. For this reason, we propose in this paper a novel optimization approach that attempts to improve the predictive power of any given FIQA approach by incorporating quality information obtained by a particular FR model into the quality scores generated by the selected FIQA approach. Thus, the main contributions of this paper are: * A novel optimization approach that incorporates model-specific quality information into the quality scores produced by existing FIQA techniques, with the goal of improving FIQA performance. * An in-depth evaluation of the proposed optimization approach over six FIQA techniques, five datasets, three recognition models, and two settings, demonstrating significant performance gains in most situations. ## II Related Work In this section, we briefly review previous FIQA research that can be broadly categorized into three groups: \((i)\) analytical, \((ii)\) regression and \((iii)\) model-based techniques. More in-depth information on face quality assessment can be found in the comprehensive survey paper by Schlett _et al._[7]. **Analytical FIQA** techniques are mostly unsupervised and rely solely on the information that can be extracted directly from the given input sample. Techniques from this group typically focus on the visual quality of the facial images and, as a result, often exhibit limited performance. The method proposed by Gao _et al._[18], for example, attempts to extract quality information based on facial symmetry estimation only. Zhang _et al._[19] try to quantify quality based on image illumination information, while Lijun _et al._[20] combine multiple cues, such as occlusions, blur, and pose, for the quality-estimation task. Different from these methods, two analytical FIQA techniques have been proposed recently that, in addition to the characteristics of the input image, also consider the targeted FR system during the quality estimation task. The first, SER-FIQ by Terhorst _et al._[12], uses the properties of dropout layers to quantify quality, while FaceQAN, by Babnik _et al._[13], exploits adversarial examples for quality assessment. Both methods were shown to yield state-of-the-art performance for various FR models and different benchmarks. **Regression-based FIQA** techniques are the most numerous and usually learn a quality estimation (regression) model to predict quality scores based on some pseudo (ground-truth) quality labels. FaceQNet [8], for example, trains a ResNet50 model using labels obtained by embedding comparisons with the highest-quality image of each subject. Here, the highest-quality images are determined using an external quality compliance tool.
A similar approach, called PCNet [11], trains a quality-regression network on mated-image pairs, with the goal of predicting the similarity of the image pair. LightQNet [10] builds on the ideas introduced with PCNet, but additionally relies on a so-called Identification Quality (IQ) loss, while SDD-FIQA [9] considers both mated and non-mated similarity scores between a large number of samples to determine the final reference quality for the regression task. **Model-based FIQA** techniques are less common and usually try to combine face recognition and quality assessment in a single quality-aware face recognition task. The main goal of these techniques is to simultaneously produce, for a given sample, its embedding and an estimate of the sample's quality. For example, the approach presented by Shi and Jain [14] estimates a mean and variance vector for each given input sample, where the mean vector represents the embedding, while the variance provides the corresponding uncertainty and can be interpreted as a sample quality estimate. MagFace [15], a similar approach by Meng _et al._, uses a modified version of the commonly used ArcFace loss, called the MagFace loss, which is able to generate quality-aware embeddings by incorporating quality information into the magnitude of the embedding itself. The method we propose cannot be clearly assigned to one of the above groups, because it relies on an already existing FIQA approach (from any of the three groups) to generate reference quality scores. In a sense, it distills FIQA knowledge from any existing technique. However, if treated as a black box, the proposed FIQA approach can be thought of as a regression-based technique, as it trains a regression model using quality labels extracted from a large database. ## III Methodology State-of-the-art FIQA techniques are able to efficiently discriminate between images of distinctly different qualities, yet may not be able to properly distinguish between images of similar quality. Exacerbating this problem, the relative ordering of images of similar quality may additionally depend on the targeted FR model, which not all FIQA techniques take into account. Because face quality assessment aims to quantify the utility of face images for a given FR model, the slight variations in the biases present in modern FR systems may result in different (optimal) quality scores for different FR models. For this reason, we propose in this paper an approach that aims to incorporate FR model-specific quality information into (some initial) quality scores, with the goal of improving the fine-grained performance of existing FIQA techniques. The overall pipeline of the proposed approach, shown in Fig. 1, consists of two main steps: \((i)\) _label optimization_ and \((ii)\) _transfer learning_. The label-optimization step aims to incorporate additional quality-related information into the baseline quality labels, precomputed using a selected (existing) FIQA approach. The optimized quality labels are then used in a transfer-learning scheme that uses a pre-trained FR model extended with a quality-regression head. ### _Method Overview_ Let \(Q\) and \(M\) denote a given FIQA method and a pre-trained FR model that produce quality scores \(q_{I}=Q(I)\) and embeddings \(e_{I}=M(I)\), respectively, for an arbitrary input face image \(I\), and let \(\{I_{i}\}_{i=1}^{N}\) denote a large facial image database consisting of \(N\) distinct images.
The goal of our approach is to train a regression-based quality-estimation model \(Q^{*}=H(M(I))\) that outperforms the initial FIQA method \(Q\), where \(H\) represents a quality-regression head. The model \(Q^{*}\) is trained on optimized quality labels \(\{q^{*}_{i}\}_{i=1}^{N}\) generated by the proposed optimization scheme \(O\). The method relies on information obtained from mated image pairs of the face database \(\{I_{i}\}_{i=1}^{N}\). Details on the procedure are given below. Fig. 1: **Overview of the proposed method, which consists of two steps: Label Optimization and Transfer Learning. The _label-optimization_ step incorporates information extracted from mated image pairs into quality scores precomputed with an existing FIQA technique. The _transfer-learning_ step is then used to train an FR model, extended with a regression head, on the optimized quality scores. The learned regressor is finally utilized for quality estimation.** ### _Initialization_ We first extract initial quality scores \(q_{i}=Q(I_{i})\) and embeddings \(e_{i}=M(I_{i})\) from all images of the given face image database \(\{I_{i}\}_{i=1}^{N}\) using the selected FIQA method \(Q\) and the chosen FR model \(M\). This initialization step is conducted once and provides the input data for the label optimization and, consequently, the transfer learning procedures. ### _Label Optimization_ Looking at past research [8, 9, 10, 11], we observe that quality information is often inferred from mated image comparisons, where the term _mated images_ refers to two unique images of the same individual. We therefore follow this insight and use such information in our optimization approach as well. By computing the similarity of mated image pairs in the embedding space of the given FR model \(M\), we are also able to include FR-specific quality estimates in the optimization. **Selecting mated image pairs.** Large-scale databases contain a significant number of images for each individual, where many of the images may be nearly identical. Selecting all possible mated pairs can therefore introduce database-specific biases into our approach and adversely affect performance. To avoid such issues, we propose a technique for sampling mated image pairs based on clustering: we use a clustering procedure to find groups of similar images and to identify the most informative (and least redundant) mated image pairs. We cluster the embedding space \(\mathcal{E}^{k}=\{e_{i}^{k}\}_{i=1}^{N_{k}}\) corresponding to images of each individual \(k=1,...,K\) present in the database using K-Means, where \(N=\sum_{k}N_{k}\). The algorithm initializes \(C\) cluster centers by randomly sampling the given data points and iteratively corrects them using nearby examples. For each image \(I_{c}^{k}\) of the \(k\)-th individual belonging to cluster \(c\in[1,C]\), we randomly select images from all other clusters \(c^{\prime}\neq c,\ c^{\prime}\in[1,C]\) to form mated pairs \((I_{c}^{k},I_{c^{\prime}}^{k})\). By repeating this process for each image of every individual, we obtain the final mated image pairs for the label-optimization procedure, \(G=\{(I_{i},I_{j})^{l}\}_{l=1}^{L}\), where \(i\neq j\) and \(L=N\cdot(C-1)\), as sketched below.
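A hedged Python sketch of this clustering-based pair sampling for a single individual \(k\); the embeddings here are synthetic stand-ins:

```python
# Clustering-based mated-pair sampling: each image is paired with one
# random image from every other cluster of the same individual.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
E_k = rng.normal(size=(200, 512))      # N_k embeddings of individual k
C = 20                                 # number of clusters, as in the paper
labels = KMeans(n_clusters=C, n_init=10, random_state=0).fit_predict(E_k)

pairs = []
for i in range(len(E_k)):              # each image I_c^k ...
    for c_other in range(C):           # ... paired once per other cluster
        if c_other == labels[i]:
            continue
        j = rng.choice(np.flatnonzero(labels == c_other))
        pairs.append((i, int(j)))
assert len(pairs) == len(E_k) * (C - 1)   # L = N * (C - 1) over all images
```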
**Optimizing prior quality scores.** We aim at optimizing the initial quality labels \(\{q_{I_{i}}\}_{i=1}^{N}\) using information provided by the average pair similarity \(sim_{I_{i}}\) of each image. In other words, if an image has a low quality score, yet its average pair similarity is high, we want to increase its quality. Conversely, if the opposite is true, we want to decrease it. The design of the optimization procedure is based on the assumption that the initial quality scores already provide a reasonable estimate of the true quality. We therefore try to retain the overall quality distribution over the face database. As a result, we simply rearrange the order of the images in the original quality score distribution generated by the selected FIQA technique \(Q\), instead of computing new optimal quality scores that could differ significantly from the initial estimates. From the list of mated image pairs \(G\), we first calculate the cosine similarity of all image embedding pairs, i.e.: \[sim_{cos}(e_{I_{i}},e_{I_{j}})=\frac{e_{I_{i}}\cdot e_{I_{j}}}{\|e_{I_{i}}\|\cdot\|e_{I_{j}}\|}, \tag{1}\] where \(e_{I_{i}}\) and \(e_{I_{j}}\) denote the embeddings of images \(I_{i}\) and \(I_{j}\). We then construct the distribution of the computed similarity scores \(\mathcal{X}_{s}\sim sim_{cos}(e_{i},e_{j})\), \(\forall(I_{i},I_{j})\in G\), by sorting all the pairs according to their calculated similarity score. From the distribution \(\mathcal{X}_{s}\) we compute for each image \(I_{i}\) its average pair index, \[\overline{id_{I_{i}}^{s}}=\frac{1}{|\mathcal{I}|}\sum_{\mathcal{I}}id^{\mathcal{X}_{s}}(I_{i},I_{j}), \tag{2}\] where \(id^{\mathcal{X}_{s}}(\cdot)\) is a function that, for a given pair \((I_{i},I_{j})\), returns the index of \(sim_{cos}(e_{i},e_{j})\) within the similarity distribution \(\mathcal{X}_{s}\), and \(\mathcal{I}\) represents the set of all image pairs \((I_{i},I_{j})\) where the quality \(q_{I_{i}}\) is lower than \(q_{I_{j}}\). The latter follows from the fact that the quality of an image pair is computed as \(q(I_{i},I_{j})=\min(q_{I_{i}},q_{I_{j}})\), i.e., it depends only on the image with the lower quality. In addition, we construct a quality score distribution \(\mathcal{X}_{q}\sim\{q_{I_{i}}\}_{i=1}^{N}\) by sorting the quality scores of all images within the given database. The average pair indices and the distribution \(\mathcal{X}_{q}\) are then used to compute the optimized quality indices \[id^{\mathcal{X}_{q}}(q_{I_{i}}^{*})=id^{\mathcal{X}_{q}}(q_{I_{i}})+\theta\cdot(\overline{id_{I_{i}}^{s}}-id^{\mathcal{X}_{q}}(q_{I_{i}})), \tag{3}\] where \(\theta\) is an open hyperparameter that controls the degree of change for the indices, and \(id^{\mathcal{X}_{q}}(\cdot)\) is a function that returns, for some quality \(q\), its index within the distribution \(\mathcal{X}_{q}\). **Final steps.** To avoid bias from randomly selecting mated pairs, we also repeat the entire process \(R\) times and average the final optimized quality indices, \(\overline{id(q_{I_{i}}^{*})}=\frac{1}{R}\sum_{r=1}^{R}id_{r}^{\mathcal{X}_{q}}(q_{I_{i}}^{*})\), for all images. The images are then sorted by the calculated optimized quality indices \(\overline{id(q_{I_{i}}^{*})}\) and assigned quality scores according to the resulting sorted order and the original quality score distribution \(\mathcal{X}_{q}\). Fig. 2: **Overview of Label Optimization.** We present a visualization of the proposed optimization scheme. Based on the embeddings \(\{e_{I_{i}}\}_{i=1}^{N_{k}}\), we first generate mated image pairs. From the image pairs, we compute the pair similarity distribution \(\mathcal{X}_{s}\) using the cosine similarity of the image embeddings. At the same time, we also construct the quality distribution \(\mathcal{X}_{q}\) from the given quality scores \(\{q_{I_{i}}\}_{i=1}^{N}\). The mean similarity index \(\overline{id_{I_{i}}^{s}}\), calculated as the average index of all image pairs from \(\mathcal{I}\), is then used to update the quality index \(id^{\mathcal{X}_{q}}(q_{I_{i}})\), using the equation presented above.
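A minimal NumPy sketch of the index-update rule of Eq. (3); for simplicity, the average pair indices are assumed to be already rescaled to the range of the quality distribution, the inputs are synthetic, and \(\theta\) follows the paper's reported value:

```python
# Index-based quality-label optimization (single repetition).
import numpy as np

rng = np.random.default_rng(0)
q = rng.random(1000)                         # prior quality scores q_i
avg_pair_idx = rng.integers(0, 1000, 1000)   # mean pair indices (rescaled)

theta = 0.001
rank = np.empty(len(q), dtype=float)         # id^{X_q}(q_i): position of
rank[np.argsort(q)] = np.arange(len(q))      # each score in sorted X_q

new_rank = rank + theta * (avg_pair_idx - rank)   # Eq. (3)
# Re-assign scores from the *original* sorted distribution in the new
# order, so the overall quality distribution is preserved; the paper
# additionally repeats this R times and averages the indices.
q_star = np.sort(q)[np.argsort(np.argsort(new_rank))]
```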
### _Transfer Learning_ One of the main goals of FIQA techniques is to improve the stability and performance of FR systems. We propose to use a pre-trained state-of-the-art FR model for quality prediction, as it efficiently extracts identity information from given facial images. Moreover, the embeddings generated by state-of-the-art FR models already contain some information about the quality of the input image. Formally, from an FR model \(M\), we construct a quality regression model \(H\circ M\), where \(H\) represents a regression head. The regression head \(H\) attempts to extract the quality of the input image, \(q_{i}=H(e_{I_{i}})\), from the embedding \(e_{I_{i}}=M(I_{i})\) and is learned through an \(L_{1}\) loss applied over the optimized labels. To improve the transfer-learning process, we normalize the optimized quality scores to the interval \([0,1]\). ## IV Experiments and Results ### _Experimental Setup_ **Training Database.** To train the proposed approach, a large-scale database of diverse facial images with rich appearance variability is needed. To this end, we select the VGGFace2 database [21], which contains over \(3\) million images of more than \(9000\) individuals. Images in the database vary in terms of facial pose, lighting conditions, image resolution, occlusions, and other similar factors that greatly affect the overall quality and appearance of the facial images, as also illustrated in Fig. 3 for three individuals (in columns) from the database. **Evaluation Setting.** We use six state-of-the-art FIQA methods as baselines to evaluate the proposed optimization scheme, i.e., CR-FIQA [22], FaceQAN [13], MagFace [15], SER-FIQ [12], SDD-FIQA [9] and PCNet [11]. The baselines and the learned quality-regression networks are evaluated on five commonly used benchmark databases: XQLFW [23], CPLFW [24], CFP-FP [25], CALFW [26] and LFW [27]. As the pre-trained FR model, we use ArcFace [28] with a ResNet100 architecture, trained on the MS1MV3 database using an angular margin loss. For the performance evaluation we consider two different scenarios: \((i)\) the _same-model scenario_, where we use the ArcFace model for both quality-score prediction and generation of the performance indicators, and \((ii)\) the _cross-model scenario_, where ArcFace is used for quality assessment, and the CurricularFace [29] and ElasticFace-Cos+ [30] models are utilized to evaluate performance. Both of the test models are based on the ResNet100 architecture, but CurricularFace was trained on MS1MV2, while ElasticFace was trained with CASIA-WebFace and MS1MV2. **Performance Evaluation.** The performance of a FIQA technique directly correlates with its ability to properly rank images of similar quality. Therefore, to evaluate our approach, we follow the standard FIQA evaluation methodology and use Error-versus-Reject-Characteristic (ERC) curves as the basis for performance reporting [7, 13, 15, 22]. ERC curves measure the False Non-Match Rate (FNMR) at a predefined False Match Rate (FMR), typically fixed at \(0.001\), at various rates of dropping (i.e., leaving unconsidered) the lowest-quality images. Specifically, we report the Area Under the ERC Curves (AUC) as our main performance indicator, where smaller values indicate better performance.
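A hedged sketch of how such an ERC curve and its AUC can be computed; the comparison scores and quality values here are synthetic illustrations:

```python
# ERC: FNMR at a threshold calibrated for FMR = 1e-3, as the
# lowest-quality fraction of mated pairs is progressively discarded.
import numpy as np

rng = np.random.default_rng(0)
mated = rng.normal(0.6, 0.15, 5000)        # mated-pair similarity scores
nonmated = rng.normal(0.1, 0.15, 50000)    # non-mated similarity scores
quality = rng.random(5000)                 # (minimum) quality per mated pair

thr = np.quantile(nonmated, 1.0 - 1e-3)    # threshold giving FMR ~ 0.001

drop_rates = np.linspace(0.0, 0.5, 11)
fnmr = np.array([
    np.mean(mated[quality >= np.quantile(quality, r)] < thr)
    for r in drop_rates                    # discard lowest-quality pairs
])
# trapezoidal area under the ERC curve (smaller is better)
auc = float(np.sum((fnmr[1:] + fnmr[:-1]) / 2 * np.diff(drop_rates)))
```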
**Implementation Details.** When clustering the embedding space of each individual within the VGGFace2 database, we set the number of clusters \(C\) to \(20\). Consequently, we generate \(C-1=19\) mated image pairs for each image, which means that the full list of mated pairs consists of approximately \(60\) million pairs. For the hyperparameter \(\theta\) we use a relatively small value of \(0.001\), since the goal is to optimize the already computed baseline quality scores. We repeat the whole process \(10\) times and average the final results. ### _Results_ Before presenting results, we note that SER-FIQ was used in the construction of the XQLFW database, so any results that combine the two are excluded from the presented analysis. **Same-Model Results.** Table I shows the AUC values produced directly with the original FIQA methods (labeled _Baseline_) as well as the AUC scores of the quality-regression network trained using our optimized labels (marked _Optimized_). For readability purposes, the AUC scores are multiplied by \(10^{3}\) and rounded to one decimal place. We observe that in most cases the results of our approach are better than those of the underlying FIQA approaches. The only exception to this observation is CR-FIQA, where a concrete improvement is observed only for the hardest of the considered datasets, i.e., XQLFW, while the results for the remaining datasets are mostly close, but deteriorate drastically for CPLFW. For all other methods the results consistently improve, with occasional outliers on the CALFW or CPLFW benchmarks. **Cross-Model Results.** Table II again shows the AUC values of both the baseline and our (optimized) regression-based FIQA techniques, but this time computed for the cross-model scenario, where the FR model used for estimating the quality of the input images differs from the FR model used for performance reporting. (Fig. 3: **Example VGGFace2 images.** Images of three distinct individuals are shown, illustrating the amount of variability present in the database.) Looking at the individual methods, CR-FIQA and FaceQAN do not show a clear edge for either the baseline or optimized results. While for the hardest benchmark, XQLFW, the optimized variant always performs better than the baseline variant, the opposite is true for CALFW, which contains cross-age image data. For all other FIQA approaches, the proposed optimization method yields better results, and outperforms the baselines in all cases except for PCNet on CALFW. The results are consistent for both the ElasticFace and CurricularFace models. **Cross-Model vs. Same-Model Results.** Comparing the cross-model with the same-model results, many similarities can be observed. The performance benefit due to the optimization approach is relatively unconvincing for CR-FIQA, while the results for all other methods are mirrored between the two evaluation schemes. The biggest difference is seen for FaceQAN, where the proposed method performs comparatively worse in the cross-model evaluation setting. **Qualitative Analysis.** If we look more closely at how the proposed approach works, we see that the distribution of the initial quality scores remains the same under the optimization scheme. This is because the method only rearranges the order of the images and assigns them quality scores from the prior distribution.
However, a potential problem with this approach is that the quality scores of images in higher-density areas of the distribution are harder to change than the quality scores of images in lower-density areas. This phenomenon is well illustrated in Fig. 4, where for each of the FIQA methods used, a histogram of the prior quality scores over VGGFace2 is presented together with a scatter plot, where each point represents the prior quality of a given image on the \(x\)-axis and the optimized quality score on the \(y\)-axis. Note how the quality scores in areas of lower density change drastically, while almost no movement is observed in higher-density areas. **Ablation Study.** To demonstrate how the optimization of the quality labels affects the final results, we present in Table III AUC scores obtained with a quality-regression network trained with the initial (unoptimized) quality labels, as well as the performance gain(-)/loss(+) due to the optimization procedure (in brackets). We use the two most difficult benchmarks, CPLFW and XQLFW, as well as the LFW benchmark for this ablation study. From the presented results, we see that the effectiveness of the optimization in the _same-model scenario_, i.e. with ArcFace, to a certain extent depends on the chosen FIQA technique. For CR-FIQA and SER-FIQ the results do not really seem to favour the optimization approach, as most of the performance gains observed in Table I appear to be a consequence of the transfer learning step. On the _cross-model_ side, the results for both ElasticFace and CurricularFace seem to be more in favour of the optimized labels, with only a few counterexamples on the LFW database. **Run-time performance.** Because we use a regression-based model trained with the optimized quality scores, the run-time performance of our approach is (approximately) the same regardless of the initial FIQA method used as the basis for the reference quality scores. Thus, the proposed transfer learning step can also be seen as a knowledge distillation procedure that allows us to retain the performance of a given FIQA technique while ensuring an (approximately) fixed run-time complexity, as evidenced by the run-times in Table IV, computed on a desktop PC with an Intel i9-10900KF (\(3.70\) GHz) CPU and an Nvidia 3090 GPU with \(24\) GB of video RAM. Fig. 4: **Qualitative analysis of the proposed approach.** For each FIQA method, we show the prior distribution of the quality scores of the VGGFace2 database, and an associated scatter plot showing the changes in the quality scores due to our optimization approach. ## V Conclusion We presented a novel optimization approach that aims to improve the performance of modern FIQA approaches. A thorough evaluation was performed using multiple state-of-the-art FIQA methods, datasets and FR models. The results of the evaluation showed significant performance improvements in most cases when using the optimization scheme, in both the same-model and cross-model settings. As part of our future work, we plan to incorporate multiple sources of quality scores into the optimization procedure to benefit from the complementary quality descriptions provided by different FIQA techniques.
2307.13798
Estimates of the reproduction ratio from epidemic surveillance may be biased in spatially structured populations
An accurate and timely estimate of the reproduction ratio R of an infectious disease epidemic is crucial to make projections on its evolution and set up the appropriate public health response. Estimates of R routinely come from statistical inference on timelines of cases or their proxies like symptomatic cases, hospitalizations, deaths. Here, however, we prove that these estimates of R may not be accurate if the population is made up of spatially distinct communities, as the interplay between space and mobility may hide the true epidemic evolution from surveillance data. This means that surveillance may underestimate R over long periods, to the point of mistaking a growing epidemic for a subsiding one, misinforming public health response. To overcome this, we propose a correction to be applied to surveillance data that removes this bias and ensures an accurate estimate of R across all epidemic phases. We use COVID-19 as case study; our results, however, apply to any epidemic where mobility is a driver of circulation, including major challenges of the next decades: respiratory infections (influenza, SARS-CoV-2, emerging pathogens), vector-borne diseases (arboviruses). Our findings will help set up public health response to these threats, by improving epidemic monitoring and surveillance.
Piero Birello, Michele Re Fiorentin, Boxuan Wang, Vittoria Colizza, Eugenio Valdano
2023-07-25T20:05:01Z
http://arxiv.org/abs/2307.13798v1
# Estimates of the reproduction ratio from epidemic surveillance may be biased in spatially structured populations

###### Abstract

An accurate and timely estimate of the reproduction ratio \(R\) of an infectious disease epidemic is crucial to make projections on its evolution and set up the appropriate public health response. Estimates of \(R\) routinely come from statistical inference on timelines of cases or their proxies like symptomatic cases, hospitalizations, deaths. Here, however, we prove that these estimates of \(R\) may not be accurate if the population is made up of spatially distinct communities, as the interplay between space and mobility may hide the true epidemic evolution from surveillance data. This means that surveillance may underestimate \(R\) over long periods, to the point of mistaking a growing epidemic for a subsiding one, misinforming public health response. To overcome this, we propose a correction to be applied to surveillance data that removes this bias and ensures an accurate estimate of \(R\) across all epidemic phases. We use COVID-19 as case study; our results, however, apply to any epidemic where mobility is a driver of circulation, including major challenges of the next decades: respiratory infections (influenza, SARS-CoV-2, emerging pathogens), vector-borne diseases (arboviruses). Our findings will help set up public health response to these threats, by improving epidemic monitoring and surveillance.

## Main text

The reproduction ratio \(R\) is arguably the most used indicator to monitor the trend in the evolution of an infectious disease epidemic. \(R\) is the average number of secondary cases that each case generates: when it is larger than one, the epidemic wave is growing; when instead it is lower than one, it is subsiding [1; 2]. The reproduction ratio also measures the effectiveness of public health interventions, whose overarching goal is to bring an unconstrained epidemic (\(R>1\)) below the epidemic threshold of \(R=1\). Accurately estimating the reproduction ratio is thus necessary to ascertain the current epidemic evolution, predict short-term trends, perform scenario analysis and plan public health action [3; 4; 5; 6; 7]. The standard way to measure \(R\) is to infer it from data coming from epidemiological surveillance [8; 9; 10; 11; 12]. These data may be timelines of detected cases or their proxies, like hospitalizations or deaths, and this approach applies to diseases spanning radically different epidemiology, transmission routes and burden, like influenza [13; 14], measles [15], COVID-19 [16], Ebola [17], cholera [18], dengue [19], malaria [20]. The resulting surveillance-based estimates of \(R\) are routinely used to design interventions [21]. Notwithstanding, we argue in this study that surveillance data may lead to biased estimates of the reproduction ratio in spatially structured populations, where geographically distinct communities (e.g., cities) are connected through human mobility. We will show that the complex interplay between spatial heterogeneities in transmissibility and the mixing network driven by human mobility hides the true dynamic structure of the epidemic process from population-level surveillance data.
This mirrors a limitation of most mathematical models of epidemic spread: while they integrate space and spatial data at high resolution [22; 23; 24; 25; 26; 27], doing the reverse, i.e., extracting high-resolution information from limited and coarse-grained surveillance data in the absence of knowledge of the underlying spatial dynamics, is much harder [28; 29; 30]. Crucially, this means that inference on surveillance data may either overestimate or underestimate the reproduction ratio over long periods. This is of great public health relevance: measuring, for instance, a reproduction ratio below one when the true value is above one would falsely signal that the epidemic is under control. Here, we study this bias, identify its origin and compute its magnitude. Then, we propose a correction to case incidence data that removes this bias and ensures that surveillance-based estimates of the reproduction ratio consistently give the true reproduction ratio of the epidemic. Our theoretical findings apply to any epidemic featuring a relatively short generation time and for which mobility is a contributing factor in shaping its circulation within and across communities. This covers some of the global health threats that are being worst affected by climate change and demographic trends: viruses responsible for respiratory infections, including SARS-CoV-2 and influenza [31]; vector-borne pathogens, including the arboviruses dengue, chikungunya and Zika [32, 33]; and emergence events of new viruses or new viral strains [34]. To test and illustrate our findings, we use the French COVID-19 epidemic (see Fig. 1) before the advent of vaccination as a case study.

## Theoretical formalism

The Galton-Watson branching process is a customary framework to model epidemic spread [35, 36, 37]. Let \(I(0)\) be the initial number of cases, \(I(1)\) the expected number of cases that the initial cases generate, and, generally, let \(I(t)\) be the expected number of cases in the \(t\)-th generation. By definition of the reproduction ratio, we have that \(I(t)=RI(t-1)\), which implies that \(I(t)=R^{t}I(0)\). This equation means that the number of cases grows exponentially if \(R>1\). In any real outbreak, other factors, like acquired immunity, seasonal effects or public health interventions, will at some point curb this exponential growth by changing the value of \(R\). Notwithstanding, we may assume \(R\) to be fairly constant either in the early phase of an outbreak, when those effects have not yet kicked in, or when the timescale at which immunity and mixing change is much longer than epidemic evolution [38, 39]. In the case of a population composed of \(N\) spatial communities, we may define the vector \(\mathbf{I}(t)\in\mathbb{R}^{N}\), whose component \(I(t)_{i}\) is the number of cases in generation \(t\) and community \(i\). Likewise, the _reproduction operator_ \(\mathbf{R}\in\mathbb{R}^{N,N}\) encodes, in its component \(R_{ij}\), the average number of cases generated among the residents of community \(i\), by a case belonging to community \(j\) [40]. This definition of \(\mathbf{R}\), and the results that we are going to derive from it, applies to any epidemic and disease.
The specific parametrization of \(\mathbf{R}\) will instead depend on the specific transmission dynamics and natural history of the disease: for directly-transmitted diseases \(\mathbf{R}\) typically depends on mixing patterns among communities [41]; for vector-borne diseases the local abundance of the host vectors, modulating the effective transmissibility, needs to be factored in, too [32, 42]. The expected epidemic evolution then follows the equation \[\mathbf{I}(t)=\mathbf{R}^{t}\mathbf{I}(0). \tag{1}\] \(\mathbf{I}(t)\) encodes both the total number of cases in the population in generation \(t\) and its spatial distribution. We define the former as the number \(I_{tot}(t)=\sum_{i}I(t)_{i}\) and the latter as the vector \(\mathbf{x}(t)\in\mathbb{R}^{N}\) whose components are \(x(t)_{i}=I(t)_{i}/I_{tot}(t)\). The reproduction ratio \(R\) of this process is the spectral radius of \(\mathbf{R}\) (i.e., the largest among the absolute values of its eigenvalues) [43], which is itself also a (nondegenerate) eigenvalue, because \(\mathbf{R}\) is by definition nonnegative and can be assumed irreducible (see Supplementary Methods Section 1.3) so that the Perron-Frobenius theorem holds [44]. We also define \(\mathbf{v}\) as the Perron (right) eigenvector associated with \(R\). \(\mathbf{v}\) is strictly positive (\(v_{i}>0\)) and we normalize it so that \(\sum_{i}v_{i}=1\). Measuring the true reproduction ratio of the system thus requires knowledge of the spectral structure of \(\mathbf{R}\), i.e., of the spatial structure of the epidemic. Surveillance instead measures the reproduction ratio from the evolution of the incidence of infections or their proxies. This may happen globally, at the level of the entire population, or locally in each community. In our framework, the population-level observed reproduction ratio is \(S(t)=I_{tot}(t+1)/I_{tot}(t)\), i.e., the generational growth rate. The local community-level observed reproduction ratio is instead \(s_{i}(t)=I(t+1)_{i}/I(t)_{i}\). A simple observation then underpins our study: in general \(S(t)\) and \(s_{i}(t)\) may be different from \(R\), the spectral radius of \(\mathbf{R}\), and, if that is the case, surveillance will not measure the true reproduction ratio. To explore this, we will first determine the conditions leading to an unbiased measure of the reproduction ratio: \(S(t)=R\).

## When the true and observed reproduction ratios match

By virtue of the Perron-Frobenius theorem, \(\mathbf{R}^{t}\to R^{t}\mathbf{v}\mathbf{v}^{*}\) asymptotically at large \(t\), where \(\mathbf{v}^{*}\) is the dual of \(\mathbf{v}\) (easily computable as the left Perron eigenvector of \(\mathbf{R}\)) and normalized so that \(\mathbf{v}^{*}\mathbf{v}=1\). Asymptotically then equation (1) becomes \(\mathbf{I}(t)\rightarrow\left[\mathbf{v}^{*}\mathbf{I}(0)\right]R^{t}\mathbf{v}\), which implies that \(\mathbf{x}(t)\rightarrow\mathbf{v}\), \(S(t)\to R\) and \(s_{i}(t)\to R\). The epidemic dynamics thus brings the spatial distribution of cases toward \(\mathbf{v}\), which we will refer to as the _equilibrium spatial distribution of infections_. Thus, for any epidemic dynamics, if cases are spatially distributed as the equilibrium distribution (\(\mathbf{x}=\mathbf{v}\)), then the error is zero and the true reproduction ratio is measured both globally (\(S=R\)) and locally (\(s_{i}=R\)). Fig. 1**b** shows evidence of the convergence to \(\mathbf{v}\) during the COVID-19 epidemic in France in late 2020 and early 2021.
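Before turning to real data, the following minimal sketch simulates equation (1) on a small, randomly generated reproduction operator and tracks both the convergence of the spatial distribution \(\mathbf{x}(t)\) to \(\mathbf{v}\) and of the observed ratio \(S(t)\) to the true \(R\); the operator and the seeding below are purely illustrative, not derived from any real mobility data.

```python
# Minimal sketch: branching dynamics I(t) = R^t I(0) on a toy reproduction
# operator, illustrating x(t) -> v and S(t) -> R (Perron-Frobenius).
import numpy as np

rng = np.random.default_rng(0)
N = 4
Rop = rng.uniform(0.1, 1.0, size=(N, N))   # nonnegative, irreducible toy operator

eigvals, eigvecs = np.linalg.eig(Rop)
k = np.argmax(np.abs(eigvals))
R_true = eigvals[k].real                   # true reproduction ratio (spectral radius)
v = np.abs(eigvecs[:, k].real)
v /= v.sum()                               # right Perron eigenvector, sum(v) = 1

I = np.array([100.0, 0.0, 0.0, 0.0])       # all initial cases in community 0
for t in range(15):
    I_next = Rop @ I
    S = I_next.sum() / I.sum()             # observed reproduction ratio S(t)
    x = I / I.sum()                        # spatial distribution of cases x(t)
    cosang = np.clip(x @ v / (np.linalg.norm(x) * np.linalg.norm(v)), -1.0, 1.0)
    print(f"t={t:2d}  S(t)={S:.4f}  R={R_true:.4f}  angle(x,v)={np.arccos(cosang):.4f}")
    I = I_next
```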
We used mobility data from Meta [45], a multinational technology company, to estimate \(\mathbf{R}\) for the 94 departments of mainland France, excluding Corsica (see Reconstruction of the reproduction operator from data). We reconstructed \(\mathbf{x}\) from surveillance data released by the French public health authority (see Supplementary Methods Section 1.1). In a period when \(\mathbf{R}\) was fairly constant (as required by our formalism), the angle between \(\mathbf{x}\) and \(\mathbf{v}\) consistently decreased. This angle, however, never reached zero because \(\mathbf{R}\) then changed and, consistently, so did the equilibrium distribution \(\mathbf{v}\). The description of the whole course of an epidemic wave indeed requires a time-varying \(\mathbf{R}\), and that is beyond the scope of this study. Locally in time, however, in periods during which \(\mathbf{R}\) is fairly constant, the system will evolve towards the equilibrium distribution determined by the Perron eigenvector of \(\mathbf{R}\) at that time. But there exists a class of operators \(\mathbf{R}\) for which the error is globally zero even out of equilibrium (\(\mathbf{x}\neq\mathbf{v}\)). First, let us rewrite the observed reproduction ratio in matrix form as \[S(t)=\frac{I_{tot}(t+1)}{I_{tot}(t)}=\mathbf{F}^{T}\mathbf{R}\mathbf{x}(t), \tag{2}\] where we introduced \(\mathbf{F}\) as the unit column vector (\(F_{i}=1\,\forall i\)). If we assume that \(\mathbf{v}^{*}=\mathbf{F}^{T}\) (\(v_{i}^{*}=1\)), then we can apply \(\mathbf{R}\) leftwards in equation (2) and get \(S(t)=R\) at any time and for any spatial distribution \(\mathbf{x}\). Now, the requirement \(\mathbf{v}^{*}=\mathbf{F}^{T}\) imposes that \(\mathbf{R}\) is proportional to a left-stochastic matrix: indeed \(\mathbf{F}^{T}\mathbf{R}=R\mathbf{F}^{T}\) means \(\sum_{j}R_{ji}=R\), so that each column sums to \(R\). \(r_{i}\equiv\sum_{j}R_{ji}\) is by definition the expected number of secondary cases generated by a case resident of \(i\), regardless of where they are generated. If \(r_{i}\) is constant, every case, anywhere, has the same overall _transmission potential_: \(r_{i}=R\ \forall i\). If this is the case, the observed reproduction ratio is unbiased regardless of the spatial epidemic coupling among communities. This implies that only the combination of spatial epidemic coupling and spatial heterogeneity in transmission potential may cause a global difference between the observed and the true reproduction ratios. Notably, locally-measured reproduction ratios may instead differ from \(R\) even in the case \(\mathbf{v}^{*}=\mathbf{F}^{T}\).

## When the true and observed reproduction ratios do not match

We now focus on the out-of-equilibrium dynamics (\(\mathbf{x}(t)\neq\mathbf{v}\)) and measure the bias in the estimate of \(R\) as the relative difference between the observed and the true reproduction ratios \[\Delta(t)=\frac{S(t)-R}{R}. \tag{3}\] We call \(\Lambda_{\alpha}\) (\(\alpha=1,\cdots,N-1\)) the (possibly degenerate) eigenvalues of \(\mathbf{R}\) other than \(R\) and, by the Perron-Frobenius theorem, \(|\Lambda_{\alpha}|<R\).
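Before deriving the explicit form of \(\Delta(t)\), a quick numerical check of the unbiasedness result of the previous section may be helpful: the sketch below (synthetic numbers only) builds an operator whose columns all sum to the same value and verifies that equation (2) returns \(R\) for an arbitrary out-of-equilibrium \(\mathbf{x}\), and that breaking the constant-column-sum property reintroduces a bias.

```python
# Sanity check: if every column of R sums to the same value R -- i.e., every
# case has the same transmission potential r_i = R -- then S(t) = R for any
# spatial distribution x, even far from equilibrium.
import numpy as np

rng = np.random.default_rng(1)
N = 5
M = rng.uniform(size=(N, N))
Rop = 1.3 * M / M.sum(axis=0, keepdims=True)   # each column sums to R = 1.3

x = rng.dirichlet(np.ones(N))                  # arbitrary out-of-equilibrium x
S = np.ones(N) @ (Rop @ x)                     # F^T R x, equation (2)
print(S)                                       # = 1.3 up to round-off

# Breaking the constant column-sum property reintroduces the bias:
Rop[:, 0] *= 2.0                               # community 0 now has r_0 = 2.6
print(np.ones(N) @ (Rop @ x))                  # generally != 1.3
```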
With calculations reported in Calculation of \(\Delta(t)\): proof of equation (4), we find that \[\Delta(t)=C(t)\sum_{\alpha}z_{\alpha}\left(1-\frac{\Lambda_{\alpha}}{R}\right)\left(\frac{\Lambda_{\alpha}}{R}\right)^{t}, \tag{4}\] where \(C(t)\) is positive and asymptotically constant, and \(z_{\alpha}\) is a (possibly complex) number proportional to the scalar product between \(\mathbf{F}\) and the projection of the initial condition \(\mathbf{x}(0)\) on the \(\alpha\)-th mode. The modes in equation (4) for which \(\Lambda_{\alpha}\approx R\), or that are almost orthogonal to the initial configuration \(\mathbf{x}(0)\), are suppressed from the start and do not bias the estimate of the reproduction ratio. The other modes, instead, possibly bias the reproduction ratio, with an effect that becomes smaller as the epidemic evolves, with a characteristic decay time \(\tau_{\alpha}=1/\log\left(R/|\Lambda_{\alpha}|\right)\). In addition, those modes for which \(\Lambda_{\alpha}\) is not real and positive have an oscillating term. Specifically, if \(\Lambda_{\alpha}\) has a nonzero imaginary part, then its complex conjugate is also an eigenvalue and their combined contribution oscillates with period \(T_{\alpha}=2\pi/|\theta_{\alpha}|\), where \(\theta_{\alpha}=\arg\Lambda_{\alpha}\) (with \(\theta_{\alpha}\in(-\pi,\pi]\)). This also holds for negative eigenvalues (\(\theta_{\alpha}=\pi\)) - see Calculation of \(\Delta(t)\): proof of equation (4) for a detailed calculation. These modes with \(\theta_{\alpha}\neq 0\) will induce visible oscillations in \(\Delta(t)\) if they oscillate faster than their characteristic decay time. We can quantify this by requiring the oscillation period to be smaller than the decay time: \(T_{\alpha}\leq\tau_{\alpha}\). This gives the inequality \[\frac{|\Lambda_{\alpha}|}{R}\geq e^{-\frac{|\theta_{\alpha}|}{2\pi}}\geq e^{-\frac{1}{2}}\approx 0.61, \tag{5}\] where the lower bound in equation (5) occurs when \(\Lambda_{\alpha}\) is real and negative (\(\theta_{\alpha}=\pi\)). To test the predictions of our theory in a realistic scenario, we considered again the COVID-19 epidemic in France and built a stochastic metapopulation model using the same mobility data as in Fig. 1**b**. The details of the model are reported in Epidemic simulations. We measured the true and the observed reproduction ratios, reported in Fig. 2, which shows that surveillance-based estimates may remain consistently biased for a long period and, depending on where the epidemic wave started (initial conditions), they may either overestimate or underestimate the true reproduction ratio. The case depicted in Fig. 2**b** is particularly concerning: during the first month of the simulated epidemic, surveillance records a lower-than-one reproduction ratio, which would mistakenly point to a subsiding outbreak. In reality, the true reproduction ratio is fixed to well above one, and only after two months of simulated epidemic does the surveillance-based estimate reach the true value. Alongside the estimate of \(S\) given within the framework of the Galton-Watson process (equation (2)), in Fig. 2**a**,**b** we also provide an estimate of the observed reproduction ratio by feeding incident cases to the library _EpiEstim_ [11], one of the most popular tools to compute the reproduction ratio from surveillance data. The fact that the two measures overlap confirms that the Galton-Watson process correctly reproduces the phenomenology under study even in realistic scenarios.
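The decay times and oscillation periods of equation (4) can be read off directly from the spectrum of a given operator; the sketch below does this for a toy random operator and applies the visibility criterion \(T_{\alpha}\leq\tau_{\alpha}\) of equation (5). The operator is illustrative only, not the one built from the French mobility data.

```python
# Reading tau_alpha and T_alpha off the spectrum of a toy operator, and
# flagging modes that would induce visible oscillations (equation (5)).
import numpy as np

rng = np.random.default_rng(2)
Rop = rng.uniform(size=(6, 6))

lam = np.linalg.eigvals(Rop)
k = np.argmax(np.abs(lam))
R = lam[k].real                                # true reproduction ratio
others = np.delete(lam, k)                     # the Lambda_alpha

for L in others:
    tau = 1.0 / np.log(R / np.abs(L))          # decay time of the mode
    theta = np.angle(L)                        # theta_alpha in (-pi, pi]
    T = 2 * np.pi / abs(theta) if abs(theta) > 1e-12 else np.inf
    print(f"|L|/R={np.abs(L)/R:.3f}  tau={tau:6.2f}  T={T:6.2f}  "
          f"visible oscillation: {T <= tau}")
```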
Notwithstanding, more detailed frameworks [43, 46, 47] could be used to study the impact of heterogeneous generation intervals. Finally, Fig. 2**c** and Fig. 2**d** show that locally measured reproduction ratios converge to the true value at different times and with different speeds, and that, at the same moment in time, some communities may overestimate \(R\) and some underestimate it. This last point can actually be proven to always be the case. The Collatz-Wielandt inequalities tell us that, for any spatial distribution of cases \(\mathbf{x}\), \(\min_{i|x_{i}\neq 0}(\mathbf{Rx})_{i}/x_{i}\leq R\) and \(\max_{i|x_{i}\neq 0}(\mathbf{Rx})_{i}/x_{i}\geq R\). Given that \(s_{i}=(\mathbf{Rx})_{i}/x_{i}\), out of equilibrium there will always be at least one community overestimating the true reproduction ratio (\(s_{i}>R\)) and one underestimating it (\(s_{i}<R\)). Fig. 2 shows no oscillations in the sign of \(\Delta(t)\), compatible with the fact that the operator \(\mathbf{R}\) we built from mobility data has only real and positive eigenvalues. We extended our analysis to 32 European countries: 24 members of the European Union (excluding Cyprus, Ireland and Latvia for lack of data) plus Albania, Bosnia and Herzegovina, Iceland, Montenegro, Norway, Serbia, Sweden, UK - see details in Supplementary Figure 1. For all of them we built the operator \(\mathbf{R}\) using colocation and population data at the admin-2 level, similarly to what we did for France. We found at least one real, negative eigenvalue in 11 out of 32 countries, but nowhere did they cause visible oscillations, as the oscillation period was always larger than twice the decay time. We did not find non-real eigenvalues. This raises the question of whether oscillations are actually observable in real systems. Rigorously determining the conditions under which a generic nonnegative matrix has a specific spectrum is not possible, except in special or low-dimensional cases [48]. We can, however, plausibly associate the presence of an oscillating mode with period \(T_{\alpha}\) with the existence of a cycle of approximate length \(T_{\alpha}\) in the (weighted, directed) network which has \(\mathbf{R}\) as its adjacency matrix [49, 50]. Slow oscillations (large \(T_{\alpha}\)) would then require the presence of long cycles in \(\mathbf{R}\), which are unlikely to be generated by the recurrent mobility patterns that drive the spatial spread of epidemic outbreaks following pathogen importation [26, 51]. Fast oscillations, and in particular those generated by real, negative eigenvalues, may instead be more common. They would require epidemics that are strongly coupled, i.e., where pairs of communities exist in which infected residents generate, on average, more cases in the other community than in their own, but this is not the case in the countries we examined and for the spatial resolution we considered. In the absence of oscillations, the observed reproduction ratio consistently either overestimates or underestimates the true reproduction ratio, as \(\Delta(t)\) decays to zero without ever changing sign. In this case, we can determine the sign of the bias from the initial condition: \(\Delta(0)=\left(\sum_{j}r_{j}x(0)_{j}-R\right)/R\). By the Perron-Frobenius theorem, \(j_{min},j_{max}\) exist so that \(r_{j_{min}}\leq R\) and \(r_{j_{max}}\geq R\). Thus, the initial location of cases will completely determine the sign of the error that surveillance will make.
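This sign argument is easy to check numerically; in the sketch below (a toy operator with made-up numbers, not the French data), seeding all cases in the community with the lowest or the highest transmission potential flips the sign of the initial bias \(\Delta(0)\).

```python
# The transmission potentials r_j (column sums of Rop) determine whether an
# epidemic seeded in community j starts out over- or underestimated.
import numpy as np

rng = np.random.default_rng(3)
Rop = rng.uniform(size=(5, 5))
R = np.abs(np.linalg.eigvals(Rop)).max()       # true reproduction ratio
r = Rop.sum(axis=0)                            # transmission potentials r_j

j_min, j_max = np.argmin(r), np.argmax(r)
for j, label in [(j_min, "j_min"), (j_max, "j_max")]:
    x0 = np.zeros(5)
    x0[j] = 1.0                                # all initial cases in community j
    Delta0 = (r @ x0 - R) / R                  # initial relative bias
    print(f"seeding in {label}: Delta(0) = {Delta0:+.3f}")
# Delta(0) < 0 when seeding in j_min (underestimation), > 0 in j_max.
```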
Indeed, if the epidemic starts in \(j_{max}\) - or in general in communities with high transmission potential - surveillance will consistently overestimate the true reproduction ratio until the bias decays to zero. Conversely, if it starts in \(j_{min}\) - or in communities with low transmission potential - surveillance will underestimate \(R\).

## Correction to surveillance data

So far we have proven that surveillance-based estimates of the reproduction ratio may be biased. We will now propose a way to correct for this bias. Equation (2) computes, within our simplified model, the reproduction ratio in terms of the overall observed incidence of cases \(I_{tot}(t)\). This can also be trivially interpreted as proportional to the unweighted average of the incidence across communities: \(I_{tot}(t)=N\left(\sum_{i}I_{i}(t)/N\right)\). From this, we define a new modified incidence using an average weighted by the entries of the Perron dual vector (here rescaled so that \(\sum_{i}v_{i}^{*}=1\); any overall rescaling of \(\mathbf{v}^{*}\) leaves the estimate below unchanged): \[I_{tot}^{(v)}(t)=N\left(\frac{\sum_{i}v_{i}^{*}I_{i}(t)}{\sum_{i}v_{i}^{*}}\right)=N\sum_{i}v_{i}^{*}I_{i}(t)=N\mathbf{v}^{*}\mathbf{I}(t). \tag{6}\] We now define a new modified observed reproduction ratio using the modified incidence \(I_{tot}^{(v)}(t)\) - compare this with equation (2): \[S^{(v)}(t)=\frac{I_{tot}^{(v)}(t+1)}{I_{tot}^{(v)}(t)}=\frac{\mathbf{v}^{*}\mathbf{I}(t+1)}{\mathbf{v}^{*}\mathbf{I}(t)}=\frac{\mathbf{v}^{*}\mathbf{R}\mathbf{I}(t)}{\mathbf{v}^{*}\mathbf{I}(t)}=R\frac{\mathbf{v}^{*}\mathbf{I}(t)}{\mathbf{v}^{*}\mathbf{I}(t)}=R. \tag{7}\] The practical advantage for epidemic monitoring is clear: our correction gives an unbiased estimate of the reproduction ratio from surveillance data all along the epidemic wave, unlike traditional measures. It has, however, two potential drawbacks. The former is that if the initial epidemic seeding occurs in communities where \(v_{i}^{*}\) is small, then \(\mathbf{v}^{*}\mathbf{I}(t)\) will be very small: stochastic fluctuations would then cause large changes in \(S^{(v)}\). In that case, \(S^{(v)}\) may well be accurate, but not precise. Luckily, however, no initial condition can be orthogonal to \(\mathbf{v}^{*}\), whose entries are strictly positive, so even if \(\mathbf{v}^{*}\mathbf{x}\) is initially small, it is likely to increase quickly, and with it the precision of the measurement. In Fig. 3 we show that \(S^{(v)}\) accurately measures the true reproduction ratio from the beginning of the epidemic wave, in the case of the simulated epidemics of Fig. 2. Notably, Fig. 3 also shows that feeding \(I_{tot}^{(v)}(t)\) to _EpiEstim_ instead of \(I_{tot}(t)\) also completely removes the bias in the estimate of the reproduction ratio. Our proposed modified incidence can then be readily incorporated into standard tools for public health surveillance, to improve their accuracy. The latter potential drawback is that our correction requires knowing \(\mathbf{v}^{*}\). We argue, however, that this does not require knowing or measuring \(\mathbf{R}\) in real time (from which \(R\) could then be directly measured) and that a good estimate of \(\mathbf{v}^{*}\) for epidemic monitoring can be computed during _peace time_, from past population and mobility data (pre-epidemic, or from data collected during earlier epidemic phases). Indeed \(\mathbf{v}^{*}\) is more stable than \(\mathbf{R}\), because any change happening homogeneously across communities (e.g., changes in the rate of immunity, public health interventions) would change the latter, not the former.
Fig. 4 compares the standard observed reproduction ratio of COVID-19 in France between late 2020 and March 2021 to our correction. The former is computed with EpiEstim on inferred case incidence; the latter is computed with EpiEstim on the corrected incidence \(I_{tot}^{(v)}\), with \(\mathbf{v}^{*}\) computed from past mobility data. Notably, we tested different choices of \(\mathbf{v}^{*}\) going back up to August 2020, i.e., five months prior to the period under study, which confirms that our correction is robust to using past mobility data to reconstruct \(\mathbf{v}^{*}\). Our correction seems to point to the fact that traditional surveillance underestimated the true reproduction ratio of COVID-19 in France during January and February 2021. This underestimation is even more consequential because surveillance recorded a lower-than-one reproduction ratio for more than two weeks (see also official reports from that time [52]), indicating a subsiding epidemic wave. This is at odds with what we know happened: a growing epidemic wave - the French _third wave_ - that led to a national lockdown, enforced on April 3, 2021, i.e., immediately after the time window depicted in Fig. 4. Our corrected reproduction ratio would have instead consistently signaled a growing epidemic wave throughout the first three months of 2021. This discrepancy carries great significance when put into the context of the debate over public health response at that time. In early 2021 a national curfew was in effect, but cases were rising due to the introduction and gradual takeover of the Alpha variant of SARS-CoV-2. Authorities were wary of additional restrictions and were relying on mass vaccination, despite models suggesting that it might not be enough [53] - only \(3\%\) of the population had received at least one dose by mid-February [52] (week 6 of 2021 in Fig. 4). It is conceivable, albeit circumstantial, that surveillance underestimating the severity of the wave could have contributed to delaying the enforcement of stricter movement restrictions, which in any case became inevitable later in April. Our study describes a practicable way to improve the accuracy of the information that flows from epidemiological surveillance to public health policymakers. And better information may lead to more effective policies for preventing and controlling epidemic threats.

## Methods

### Calculation of \(\Delta(t)\): proof of equation (4)

Combining equation (1) and equation (2) we get the time evolution of the observed reproduction ratio: \[S(t)=\frac{\mathbf{F}^{T}\mathbf{R}^{t+1}\mathbf{x}(0)}{\mathbf{F}^{T}\mathbf{R}^{t}\mathbf{x}(0)}. \tag{8}\] We insert this into equation (3) and get \[\Delta(t)=\frac{1}{R}\frac{\mathbf{F}^{T}(\mathbf{R}-R)\mathbf{R}^{t}\mathbf{x}(0)}{\mathbf{F}^{T}\mathbf{R}^{t}\mathbf{x}(0)}. \tag{9}\] We introduce the eigenvectors \(\mathbf{w}_{\alpha}\) of \(\mathbf{R}\) (other than \(\mathbf{v}\)), with corresponding eigenvalues \(\Lambda_{\alpha}\), and analogously the corresponding dual vectors \(\mathbf{w}_{\alpha}^{*}\). Then, we decompose \(\mathbf{F}^{T}\) in the dual basis: \(\mathbf{F}^{T}=\mathbf{v}^{*}+\sum_{\alpha}\left(\mathbf{F}^{T}\mathbf{w}_{\alpha}\right)\mathbf{w}_{\alpha}^{*}\).
Using this decomposition in equation (9), and applying \(\mathbf{R}\) leftwards on the dual eigenvectors, we get \[\Delta(t)=-\frac{\mathbf{F}^{T}\left[\sum_{\alpha}\left(\frac{\Lambda_{\alpha}}{R}\right)^{t}\left(1-\frac{\Lambda_{\alpha}}{R}\right)\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\right]\mathbf{x}(0)}{\mathbf{F}^{T}\left[\mathbf{v}\mathbf{v}^{*}+\sum_{\alpha}\left(\frac{\Lambda_{\alpha}}{R}\right)^{t}\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\right]\mathbf{x}(0)}. \tag{10}\] The denominator of equation (10) gives \(C(t)\) in equation (4): \[C(t)=\frac{1}{\mathbf{F}^{T}\left[\mathbf{v}\mathbf{v}^{*}+\sum_{\alpha}\left(\frac{\Lambda_{\alpha}}{R}\right)^{t}\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\right]\mathbf{x}(0)}. \tag{11}\] \(C(t)\) is always strictly positive and finite, because its reciprocal is proportional to \(\mathbf{F}^{T}\mathbf{R}^{t}\mathbf{x}(0)\) and tends to \(\mathbf{v}^{*}\mathbf{x}(0)\), i.e., the component of the initial condition onto the eigenspace of the Perron eigenvalue. This component is always nonzero because \(\mathbf{x}(0)\) is nonnegative and nonzero (it is a spatial distribution of cases), and no nonnegative vector can be orthogonal to a strictly positive vector. It is thus the numerator which gives the trend and sign of \(\Delta(t)\). Equation (10) then gives the value of the factors \(z_{\alpha}\) in equation (4): \[z_{\alpha}=-\mathbf{F}^{T}\left(\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\right)\mathbf{x}(0). \tag{12}\] In the case of degenerate eigenvalues, one should simply replace \(\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\) with the appropriate projector over the whole eigenspace. Note that, as discussed before, the denominator in equation (10) is always real and positive, so any complex phase of \(z_{\alpha}\) must arise from \(\Lambda_{\alpha}\) and \(\mathbf{w}_{\alpha}\mathbf{w}_{\alpha}^{*}\).

### Calculation of \(\Delta(t)\): \(\tau_{\alpha},T_{\alpha}\)

We isolate in equation (4) the contribution of each mode \(M_{\alpha}(t)\): \(\Delta(t)=\sum_{\alpha}M_{\alpha}(t)\), where \[M_{\alpha}(t)=M_{\alpha}(0)\left(\frac{\Lambda_{\alpha}}{R}\right)^{t}=M_{\alpha}(0)\left(\frac{|\Lambda_{\alpha}|}{R}\right)^{t}e^{i\theta_{\alpha}t}=M_{\alpha}(0)e^{-t/\tau_{\alpha}}e^{i\theta_{\alpha}t}, \tag{13}\] where we used the definition of \(\tau_{\alpha}\) given in the main text. The decaying term with characteristic time \(\tau_{\alpha}\) is visible. If \(\Lambda_{\alpha}\) is real and positive, then \(\theta_{\alpha}=0\) and the oscillating term vanishes. If \(\Lambda_{\alpha}\) is real and negative, then \(\theta_{\alpha}=\pi\) and the oscillating term becomes an alternating sign: \(e^{i\theta_{\alpha}t}=(-1)^{t}\). This is an oscillation with period \(T_{\alpha}=2\), which is compatible with the definition of \(T_{\alpha}\) given in the main text. Finally, if \(\Lambda_{\alpha}\not\in\mathbb{R}\), then \(\bar{\Lambda}_{\alpha}\) is also an eigenvalue, where the bar denotes complex conjugation. We will call \(\bar{\alpha}\) the index corresponding to that eigenvalue: \(\Lambda_{\bar{\alpha}}=\bar{\Lambda}_{\alpha}\). Also, the projector over the eigenspace of \(\Lambda_{\bar{\alpha}}\) is the elementwise complex conjugate of the projector over the eigenspace of \(\Lambda_{\alpha}\), meaning that \(z_{\bar{\alpha}}=\bar{z}_{\alpha}\), and thus \(M_{\bar{\alpha}}(0)=\bar{M}_{\alpha}(0)\).
Then \(\alpha,\bar{\alpha}\) contribute in pairs, as follows: \[M_{\alpha}(t)+M_{\bar{\alpha}}(t)=e^{-t/\tau_{\alpha}}\left[M_{\alpha}(0)e^{i\theta_{\alpha}t}+\bar{M}_{\alpha}(0)e^{-i\theta_{\alpha}t}\right]=2e^{-t/\tau_{\alpha}}\left|M_{\alpha}(0)\right|\operatorname{Re}e^{i(\theta_{\alpha}t+\phi_{\alpha})}=2e^{-t/\tau_{\alpha}}\left|M_{\alpha}(0)\right|\cos\left(\frac{2\pi}{T_{\alpha}}t+\phi_{\alpha}\right), \tag{14}\] where \(\phi_{\alpha}=\arg M_{\alpha}(0)\). Here we used the definition of \(T_{\alpha}\) given in the main text, explicitly showing the emergence of the oscillating term with period \(T_{\alpha}\).

### Reconstruction of the reproduction operator from data

The main data used for the reconstruction of reproduction operators for mainland France are Meta Colocation Maps [45]. They give the probability \(p_{ij}\) that a randomly chosen person who is a resident of community \(i\) and a randomly chosen person who is a resident of community \(j\) are both located in the same \(600m\times 600m\) square, during a randomly chosen five-minute time window, in a given week. Note that the diagonal elements \(p_{ii}\) quantify the mixing within each community. From these diagonal probabilities we discounted spurious co-location time due to people staying at home in spatially contiguous dwellings, using Movement Range Maps (see Data availability and Supplementary Methods Section 1.2). The data were provided at the resolution of departments (ADM 2). To reconstruct \(\mathbf{R}\) from these data, we assumed that the expected number of secondary cases generated among the residents of community \(i\), by a case who is a resident of community \(j\), is given by \(R_{ij}=Cp_{ij}n_{i}\), where \(n_{i}\) is the population of spatial patch \(i\), and \(C\) is an overall transmissibility parameter. Notably, while the value of the spectral radius of \(\mathbf{R}\) clearly depends on \(C\), the left and right Perron eigenvectors \(\mathbf{v}\) and \(\mathbf{v}^{*}\) do not, and depend solely on the data.

### Epidemic simulations

The model of epidemic spread used in simulations is a stochastic discrete-time metapopulation model whereby spatially distinct communities are linked through mobility [26, 27, 54, 55]. We use a synthetic population based on census data from the National Institute of Statistics and Economic Studies (INSEE) in France. We divide this population into \(94\) spatial communities corresponding to the departments of mainland France except Corsica. Meta Colocation Maps [45] and Movement Range Maps are used to reconstruct the coupling \(p_{ij}\) between communities \(i\) and \(j\) and the within-community mixing \(p_{ii}\). We use a compartmental model of COVID-19 from [56]. We compute the reproduction ratio for our model according to the next generation method [57], obtaining: \[R=\rho(K)\frac{\beta}{\mu}\left(1-p_{sc}+\beta_{I}p_{sc}\right), \tag{15}\] where \(\rho(K)\) is the spectral radius of the matrix \(K_{ij}=p_{ij}n_{i}\), \(n_{i}\) is the population of community \(i\), \(\mu\) is the recovery rate, \(p_{sc}\) is the probability of sub-clinical infections and \(\beta_{I}\) is the factor by which the transmissibility of sub-clinical cases is reduced (see [56]). The other parameters are also taken from [56], and the overall transmission rate \(\beta\) is set so that \(R=1.5\).
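Putting the pieces together, the following end-to-end sketch reconstructs a reproduction operator from synthetic colocation probabilities and populations via \(R_{ij}=Cp_{ij}n_{i}\), computes the Perron dual vector \(\mathbf{v}^{*}\), and compares the standard estimate \(S(t)\) with the corrected estimate \(S^{(v)}(t)\) of equation (7); all input numbers below are made up for illustration and do not correspond to any real colocation data.

```python
# From (synthetic) colocation probabilities and populations to the corrected
# reproduction-ratio estimate S^(v)(t) = R of equation (7).
import numpy as np

rng = np.random.default_rng(4)
N = 10
p = rng.uniform(1e-6, 1e-4, size=(N, N))
p = (p + p.T) / 2                          # colocation probabilities are symmetric
n = rng.integers(10_000, 1_000_000, size=N).astype(float)
C = 50.0                                   # overall transmissibility parameter
Rop = C * p * n[:, None]                   # R_ij = C p_ij n_i

eigvals, eigvecs = np.linalg.eig(Rop.T)    # left eigenvectors of Rop
k = np.argmax(np.abs(eigvals))
R_true = eigvals[k].real                   # true reproduction ratio
v_star = np.abs(eigvecs[:, k].real)        # Perron dual vector (scale-invariant use)

I = rng.uniform(0, 50, size=N)             # arbitrary initial cases
for t in range(5):
    I_next = Rop @ I
    S = I_next.sum() / I.sum()             # standard (possibly biased) estimate
    S_v = (v_star @ I_next) / (v_star @ I) # corrected estimate, equation (7)
    print(f"t={t}  S={S:.4f}  S^(v)={S_v:.4f}  R={R_true:.4f}")
    I = I_next
```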
## Data availability

Meta Colocation Maps and Meta Movement Range Maps, which were used to reconstruct reproduction operators and to infer between- and within-community mixing for stochastic simulations, can be requested at [https://dataforgood.facebook.com/dfg/tools/colocation-maps](https://dataforgood.facebook.com/dfg/tools/colocation-maps) and [https://dataforgood.facebook.com/dfg/tools/movement-range-maps](https://dataforgood.facebook.com/dfg/tools/movement-range-maps), respectively. Hospital admission data in France are available at [https://www.data.gouv.fr](https://www.data.gouv.fr). French census data can be found at [https://www.insee.fr](https://www.insee.fr). All websites accessed June 2023.

## Acknowledgements

Colocation data were available thanks to _Data For Good at Meta_.

## References

* [1] Keeling, M. J. & Rohani, P. _Modeling Infectious Diseases in Humans and Animals_ isbn: 978-0-691-11617-4 (Princeton University Press, Princeton, NJ, USA, 2007). * [2] Nishiura, H. & Chowell, G. in _Mathematical and Statistical Estimation Approaches in Epidemiology_ (eds Chowell, G., Hyman, J. M., Bettencourt, L. M. A. & Castillo-Chavez, C.) 103-121 (Springer Netherlands, Dordrecht, 2009). isbn: 978-90-481-2313-1. * [3] Wallinga, J., van Boven, M. & Lipsitch, M. Optimizing infectious disease interventions during an emerging epidemic. _Proceedings of the National Academy of Sciences_**107,** 923-928 (Jan. 2010). * [4] Ridenhour, B., Kowalik, J. M. & Shay, D. K. Unraveling R0: Considerations for Public Health Applications. _American Journal of Public Health_**108,** S445-S454. issn: 0090-0036 (Dec. 2018). * [5] Thompson, R. N., Gilligan, C. A. & Cunniffe, N. J. Control fast or control smart: When should invading pathogens be controlled? _PLOS Computational Biology_**14,** e1006014. issn: 1553-7358 (Feb. 2018). * [6] Dhillon, R. S., Srikrishna, D. & Chowell, G. Getting to zero in the DR Congo Ebola outbreak. _The Lancet Infectious Diseases_**20,** 395-397. issn: 1473-3099, 1474-4457 (Apr. 2020). * [7] Pan, A. _et al._ Association of Public Health Interventions With the Epidemiology of the COVID-19 Outbreak in Wuhan, China. _JAMA_**323,** 1915-1923. issn: 0098-7484 (May 2020). * [8] Wallinga, J. & Lipsitch, M. How generation intervals shape the relationship between growth rates and reproductive numbers. _Proceedings of the Royal Society B: Biological Sciences_**274,** 599-604. issn: 0962-8452 (Feb. 2007). * [9] Davoudi, B. _et al._ Early Real-Time Estimation of the Basic Reproduction Number of Emerging Infectious Diseases. _Physical Review X_**2,** 031005 (July 2012). * [10] Obadia, T., Haneef, R. & Boelle, P.-Y. The R0 package: a toolbox to estimate reproduction numbers for epidemic outbreaks. _BMC Medical Informatics and Decision Making_**12,** 147. issn: 1472-6947 (Dec. 2012). * [11] Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A New Framework and Software to Estimate Time-Varying Reproduction Numbers During Epidemics. _American Journal of Epidemiology_**178,** 1505-1512. issn: 0002-9262 (Nov. 2013). * [12] Thompson, R. N. _et al._ Improved inference of time-varying reproduction numbers during infectious disease outbreaks. _Epidemics_**29,** 100356. issn: 1755-4365 (Dec. 2019). * [13] Biggerstaff, M., Cauchemez, S., Reed, C., Gambhir, M. & Finelli, L.
Estimates of the reproduction number for seasonal, pandemic, and zoonotic influenza: a systematic review of the literature. _BMC Infectious Diseases_**14,** 480. issn: 1471-2334 (Sept. 2014). * [14] Thompson, R., Wood, J. G., Tempia, S. & Muscatello, D. J. Global variation in early epidemic growth rates and reproduction number of seasonal influenza. _International Journal of Infectious Diseases_**122,** 382-388. issn: 1201-9712 (Sept. 2022). * [15] Guerra, F. M. _et al._ The basic reproduction number (R0) of measles: a systematic review. _The Lancet Infectious Diseases_**17,** e420-e428. issn: 1473-3099, 1474-4457 (Dec. 2017). * [16] Li, Y. _et al._ The temporal association of introducing and lifting non-pharmaceutical interventions with the time-varying reproduction number (R) of SARS-CoV-2: a modelling study across 131 countries. _The Lancet Infectious Diseases_**21,** 193-202. issn: 1473-3099, 1474-4457 (Feb. 2021). * [17] Maganga, G. D. _et al._ Ebola Virus Disease in the Democratic Republic of Congo. _New England Journal of Medicine_**371,** 2083-2091. issn: 0028-4793 (Nov. 2014). * [18] Mukandavire, Z. _et al._ Estimating the reproductive numbers for the 2008-2009 cholera outbreaks in Zimbabwe. _Proceedings of the National Academy of Sciences_**108,** 8767-8772. issn: 0027-8424, 1091-6490 (May 2011). * [19] Codeco, C. T., Villela, D. A. M. & Coelho, F. C. Estimating the effective reproduction number of dengue considering temperature-dependent generation intervals. _Epidemics_**25,** 101-111. issn: 1755-4365 (Dec. 2018). * [20] Routledge, I. _et al._ Estimating spatiotemporally varying malaria reproduction numbers in a near elimination setting. _Nature Communications_**9,** 2476. issn: 2041-1723 (June 2018). * [21] _Introducing a coherent European framework for tuning COVID-19 response measures_ (Mar. 2021). * [22] Hufnagel, L., Brockmann, D. & Geisel, T. Forecast and control of epidemics in a globalized world. _Proceedings of the National Academy of Sciences_**101,** 15124-15129 (Oct. 2004). * [23] Balcan, D. & Vespignani, A. Phase transitions in contagion processes mediated by recurrent mobility patterns. _Nature Physics_**7,** 581-586. issn: 1745-2481 (July 2011). * [24] Pastor-Satorras, R., Castellano, C., Van Mieghem, P. & Vespignani, A. Epidemic processes in complex networks. _Reviews of Modern Physics_**87,** 925-979. issn: 15390756 (2015). * [25] Soriano-Panos, D., Lotero, L., Arenas, A. & Gomez-Gardenes, J. Spreading Processes in Multiplex Metapopulations Containing Different Mobility Networks. _Physical Review X_**8,** 031039 (Aug. 2018). * [26] Gomez-Gardenes, J., Soriano-Panos, D. & Arenas, A. Critical regimes driven by recurrent mobility patterns of reaction-diffusion processes in networks. _Nature Physics_**14,** 391-395. issn: 1745-2481 (Apr. 2018). * [27] Chang, S. _et al._ Mobility network models of COVID-19 explain inequities and inform reopening. _Nature_**589,** 82-87. issn: 1476-4687 (Nov. 2020). * [28] Coletti, P., Poletto, C., Turbelin, C., Blanchon, T. & Colizza, V. Shifting patterns of seasonal influenza epidemics.
_Scientific Reports_**8,** 12786. issn: 2045-2322 (Aug. 2018). * [29] Scarpino, S. V. & Petri, G. On the predictability of infectious disease outbreaks. _Nature Communications_**10,** 898. issn: 2041-1723 (Feb. 2019). * [30] Castro, M., Ares, S., Cuesta, J. A. & Manrubia, S. The turning point and end of an expanding epidemic cannot be precisely forecast. _Proceedings of the National Academy of Sciences_**117,** 26190-26196. issn: 0027-8424, 1091-6490 (Oct. 2020). * [31] Li, Y. & Nair, H. Trends in the global burden of lower respiratory infections: the knowns and the unknowns. _The Lancet Infectious Diseases_**22,** 1523-1525. issn: 1473-3099, 1474-4457 (Nov. 2022). * [32] Messina, J. P. _et al._ The current and future global distribution and population at risk of dengue. _Nature Microbiology_**4,** 1508-1515. issn: 2058-5276 (Sept. 2019). * [33] Romanello, M. _et al._ The 2022 report of the Lancet Countdown on health and climate change: health at the mercy of fossil fuels. _The Lancet_**400,** 1619-1654. issn: 0140-6736, 1474-547X (Nov. 2022). * [34] Carlson, C. J. _et al._ Climate change increases cross-species viral transmission risk. _Nature_**607,** 555-562. issn: 1476-4687 (July 2022). * [35] Watson, H. W. & Galton, F. On the Probability of the Extinction of Families. _The Journal of the Anthropological Institute of Great Britain and Ireland_**4,** 138-144. issn: 09595295 (1875). * [36] Lloyd-Smith, J. O., Schreiber, S. J., Kopp, P. E. & Getz, W. M. Superspreading and the effect of individual variation on disease emergence. _Nature_**438,** 355-359. issn: 1476-4687 (Nov. 2005). * [37] Hellewell, J. _et al._ Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts. _The Lancet Global Health_**8,** e488-e496. issn: 2214-109X (Apr. 2020). * [38] Kucharski, A. J. _et al._ Effectiveness of ring vaccination as control strategy for Ebola virus disease. _Emerging infectious diseases_**22,** 105 (2016). * [39] _The Lancet Regional Health - Europe_**28,** 100614. issn: 2666-7762 (May 2023). * [40] Susswein, Z. _et al._ Ignoring spatial heterogeneity in drivers of SARS-CoV-2 transmission in the US will impede sustained elimination. _medRxiv._ eprint: [https://www.medrxiv.org/content/early/2021/08/10/2021.08.09.21261807.full.pdf](https://www.medrxiv.org/content/early/2021/08/10/2021.08.09.21261807.full.pdf) (2021). * [41] Mazzoli, M., Valdano, E. & Colizza, V. Projecting the COVID-19 epidemic risk in France for the summer 2021. _Journal of Travel Medicine_**28.** issn: 1708-8305 (Oct. 2021). * [42] Jourdain, F. _et al._ From importation to autochthonous transmission: Drivers of chikungunya and dengue emergence in a temperate area. _PLOS Neglected Tropical Diseases_**14,** e0008320. issn: 1935-2735 (2020). * [43] Diekmann, O., Heesterbeek, J. A. P. & Metz, J. A. J. On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations. _Journal of Mathematical Biology_**28,** 365-382. issn: 1432-1416 (June 1990). * [44] Horn, R. A. & Johnson, C. R.
_Matrix Analysis_ isbn: 0-521-38632-2 (Cambridge University Press, 1990). * [45] Iyer, S. _et al._ Large-scale measurement of aggregate human colocation patterns for epidemiological modeling. _Epidemics_**42,** 100663. issn: 1755-4365 (2023). * [46] White, L. F., Archer, B. & Pagano, M. Estimating the reproductive number in the presence of spatial heterogeneity of transmission patterns. _International Journal of Health Geographics_**12,** 35. issn: 1476-072X (July 2013). * [47] Trevisin, C. _et al._ Spatially explicit effective reproduction numbers from incidence and mobility data. _Proceedings of the National Academy of Sciences_**120,** e2219816120 (May 2023). * [48] Egleston, P. D., Lenker, T. D. & Narayan, S. K. The nonnegative inverse eigenvalue problem. _Linear Algebra and its Applications. Special Issue on the Tenth ILAS Conference (Auburn, 2002)_**379,** 475-490. issn: 0024-3795 (Mar. 2004). * [49] Kellogg, R. B. & Stephens, A. B. Complex eigenvalues of a non-negative matrix with a specified graph. _Linear Algebra and its Applications_**20,** 179-187. issn: 0024-3795 (Jan. 1978). * [50] Torre-Mayo, J., Abril-Raymundo, M. R., Alarcia-Estevez, E., Marijuan, C. & Pisonero, M. The nonnegative inverse eigenvalue problem from the coefficients of the characteristic polynomial. EBL digraphs. _Linear Algebra and its Applications_**426,** 729-773. issn: 0024-3795 (Oct. 2007). * [51] Schneider, C. M., Belik, V., Couronne, T., Smoreda, Z. & Gonzalez, M. C. Unravelling daily human mobility motifs. _Journal of the Royal Society Interface_**10.** issn: 17425662 (2013). * [52] Sante Publique France. _COVID-19 : point epidemiologique du 11 fevrier 2021_ (Feb. 2021). * [53] Di Domenico, L., Sabbatini, C. E., Pullano, G., Levy-Bruhl, D. & Colizza, V. Impact of January 2021 curfew measures on SARS-CoV-2 B.1.1.7 circulation in France. _Eurosurveillance_**26,** 2100272. issn: 1560-7917 (Apr. 2021). * [54] Colizza, V. & Vespignani, A. Epidemic modeling in metapopulation systems with heterogeneous coupling pattern: Theory and simulations. _Journal of Theoretical Biology_**251,** 450-467. issn: 0022-5193 (2008). * [55] Balcan, D. _et al._ Multiscale mobility networks and the spatial spreading of infectious diseases. _Proceedings of the National Academy of Sciences_**106,** 21484-21489 (Dec. 2009). * [56] Faucher, B. _et al._ Agent-based modelling of reactive vaccination of workplaces and schools against COVID-19. _Nature Communications_**13,** 1414 (Mar. 2022). * [57] Diekmann, O., Heesterbeek, J. & Roberts, M. The construction of next-generation matrices for compartmental epidemic models. _Journal of the Royal Society, Interface / the Royal Society_**7,** 873-85 (Nov. 2009).

## Supplementary information

Supplementary Methods, Supplementary Figure 1 and Supplementary References 1-3.

**Supplementary Information for** **Estimates of the reproduction ratio from epidemic surveillance may be biased in spatially structured populations**

Piero Birello1,+, Michele Re Fiorentin2, Boxuan Wang1, Vittoria Colizza1, and Eugenio Valdano1,*.

## Supplementary Methods

### Estimate of incidence from hospital admissions data

We estimate the incidence of COVID-19 infections (number of new infections per time interval per unit of population) in France from hospital admission data [1, 2]. Analogously to what we did in Ref.
[2], we reconstruct incident infections \(I(t)\) from incident hospitalizations \(H(t)\) as follows: \(I(t)=H(t+7)/0.032\), where \(0.032\) is the average fraction of hospitalizations per infectious case and \(7\) is the average time from infection to hospitalization [3]. Hospitalization data come from the French Public Health Authority (_Sante Publique France_) and are accessible at [https://www.data.gouv.fr](https://www.data.gouv.fr).

### Correction to within-community Colocation Maps

In the case of densely populated spatial patches, home-staying co-locations are likely to consistently increase the within-community mixing measured by Meta Colocation Maps [4]. Analogously to what we did in Ref. [2], we hence correct the diagonal \(\mathbf{p}\) entries using Meta Movement Range Maps (see Data availability). Movement Range Maps give the average fraction \(sp_{i}\) of residents of community \(i\) that do not leave a given \(600m\times 600m\) tile for the whole day. To be precise, data points include observations from 8 pm to 7:59 pm of the next day in local time. The probability \(p_{ii}^{(sp)}\) of observing home-staying co-locations in community \(i\) is given by the ratio between the number of observed co-locations in that community and all possible co-locations in it. In turn, the number of observed co-locations in \(i\) is given by the pairs of people remaining home in each tile times the number of tiles in that patch. Then: \[p_{ii}^{(sp)}=\frac{(sp_{i}d_{i}A)(sp_{i}d_{i}A-1)m_{i}}{n_{i}(n_{i}-1)}\,, \tag{1}\] where \(d_{i}\) is the population density in community \(i\), \(A=0.36\)\(km^{2}\) is the area of a single tile, \(m_{i}\) is the number of tiles occupied by community \(i\) and \(n_{i}\) is \(i\)'s population. We subtract \(p_{ii}^{(sp)}\) from \(p_{ii}\) for each \(i\).

### On the applicability of the Perron-Frobenius theorem

The Perron-Frobenius theorem as used in the main paper requires that the matrix be strictly positive (\(R_{ij}>0\)), or nonnegative (\(R_{ij}\geq 0\)) and irreducible. In our case \(\mathbf{R}\) may not be strictly positive if, for some \(i,j\), cases from \(j\) generate no cases in \(i\), so we shall prove here that it is irreducible, or that it can be made irreducible. A nonnegative matrix is irreducible if and only if its associated directed graph is strongly connected [5]. The associated graph of \(\mathbf{R}\) is the one which has a link between nodes \(i,j\) if \(R_{ij}>0\). In general, a suitable permutation of the node indices will bring \(\mathbf{R}\) to the following form, which mirrors the general bow-tie structure of the associated directed graph: \[\mathbf{R}=\left(\begin{array}{c|c|c}\mathbf{T}_{u}&0&0\\ \hline\mathbf{B}_{1}&\mathbf{R}_{scc}&0\\ \hline\mathbf{B}_{2}&\mathbf{B}_{3}&\mathbf{T}_{d}\end{array}\right), \tag{2}\] where the blocks \(\mathbf{T}_{u},\mathbf{T}_{d}\) are lower triangular and \(\mathbf{R}_{scc}\) is the adjacency submatrix of the strongly connected component. The spectrum of \(\mathbf{R}\) is then the union of the diagonal elements of \(\mathbf{T}_{u},\mathbf{T}_{d}\) and the spectrum of \(\mathbf{R}_{scc}\). Now three options are possible. First, if \(R\), the spectral radius of \(\mathbf{R}\) and true reproduction ratio, is among the diagonal elements of \(\mathbf{T}_{d}\), this means that there is one community that sustains the epidemic and at most exports cases to other sink communities (remember \(\mathbf{T}_{d}\) is lower triangular), so it is a trivial case with no actual epidemic dynamics between communities.
Second, if \(R\) belongs to the spectrum of \(\mathbf{R}_{scc}\), then we shall write the Perron eigenvector in blocks as follows: \[\mathbf{v}=\left(\begin{array}{c}\mathbf{v}_{u}\\ \hline\mathbf{v}_{scc}\\ \hline\mathbf{v}_{d}\end{array}\right). \tag{3}\] If we write the eigenvector equation \(\mathbf{R}\mathbf{v}=R\mathbf{v}\) by blocks, on the top block we have \(\mathbf{T}_{u}\mathbf{v}_{u}=R\mathbf{v}_{u}\), whose only solution is \(\mathbf{v}_{u}=0\), as \(R\) is not an eigenvalue of \(\mathbf{T}_{u}\). This means \(\mathbf{R}_{scc}\mathbf{v}_{scc}=R\mathbf{v}_{scc}\), and \(\mathbf{v}_{d}=\left(R-\mathbf{T}_{d}\right)^{-1}\mathbf{B}_{3}\mathbf{v}_{scc}\), the matrix \(R-\mathbf{T}_{d}\) being nonsingular because \(R\) is not an eigenvalue of \(\mathbf{T}_{d}\). The dynamics is thus completely determined by the strongly connected component, and we can restrict our study to \(\mathbf{R}_{scc}\), which represents by definition a strongly-connected graph and as such is irreducible, proving our initial claim. Finally, if \(R\) is among the diagonal elements of \(\mathbf{T}_{u}\), again this means that there is one community that generates cases and exports them, possibly through several steps, to the strongly-connected component. Again, this seeding part is trivial and underlies no actual epidemic dynamics between communities, so that again we can restrict our study to \(\mathbf{R}_{scc}\).

### Generation interval and EpiEstim settings

The generation interval is the time between infection and subsequent transmission to another individual. Estimating \(R\) through the R-package EpiEstim requires feeding it the generation interval distribution associated with the disease [4]. We obtain here the generation interval for our epidemic model. The compartmental model we use (see Methods and Ref. [6]) has a rate of transition from the E to the I compartment (\(\epsilon\)) and a recovery rate \(\mu\). We also define an effective transmissibility \(\mu R\) (see Eq. (15) of main paper). Let \(\tau\) be the generation time. Let the probability that a transmission event occurs exactly after \(\tau\) has passed since primary infection be \(P(\tau)d\tau\). Also, let \(\tau_{E}\) be the time one stays in the E compartment: \(\tau_{E}\sim\text{Exp}(\epsilon)\), and \(\tau_{I}\) be the time one stays in the I compartment: \(\tau_{I}\sim\text{Exp}(\mu)\). Conditioning on \(\tau_{E},\tau_{I}\), we have \[P(\tau|\tau_{E},\tau_{I})d\tau=\mu R\,d\tau\,\theta(\tau-\tau_{E})\theta(\tau_{I}+\tau_{E}-\tau), \tag{4}\] where \(\theta\) is the Heaviside function. So now we marginalize and get the generation time distribution \(P(\tau)\): \[P(\tau)=\int_{0}^{\infty}d\tau_{E}d\tau_{I}P(\tau|\tau_{E},\tau_{I})P(\tau_{E})P(\tau_{I})=\frac{\mu\epsilon R}{\mu-\epsilon}\left(e^{-\epsilon\tau}-e^{-\mu\tau}\right). \tag{5}\] Note that a discrete distribution is required by the EpiEstim package. We choose to compute it over 50 bins in the interval \([0,49]\). The time window over which to estimate \(R\) is set to be a week. In all simulations, we assign the estimate for \(R\) returned for the week interval \([t,t+6]\) to the day \(t+3\). Also, given the smaller precision of early estimates, as reported in the documentation [4], we arbitrarily choose not to plot EpiEstim points associated with the first two weeks from the start of the synthetic epidemic.
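As an illustration of the discretization step above, the sketch below evaluates the generation-interval density of equation (5) of this Supplement (dropping the overall factor \(R\), so that the density integrates to one) over 50 daily bins; the rates used here are placeholders, not the parameter values of Ref. [6].

```python
# Discretizing the generation-interval density of equation (5) over 50 daily
# bins, as required by the EpiEstim package.
import numpy as np

eps, mu = 1 / 3.0, 1 / 4.0            # illustrative E->I and recovery rates

def gen_density(tau):
    # P(tau) / R = mu*eps/(mu - eps) * (exp(-eps*tau) - exp(-mu*tau)),
    # which integrates to one over [0, infinity).
    return mu * eps / (mu - eps) * (np.exp(-eps * tau) - np.exp(-mu * tau))

tau = np.arange(50, dtype=float)      # bins covering the interval [0, 49]
w = gen_density(tau)
w /= w.sum()                          # renormalize the discretized weights
print(w[:10])                         # weights to feed to EpiEstim
```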
## Supplementary Figure

**Fig. 1 | Decay time and oscillation period of the modes of \(\Delta=S-R\) for selected European countries.** The color shows, for each country, the value \(\max_{\alpha}\tau_{\alpha}/T_{\alpha}\), where \(\tau_{\alpha},T_{\alpha}\) are defined in the main paper and are, respectively, the decay time and the oscillation period of the \(\alpha\)-th mode. 32 countries are included: 24 members of the European Union (excluding Cyprus, Ireland and Latvia for lack of data) plus Albania, Bosnia and Herzegovina, Iceland, Montenegro, Norway, Serbia, Sweden, UK (see Data availability). Countries with real positive eigenvalues only are colored in white (\(T_{\alpha}=\infty\)); countries not included are in gray.
2303.16317
Operator learning with PCA-Net: upper and lower complexity bounds
PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction: First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified, and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of a "curse of parametric complexity", an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems. In addition to these lower bounds, upper complexity bounds are finally derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse for specific operators of interest, arising from the Darcy flow and the Navier-Stokes equations.
Samuel Lanthaler
2023-03-28T21:27:36Z
http://arxiv.org/abs/2303.16317v5
# Operator learning with PCA-Net: upper and lower complexity bounds ###### Abstract PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction: First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified, and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of the curse of dimensionality. In addition to these lower bounds, upper complexity bounds are derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse of dimensionality for specific operators of interest, arising from the Darcy flow and the Navier-Stokes equations. ## 1 Introduction The application of neural networks [14] to computational science and engineering is receiving growing interest. At their core, many problems of scientific interest involve the approximation of an underlying operator, which defines a mapping between two infinite-dimensional spaces of functions. _Neural operators_ [2, 3, 28, 20] are a generalization of neural networks to such an infinite-dimensional setting. They aim to approximate, or "learn", operators from data given in the form of input and output pairs. Neural operators hold promise as surrogate models to accelerate and complement traditional numerical methods in many-query problems, and they can be used for data-driven discovery of the underlying input-output map, even when no mathematical model is available. Recent years have seen the emergence of several neural operator architectures. This includes deep operator networks (DeepONet) [28], building on early work on operator learning in [7]; we also mention subsequent extensions of DeepONets, for example [17, 38, 45, 24, 36]. DeepONets have been deployed with success in a variety of applications [12, 30, 5]. Another popular approach is based on a class of neural operators introduced in [2, 20]. Here, neural operators are defined in close analogy with conventional neural networks, where the weight matrices in the hidden layers are generalized to integral operators. Special cases of this framework include the graph neural operator [2, 26] and the Fourier neural operator (FNO) [27]. In this context, we also mention related frameworks in [39, 46, 15]. Another notable, and somewhat distinct, approach to operator learning is the operator-valued random feature model proposed in [33]. Universal approximation results for many of these frameworks are known in a variety of settings: the universality of DeepONets is established in [7, 28, 23], FNOs are shown to be universal in [19], and another universal approximation result for neural operators is derived in [20].
We also mention recent work [22], which proves a general universal approximation result for the so-called "averaging neural operator" (ANO), a minimal architecture that is at the core of many other frameworks, thereby making it possible to unify much of the analysis of this emerging zoo of neural operator architectures. Going beyond universality, it is crucial to improve our understanding of the required computational complexity of neural operators in order to assess when the methods will be effective. Pertinent numerical experiments may be found in [10]. Relevant analysis of linear problems from this point of view has been given in [4, 11]. The required complexity of neural network-based methods for specific PDE operators of interest is studied from an approximation-theoretic point of view in, e.g., [19, 44, 21, 23, 43]. A focus of these papers is on beating the _"curse of dimensionality"_. Since the input and output spaces are infinite-dimensional in these problems, clarification may be needed as to the meaning of beating the curse of dimensionality: it is interpreted as identifying conditions under which the required size (number of tunable parameters) of the operator approximation grows only algebraically with the inverse of the desired error. The present work will provide further clarification and motivation for the use of this term in the context of operator learning. The focus of the present work is the so-called _PCA-Net_, a methodology which combines ideas from principal component analysis with neural networks [16, 3]. Principal component analysis (PCA) is a standard tool for dimension reduction in high-dimensional statistics and unsupervised learning [18]. In [3], a combination of PCA with neural networks has been proposed as a data-driven operator learning framework. As indicated above, the goal of operator learning is to approximate an unknown operator \(\Psi^{\dagger}:\mathcal{X}\rightarrow\mathcal{Y}\), mapping between two infinite-dimensional spaces \(\mathcal{X}\) and \(\mathcal{Y}\). Given data in the form of pairs of inputs and outputs, we seek to determine an accurate, data-driven approximation of \(\Psi^{\dagger}\). The PCA-Net operator learning architecture achieves this goal by (i) using PCA to reduce the dimensions of the input and output spaces and (ii) approximating a map between the resulting finite-dimensional latent spaces [3]. A first analysis, including a universal approximation result, has been derived in [3]. Furthermore, in the same work, the efficacy of the proposed architecture has been demonstrated empirically for prototypical problems, including the solution operator of the viscous Burgers equation and the Darcy flow equation. However, so far, a detailed mathematical analysis providing a theoretical underpinning for this empirically observed efficiency of PCA-Net has been outstanding. The present work fills this gap by developing relevant approximation theory for PCA-Net. The main contributions of this paper are the following: * **Universal approximation:** We prove a novel universal approximation theorem for PCA-Net, Theorem 3.1, under significantly relaxed conditions on the distribution of the data-generating measure and the underlying operator \(\Psi^{\dagger}\) compared to previous work; the universality of PCA-Net is here shown under natural minimal conditions, which are in fact necessary for PCA to be well-defined on the input and output spaces.
* **Curse of dimensionality:** A rigorous result is proven which demonstrates that the curse of dimensionality cannot be overcome by PCA-Net in general (cp. Theorem 3.3); more precisely, this result shows that it is impossible to derive algebraic complexity bounds when considering general classes of operators, such as the class of all Lipschitz- or even \(\mathcal{C}^{k}\)-continuous operators. Hence, we conclude that at this level of generality, the curse of dimensionality is unavoidable. * **Overcoming the curse of dimensionality:** Given the negative result on the general curse of dimensionality, we argue that a central challenge in operator learning is to identify the relevant class of operators which _do allow_ for efficient approximation by a given operator learning framework. To gain further insight into the relevant mathematical structure that can be leveraged by PCA-Net, we restrict attention to two prototypical PDE operators of interest arising from the Darcy flow and Navier-Stokes equations. In both cases, we show that PCA-Net can overcome the general curse of dimensionality; algebraic error and complexity estimates are established in Theorems 3.9 and 3.15, demonstrating that these operators belong to a restricted class which is efficiently approximated by PCA-Net. ### Overview In section 2, relevant background on PCA and the PCA-Net methodology is provided: first, PCA and empirical PCA are reviewed in section 2.1, and two error estimates for the PCA projection error are stated in section 2.2. Next, it is explained how PCA-Net combines PCA with a neural network, resulting in an operator learning architecture. In Section 3, we develop approximation theory for PCA-Net. A new universal approximation result for PCA-Net is derived in section 3.1. In section 3.2, two potential obstacles to the efficacy of PCA-Net are identified through rigorous lower complexity bounds. Upper complexity bounds are the subject of the remaining sections: In section 3.3, a smoothness criterion is shown to rule out the first potential obstacle to efficient operator learning. We then finally show, in section 3.4, how PCA-Net can overcome the curse of dimensionality for two prototypical PDE operators arising in the context of the Darcy flow and Navier-Stokes equations, respectively. Conclusions and perspectives for future work are summarized in Section 4. ### Notation Throughout the following discussion, \(\mathcal{H},\mathcal{X},\mathcal{Y}\) denote separable Hilbert spaces. We will use \(\|\cdot\|_{\mathcal{H}}\) and \(\langle\,\cdot\,,\,\cdot\,\rangle_{\mathcal{H}}\) to denote the norm and inner product on \(\mathcal{H}\); if it is clear from the context, we may occasionally omit the subscript to aid readability. On finite-dimensional Euclidean spaces, \(|\cdot|\) is used for the Euclidean norm, and \(|\cdot|_{\infty}\) denotes the maximum-norm. The space of probability measures on \(\mathcal{H}\) is denoted \(\mathcal{P}(\mathcal{H})\). We denote by \(u\sim\mu\) a random variable distributed according to probability measure \(\mu\). \(\mathbb{E}_{u\sim\mu}[F(u)]\) denotes the expectation of \(F\) with respect to \(\mu\). We consistently use \(\Psi^{\dagger}\) to denote the underlying (truth) operator, and \(\Psi\) will denote a (PCA-Net) approximation of \(\Psi^{\dagger}\). For two numbers \(a,b\), we will write \(a\sim b\) for equivalence up to constants, i.e. there exists \(C>0\) such that \(C^{-1}a\leq b\leq Ca\). Similarly, \(a\lesssim b\) and \(a\gtrsim b\) denote inequality up to a constant, i.e.
\(a\leq Cb\) and \(Ca\geq b\), respectively. On occasion, we write a subscript \(a\lesssim_{k}b\) to emphasize the dependence of the implied constant \(C=C(k)\) on a given parameter \(k\). We follow the convention that constants in estimates can change their value from line to line; their dependence on the relevant parameters will always be indicated. Other notation is introduced as needed. ## 2 PCA-Net methodology PCA-Net combines principal component analysis (PCA) for dimension reduction of the input and output spaces, with a neural network mapping between the resulting finite-dimensional latent spaces. Before summarizing PCA-Net, we first review PCA in subsection 2.1, and derive two high-probability estimates for the PCA projection error in subsection 2.2. Next, we summarize how PCA-Net combines PCA with a neural network, resulting in an operator learning architecture in subsection 2.3. ### Principal Component Analysis In the present section, we provide necessary background material on PCA, and we prove a high-probability estimate for the PCA projection error, building on previous results [40, 32]. **PCA.** Given a Hilbert space \(\mathcal{H}\), a probability measure \(\mu\) on \(\mathcal{H}\), and a projection dimension \(d\), PCA aims to minimize the average reconstruction error \(\mathbb{E}_{u\sim\mu}[\|u-Pu\|^{2}]\) over the set \(\Pi_{d}\) of orthogonal projections \(P:\mathcal{H}\rightarrow\mathcal{H}\) of rank \(d\). It is well-known (e.g. [18, 3, 40]) that this can be achieved by considering the covariance operator \(\Sigma=\mathbb{E}_{u\sim\mu}[u\otimes u]\), which is diagonalizable; i.e., there exists a sequence \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq 0\) of eigenvalues and corresponding orthonormal basis of eigenvectors \(\phi_{1},\phi_{2},\cdots\in\mathcal{H}\), such that \(\Sigma\phi_{j}=\lambda_{j}\phi_{j}\) for all \(j\). The optimal PCA projection of dimension \(d\), \(P_{\leq d}:\mathcal{H}\rightarrow\mathcal{H}\), can then be written as a composition \(P_{\leq d}=\mathcal{D}_{\mathcal{H}}^{\mathrm{opt}}\circ\mathcal{E}_{\mathcal{H}}^{\mathrm{opt}}\), where the optimal PCA encoder \(\mathcal{E}_{\mathcal{H}}^{\mathrm{opt}}\) is given by, \[\mathcal{E}_{\mathcal{H}}^{\mathrm{opt}}:\mathcal{H}\rightarrow\mathbb{R}^{d},\quad\mathcal{E}_{\mathcal{H}}^{\mathrm{opt}}(u):=(\langle u,\phi_{1}\rangle,\ldots,\langle u,\phi_{d}\rangle), \tag{2.1}\] and the corresponding PCA decoder \(\mathcal{D}_{\mathcal{H}}^{\mathrm{opt}}\) is the mapping, \[\mathcal{D}_{\mathcal{H}}^{\mathrm{opt}}:\mathbb{R}^{d}\rightarrow\mathcal{H},\quad\mathcal{D}_{\mathcal{H}}^{\mathrm{opt}}(\eta)=\sum_{j=1}^{d}\eta_{j}\phi_{j}. \tag{2.2}\] It can be shown that \(P_{\leq d}\) defines an orthogonal projection on \(\mathcal{H}\), such that \[\mathbb{E}_{u\sim\mu}[\|u-P_{\leq d}u\|_{\mathcal{H}}^{2}]=\min_{P\in\Pi_{d}}\mathbb{E}_{u\sim\mu}[\|u-Pu\|_{\mathcal{H}}^{2}].\] In the following we denote by \[\mathcal{R}_{d}^{\mathrm{opt}}(\mu):=\min_{P\in\Pi_{d}}\mathbb{E}_{u\sim\mu}[\|u-Pu\|_{\mathcal{H}}^{2}], \tag{2.3}\] this optimal PCA projection error. One can show, e.g. [23, Thm. 3.8], that \(\mathcal{R}_{d}^{\mathrm{opt}}(\mu)\) is related to the PCA eigenvalues by \[\mathcal{R}_{d}^{\mathrm{opt}}(\mu)=\sum_{j>d}\lambda_{j}. \tag{2.4}\] **Empirical PCA.** Empirical PCA applies the above procedure to the empirical distribution \(\mu_{N}=\frac{1}{N}\sum_{k=1}^{N}\delta_{u_{k}}\), obtained by sampling from \(\mu\): Given a finite number of independent and identically distributed (i.i.d.)
samples \(u_{1},\ldots,u_{N}\stackrel{{ iid}}{{\sim}}\mu\), define the covariance operator \(\Sigma_{N}\), by \[\Sigma_{N}=\frac{1}{N}\sum_{k=1}^{N}u_{k}\otimes u_{k}.\] Let \(\widehat{\lambda}_{1}\geq\widehat{\lambda}_{2}\geq\cdots\geq 0\) and \(\widehat{\phi}_{1},\widehat{\phi}_{2},\cdots\in\mathcal{H}\) denote the eigenvalues and the corresponding orthonormal eigenbasis of \(\Sigma_{N}\). The empirical PCA projection \(\widehat{P}_{\leq d}\) of dimension \(d\) is given by \(\widehat{P}_{\leq d}=\mathcal{D}_{\mathcal{H}}\circ\mathcal{E}_{\mathcal{H}}\), where \(\mathcal{E}_{\mathcal{H}}\), \(\mathcal{D}_{\mathcal{H}}\) are defined as in (2.1), (2.2), but replacing the eigenvectors \(\phi_{j}\) by their empirical counterparts \(\widehat{\phi}_{j}\). ### Projection error of empirical PCA We first note that the empirical PCA projection error approximates the optimal projection error, provided that a sufficient amount of data is available. We state the following result in high probability. **Proposition 2.1**.: Let \(\mathcal{H}\) be a separable Hilbert space. Let \(\mu\in\mathcal{P}(\mathcal{H})\) be a probability measure with finite second moments, \(\mathbb{E}_{u\sim\mu}[\|u\|_{\mathcal{H}}^{2}]<\infty\). Then for any \(\delta,\epsilon>0\), there exists a requisite amount of data \(N_{0}=N_{0}(\mu,d,\delta,\epsilon)\), such that the encoding error for empirical PCA with dimension \(d\), and based on \(N\geq N_{0}\) samples \(u_{1},\ldots,u_{N}\stackrel{{ iid}}{{\sim}}\mu\), satisfies \[\mathbb{E}_{u\sim\mu}\left[\|u-\mathcal{D}_{\mathcal{H}}\circ\mathcal{E}_{\mathcal{H}}(u)\|_{\mathcal{H}}^{2}\right]\leq\mathcal{R}_{d}^{\mathrm{opt}}(\mu)+\epsilon, \tag{2.5}\] with probability at least \(1-\delta\). \(\Diamond\) The proof of Proposition 2.1 relies on a well-known bound on the excess risk for empirical PCA in terms of the Hilbert-Schmidt distance \(\|\Sigma-\Sigma_{N}\|_{2}\) of the true and empirical covariance operators [40, e.g. Sect. 2.2]. This is combined with a general Monte-Carlo estimate to prove Proposition 2.1. We present the details in Appendix A. The result of Proposition 2.1 is purely qualitative, as it does not give any estimate of the required amount of data \(N\). Our next goal is to establish a _quantitative_ bound, under additional assumptions on \(\mu\). We will call a probability measure \(\mu\in\mathcal{P}(\mathcal{H})\) **sub-Gaussian**, if there exists \(K_{\mu}\geq 0\), such that \[\mathbb{E}_{u\sim\mu}\left[\|u\|_{\mathcal{H}}^{p}\right]^{1/p}\leq K_{\mu}\sqrt{p},\quad\forall\,p\geq 1. \tag{2.6}\] According to this definition (2.6), \(\mu\) is sub-Gaussian if the real-valued random variable \(\|u\|_{\mathcal{H}}\), with \(u\sim\mu\), is sub-Gaussian in the conventional sense. The moment bound (2.6) is one of many equivalent characterizations of real-valued sub-Gaussian random variables (see e.g. [47, Sect. 2.5.1]). We then have: **Proposition 2.2**.: Let \(\mathcal{H}\) be a separable Hilbert space, and let \(\mu\) be a sub-Gaussian probability measure on \(\mathcal{H}\). Fix \(\delta\in(0,1/2)\). The encoding error for empirical PCA with dimension \(d\), and based on \(N\geq\log(2/\delta)\) samples \(u_{1},\ldots,u_{N}\stackrel{{ iid}}{{\sim}}\mu\), satisfies the following upper bound, \[\mathbb{E}_{u\sim\mu}\left[\|u-\mathcal{D}_{\mathcal{H}}\circ\mathcal{E}_{\mathcal{H}}(u)\|_{\mathcal{H}}^{2}\right]\leq\mathcal{R}_{d}^{\mathrm{opt}}(\mu)+\sqrt{\frac{Qd\log(2/\delta)}{N}}, \tag{2.7}\] with probability at least \(1-\delta\).
Here, \(Q=Q(K_{\mu})\) depends only on the constant \(K_{\mu}\) in (2.6). \(\Diamond\) Proposition 2.2 above is a natural high-probability analogue of a previous result on empirical PCA from [3], derived in expectation in that work. Proposition 2.2 uses the same bound on the PCA excess risk as Proposition 2.1, but combines it with a general Bernstein concentration bound for \(\mathcal{H}\)-valued random variables to derive quantitative rates. We include the details in Appendix A. _Remark 2.3_.: We note that under more fine-grained information on the underlying measure \(\mu\), considerable improvements to the upper bound of Proposition 2.2 are possible; this has e.g. been achieved in [32] for a different notion of a "sub-Gaussian" distribution, requiring that \[\sup_{p\in\mathbb{N}}\frac{\mathbb{E}_{u\sim\mu}[|\langle u,v\rangle_{\mathcal{H}}|^{p}]^{1/p}}{\sqrt{p}}\leq K^{\prime}_{\mu}\mathbb{E}_{u\sim\mu}[|\langle u,v\rangle_{\mathcal{H}}|^{2}]^{1/2},\quad\forall\,v\in\mathcal{H}, \tag{2.8}\] for a constant \(K^{\prime}_{\mu}\) depending only on \(\mu\). The condition (2.8) is stronger than (2.6); assuming (2.8), it can in fact be shown that the empirical PCA projection error is of order \(\mathcal{O}\left(\mathcal{R}^{\mathrm{opt}}_{d}(\mu)\right)\) whenever \(N\gtrsim d\log(1/\delta)\), thereby achieving essentially optimal PCA convergence rates. However, in the present context, PCA will be applied for dimension reduction on both the input and output spaces under a non-linear mapping \(\Psi^{\dagger}\). Unfortunately, even if \(\mu\) satisfies (2.8), it is unclear whether this property is preserved under the push-forward by \(\Psi^{\dagger}\), i.e. whether a bound of the form (2.8) continues to hold for \(\Psi^{\dagger}_{\#}\mu\). In contrast, the bound (2.6) is robust under such a push-forward. Therefore, we content ourselves with the more pessimistic bound of Proposition 2.2, and leave potential improvements, possibly building on [32], as a challenge for future work. \(\Diamond\) ### PCA-Net architecture We next recall the PCA-Net architecture proposed in [3], which combines empirical PCA with a neural network mapping to approximate an underlying operator \(\Psi^{\dagger}:\mathcal{X}\to\mathcal{Y}\). In the following, let \(\mathcal{X}\) and \(\mathcal{Y}\) be separable Hilbert spaces and let \(\Psi^{\dagger}:\mathcal{X}\to\mathcal{Y}\) be a non-linear operator. The goal of the PCA-Net methodology is to approximate \(\Psi^{\dagger}\) from a finite number of input-/output-samples \(\{u_{k},\Psi^{\dagger}(u_{k})\}_{k=1}^{N}\). To this end, PCA-Net combines an encoding \(\mathcal{E}_{\mathcal{X}}:\mathcal{X}\to\mathbb{R}^{d_{\mathcal{X}}}\), a neural network \(\psi:\mathbb{R}^{d_{\mathcal{X}}}\to\mathbb{R}^{d_{\mathcal{Y}}}\) and a decoding \(\mathcal{D}_{\mathcal{Y}}:\mathbb{R}^{d_{\mathcal{Y}}}\to\mathcal{Y}\) (cp. Figure 1). Here, \(\mathcal{E}_{\mathcal{X}}\) and \(\mathcal{D}_{\mathcal{Y}}\) are chosen as an empirical PCA encoder and decoder, respectively. A precise definition is given below, see equations (2.11), (2.12). Given these ingredients, the resulting PCA-Net is defined as the mapping \[\Psi:\mathcal{X}\to\mathcal{Y},\quad\Psi(u):=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}(u). \tag{2.9}\] The encoder \(\mathcal{E}_{\mathcal{X}}\) and decoder \(\mathcal{D}_{\mathcal{Y}}\) perform a dimension reduction on the input and output spaces. The neural network \(\psi\) approximates a mapping on the resulting finite-dimensional latent spaces.
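To fix ideas, the following minimal NumPy sketch (our own illustration, not from the paper) implements the empirical PCA encoder/decoder pair of Section 2.1 and the composition (2.9) for functions discretized on a grid; all function and variable names are ours, and the network \(\psi\) is left abstract, so any map \(\mathbb{R}^{d_{\mathcal{X}}}\to\mathbb{R}^{d_{\mathcal{Y}}}\) can be plugged in.

```python
import numpy as np

def fit_empirical_pca(U, d):
    """Empirical PCA from samples U (array of shape (N, n), each row a
    discretized function u_k). Uses the uncentered empirical covariance
    Sigma_N = (1/N) sum_k u_k (x) u_k, as in Section 2.1, and returns
    encoder/decoder closures built from the top-d eigenvectors."""
    N = U.shape[0]
    Sigma_N = U.T @ U / N
    lam, Phi = np.linalg.eigh(Sigma_N)       # eigenvalues in ascending order
    Phi_d = Phi[:, ::-1][:, :d]              # top-d orthonormal eigenvectors
    encode = lambda u: u @ Phi_d             # (<u, phi_1>, ..., <u, phi_d>)
    decode = lambda eta: eta @ Phi_d.T       # sum_j eta_j phi_j
    return encode, decode, lam[::-1]

def pca_net(u, encode_X, psi, decode_Y):
    """PCA-Net (2.9): Psi(u) = D_Y(psi(E_X(u)))."""
    return decode_Y(psi(encode_X(u)))
```

On a uniform grid the \(L^2\) inner products reduce to Euclidean dot products up to a quadrature weight, which is omitted here for brevity; in practice \(\psi\) would be trained on the encoded pairs \((\mathcal{E}_{\mathcal{X}}(u_{k}),\mathcal{E}_{\mathcal{Y}}(\Psi^{\dagger}(u_{k})))\).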
Figure 1: Diagrammatic illustration of PCA-Net, based on a PCA encoder \(\mathcal{E}_{\mathcal{X}}\), a neural network \(\psi\), and a PCA decoder \(\mathcal{D}_{\mathcal{Y}}\). The intuition is that the underlying encoder/decoder pairs, and the neural network \(\psi\), can be chosen to satisfy the following approximate identities [3]: \[\mathcal{D}_{\mathcal{X}}\circ\mathcal{E}_{\mathcal{X}}\approx\mathrm{id}_{\mathcal{X}},\quad\mathcal{D}_{\mathcal{Y}}\circ\mathcal{E}_{\mathcal{Y}}\approx\mathrm{id}_{\mathcal{Y}},\quad\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\approx\Psi^{\dagger}.\] Here, \(\mathrm{id}_{\mathcal{X}}\) and \(\mathrm{id}_{\mathcal{Y}}\) denote the identity mappings on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. Our analysis of the PCA-Net methodology aims to quantify the accuracy of these approximations. **PCA encoding and decoding.** We will now specify the particular choice of \(\mathcal{E}_{\mathcal{X}}\) and \(\mathcal{D}_{\mathcal{Y}}\). In the following, we assume that the input samples \(\{u_{k}\}_{k=1}^{N}\) are i.i.d. samples from a probability measure \(\mu\) on \(\mathcal{X}\). Note that the output-samples \(\{\Psi^{\dagger}(u_{k})\}_{k=1}^{N}\) are then i.i.d. with respect to the corresponding push-forward measure \(\Psi^{\dagger}_{\#}\mu\) on \(\mathcal{Y}\). For a given choice of latent dimensions \(d_{\mathcal{X}}\) and \(d_{\mathcal{Y}}\), we apply empirical PCA to the samples \(\{u_{k}\}_{k=1}^{N}\subset\mathcal{X}\) and \(\{\Psi^{\dagger}(u_{k})\}_{k=1}^{N}\subset\mathcal{Y}\). In the following, we denote by \[\phi_{1}^{\mathcal{X}},\ldots,\phi_{d_{\mathcal{X}}}^{\mathcal{X}}\in\mathcal{X},\quad\phi_{1}^{\mathcal{Y}},\ldots,\phi_{d_{\mathcal{Y}}}^{\mathcal{Y}}\in\mathcal{Y}, \tag{2.10}\] the empirical PCA bases on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. We emphasize that the empirical PCA bases are themselves random variables, as they depend on the random input-/output-samples \(\{u_{k},\Psi^{\dagger}(u_{k})\}_{k=1}^{N}\). The first basis, \(\{\phi_{j}^{\mathcal{X}}\}_{j=1}^{d_{\mathcal{X}}}\), defines an encoder on \(\mathcal{X}\), \[\mathcal{E}_{\mathcal{X}}:\mathcal{X}\to\mathbb{R}^{d_{\mathcal{X}}},\quad\mathcal{E}_{\mathcal{X}}(u):=(\langle u,\phi_{1}^{\mathcal{X}}\rangle,\ldots,\langle u,\phi_{d_{\mathcal{X}}}^{\mathcal{X}}\rangle), \tag{2.11}\] with corresponding decoder, \(\mathcal{D}_{\mathcal{X}}:\mathbb{R}^{d_{\mathcal{X}}}\to\mathcal{X}\), \(\mathcal{D}_{\mathcal{X}}(\xi):=\sum_{j=1}^{d_{\mathcal{X}}}\xi_{j}\phi_{j}^{\mathcal{X}}\). Similarly, \(\phi_{1}^{\mathcal{Y}},\ldots,\phi_{d_{\mathcal{Y}}}^{\mathcal{Y}}\) defines an encoder on \(\mathcal{Y}\), \(\mathcal{E}_{\mathcal{Y}}:\mathcal{Y}\to\mathbb{R}^{d_{\mathcal{Y}}}\), \(\mathcal{E}_{\mathcal{Y}}(v):=(\langle v,\phi_{j}^{\mathcal{Y}}\rangle)_{j=1}^{d_{\mathcal{Y}}}\), with corresponding decoder, \[\mathcal{D}_{\mathcal{Y}}:\mathbb{R}^{d_{\mathcal{Y}}}\to\mathcal{Y},\quad\mathcal{D}_{\mathcal{Y}}(\eta):=\sum_{j=1}^{d_{\mathcal{Y}}}\eta_{j}\phi_{j}^{\mathcal{Y}}.
\tag{2.12}\] **Neural networks.** Given a depth \(L\in\mathbb{N}\), layer widths \(d_{k}\) (\(k=0,\ldots,L\)), and weights and biases \(A_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}},b_{k}\in\mathbb{R}^{d_{k}}\), \(k=1,\ldots,L\), a _(deep) neural network_ (DNN) \(\psi\) is a mapping \(\xi\mapsto\psi(\xi)\), defined as a composition of non-linear layers, \[\left\{\begin{aligned} \xi_{0}&:=\xi,\\ \xi_{k}&:=\sigma(A_{k}\xi_{k-1}+b_{k}),\quad\text{for }k=1,\ldots,L-1,\\ \psi(\xi)&:=A_{L}\xi_{L-1}+b_{L}.\end{aligned}\right. \tag{2.13}\] The non-linear layers are expressed in terms of an activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) which is applied componentwise. In the following, we will restrict attention to the ReLU activation function \(\sigma(\xi):=\max(\xi,0)\). Extension of our results to more general choices of \(\sigma\) is possible. Given a DNN \(\psi\), we define \(\text{size}(\psi):=\sum_{k=1}^{L}(\|A_{k}\|_{0}+\|b_{k}\|_{0})\) as the total number of non-zero weights and biases in the architecture, and we define \(\text{depth}(\psi):=L\) as the number of layers. With these definitions, \(\text{size}(\psi)\) and \(\text{depth}(\psi)\) provide a measure of the complexity of the DNN. **Neural network training.** Ideally, the neural network would be chosen as a minimizer of the expected loss \[\mathcal{L}(\psi):=\mathbb{E}_{(\xi,\eta)}\left[|\psi(\xi)-\eta|^{2}\right],\] where the expectation is over pairs \((\xi,\eta)\in\mathbb{R}^{d_{\mathcal{X}}}\times\mathbb{R}^{d_{\mathcal{Y}}}\) of encoded input-/output-pairs of the form \(\xi=\mathcal{E}_{\mathcal{X}}(u)\), \(\eta=\mathcal{E}_{\mathcal{Y}}(\Psi^{\dagger}(u))\), with \(u\sim\mu\). In practice, the neural network \(\psi\) in (2.9) is usually trained to minimize the following empirical loss, \[\widehat{\mathcal{L}}(\psi):=\frac{1}{N}\sum_{k=1}^{N}|\psi(\xi_{k})-\eta_{k}|^{2},\] where \(\xi_{k}:=\mathcal{E}_{\mathcal{X}}(u_{k})\in\mathbb{R}^{d_{\mathcal{X}}}\) and \(\eta_{k}:=\mathcal{E}_{\mathcal{Y}}(\Psi^{\dagger}(u_{k}))\in\mathbb{R}^{d_{\mathcal{Y}}}\) are the encoded input and output samples. _Remark 2.4_.: In the present work, we will not address the practical training of the neural network \(\psi\), analysis of which is a notoriously difficult problem. Instead, we will content ourselves with an "approximation theoretic" approach; it will be shown that a suitable \(\psi\) exists, but no a priori guarantee is given that this \(\psi\) can be found by the numerical optimization of the empirical loss \(\widehat{\mathcal{L}}\). The samples \(u_{1},\ldots,u_{N}\) will, however, enter our analysis of the empirical PCA encoder \(\mathcal{E}_{\mathcal{X}}\) and decoder \(\mathcal{D}_{\mathcal{Y}}\). \(\Diamond\) ## 3 Approximation theory This section develops approximation theory for PCA-Net, and is divided into four subsections. First, in Section 3.1, we discuss a new universality theorem for PCA-Net. Next, we point out two potential obstacles to efficient operator learning with PCA-Net in Section 3.2. As explained there, one such obstacle relates to the complexity of the output distribution on \(\mathcal{Y}\), which could in principle lead to an arbitrarily slow decay of the PCA-Net error with the PCA-dimension \(d_{\mathcal{Y}}\) (cp. Proposition 3.2). The second potential obstacle to efficient operator learning relates to the inherent complexity of the space of operators between infinite-dimensional spaces (cp. Theorem 3.3); this makes rigorous a notion of "curse of dimensionality" for operator learning intuited in earlier work [19, 23].
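Before turning to these bounds, we pause to make the objects just introduced concrete. The following minimal sketch (our own illustration, not part of the paper's analysis) instantiates the DNN format (2.13) together with the complexity measures \(\mathrm{size}(\psi)\) and \(\mathrm{depth}(\psi)\), in terms of which all bounds below are phrased; class and variable names are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ReLUNet:
    """DNN in the form (2.13): hidden layers xi_k = relu(A_k xi_{k-1} + b_k),
    followed by a final affine layer psi(xi) = A_L xi_{L-1} + b_L."""

    def __init__(self, weights, biases):
        self.A = list(weights)   # L matrices, A_k of shape (d_k, d_{k-1})
        self.b = list(biases)    # L vectors,  b_k of shape (d_k,)

    def __call__(self, xi):
        for A_k, b_k in zip(self.A[:-1], self.b[:-1]):
            xi = relu(A_k @ xi + b_k)
        return self.A[-1] @ xi + self.b[-1]

    def size(self):
        """size(psi): total number of non-zero weights and biases."""
        return sum(np.count_nonzero(A) + np.count_nonzero(b)
                   for A, b in zip(self.A, self.b))

    def depth(self):
        """depth(psi): the number of layers L."""
        return len(self.A)

# Tiny usage example: a 2-layer network R^8 -> R^4.
rng = np.random.default_rng(0)
psi = ReLUNet([rng.normal(size=(16, 8)), rng.normal(size=(4, 16))],
              [rng.normal(size=16), rng.normal(size=4)])
print(psi(np.ones(8)).shape, psi.size(), psi.depth())   # (4,) 212 2
```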
The lower bounds of Section 3.2 are complemented by upper bounds in the subsequent sections; in Section 3.3, we show that a suitable smoothness condition rules out the first obstacle, ensuring an algebraic decay of the PCA encoding error on \(\mathcal{Y}\). Finally, in Section 3.4, we demonstrate that PCA-Net can leverage additional structure for two operators of interest, allowing it to overcome the general curse of dimensionality when approximating the solution operator of the Darcy flow and Navier-Stokes equations. ### Universal approximation We first state the following universal approximation theorem in high probability: **Theorem 3.1** (Universal approximation).: Let \(\mathcal{X},\mathcal{Y}\) be separable Hilbert spaces and let \(\mu\in\mathcal{P}(\mathcal{X})\) be a probability measure on \(\mathcal{X}\). Let \(\Psi^{\dagger}:\mathcal{X}\to\mathcal{Y}\) be a \(\mu\)-measurable mapping. Assume the following moment conditions, \[\mathbb{E}_{u\sim\mu}[\|u\|_{\mathcal{X}}^{2}],\ \ \mathbb{E}_{u\sim\mu}[\|\Psi^{\dagger}(u)\|_{\mathcal{Y}}^{2}]<\infty.\] Then for any \(\delta,\epsilon>0\), there are dimensions \(d_{\mathcal{X}}=d_{\mathcal{X}}(\epsilon,\delta)\), \(d_{\mathcal{Y}}=d_{\mathcal{Y}}(\epsilon,\delta)\), a requisite amount of data \(N=N(d_{\mathcal{X}},d_{\mathcal{Y}},\mu,\Psi^{\dagger})\), and a neural network \(\psi\), such that the PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\) satisfies \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}(u)-\Psi(u;\{u_{k}\})\|_{\mathcal{Y}}^{2}\right]\leq\epsilon,\] with probability at least \(1-\delta\) in the input data \(u_{1},\ldots,u_{N}\sim\mu\). \(\Diamond\) We have written \(\Psi(u)=\Psi(u;\{u_{k}\})\) to emphasize the dependency of the PCA-Net encoder \(\mathcal{E}_{\mathcal{X}}\) and decoder \(\mathcal{D}_{\mathcal{Y}}\) on the given data \(u_{1},\ldots,u_{N}\). The detailed proof, included in Appendix B, combines well-known universal approximation results for neural networks with the high-probability bound on the PCA projection error from Proposition 2.1 in Section 2.2 above; more precisely, it is shown in Lemma B.2 that the PCA-Net error can be decomposed into an error due to the encoding on \(\mathcal{X}\), the decoding error on \(\mathcal{Y}\) and a neural network approximation error. The encoding and decoding errors are expressed in terms of the PCA projection error, which can be bounded by invoking Proposition 2.1. The neural network approximation error can then be made arbitrarily small by a suitable choice of the neural network \(\psi\). Theorem 3.1 shows that PCA-Net is able to approximate almost arbitrary operators to any desired accuracy, provided a sufficient number of data points are available for empirical PCA, and provided that the underlying neural network is sufficiently large. A previous universal approximation result in [3, Theorem 3.1] was stated only for Lipschitz continuous \(\Psi^{\dagger}\) and under an assumption of finite fourth moments. In contrast, Theorem 3.1 requires no regularity condition on the underlying operator \(\Psi^{\dagger}\), and shows that a bound on the second moments of the input measure \(\mu\) and the push-forward measure \(\Psi^{\dagger}_{\#}\mu\) suffices for universal approximation. We have formulated Theorem 3.1 in high probability, whereas [3, Theorem 3.1] is derived in expectation.
This last difference is mostly for consistency with the high-probability results in later sections; indeed, Theorem 3.1 is in fact derived from a corresponding result in expectation (cp. Proposition B.1). The main drawback of universal approximation results is that they are purely qualitative, and do not provide any information about the required size of the data or the neural network; hence, universal approximation cannot provide information on the efficiency of operator learning with PCA-Net. Deriving more quantitative bounds is particularly relevant in view of the two potential obstacles to efficient operator learning, elaborated upon in the next section. ### Obstacles to effective operator learning Theorem 3.1 does not provide any quantitative information on the required complexity to achieve a given accuracy. For the practical success of PCA-Net in operator learning, it is crucial that the PCA-Net approximation is not only possible in principle, but also efficient in practice; we interpret "efficiency" as the statement that the PCA dimensions \(d_{\mathcal{X}}\), \(d_{\mathcal{Y}}\), the requisite amount of data \(N\) and the size of the neural network \(\psi\) required to achieve a desired accuracy \(\epsilon>0\) should grow at most at an algebraic rate \(\epsilon^{-\gamma}\), with quantifiable exponent \(\gamma>0\). As explained in this section, there are at least two potential obstacles to the efficiency of PCA-Net. **Complexity of the output distribution.** The first potential reason for the inefficiency of PCA-Net is a consequence of the following lower bound on the approximation error. **Proposition 3.2**.: Let \(\mu\in\mathcal{P}(\mathcal{X})\) be a probability measure, and let \(\Psi^{\dagger}\in L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\) be an operator. Let \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq 0\) be the PCA eigenvalues of the push-forward measure \(\Psi^{\dagger}_{\#}\mu\) on \(\mathcal{Y}\). Then we have the following lower bound, \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}(u)-\Psi(u)\|_{\mathcal{Y}}^{2}\right]\geq\sum_{j>d_{\mathcal{Y}}}\lambda_{j}, \tag{3.1}\] for any PCA-Net \(\Psi\) with PCA dimension \(d_{\mathcal{Y}}\) on \(\mathcal{Y}\). \(\Diamond\) The proof of Proposition 3.2 is an almost verbatim repetition of the argument in [23, Theorem 3.6]; we do not repeat it here. As a consequence of (3.1), the approximation error that can be achieved with a PCA dimension \(d_{\mathcal{Y}}\) is lower bounded by the decay of the PCA eigenvalues of the push-forward measure \(\Psi^{\dagger}_{\#}\mu\), i.e. by the optimal PCA projection error \(\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{Y}}}(\Psi^{\dagger}_{\#}\mu)=\sum_{j>d_{\mathcal{Y}}}\lambda_{j}\). In particular, if this decay is very slow, e.g. \(\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{Y}}}(\Psi^{\dagger}_{\#}\mu)\gtrsim\log(d_{\mathcal{Y}})^{-1}\), then an _exponentially_ large PCA dimension \(d_{\mathcal{Y}}(\epsilon)\sim\exp(\epsilon^{-1})\) is required, which entails that an exponential number of samples \(N\gtrsim\exp(\epsilon^{-1})\) and an exponential neural network size \(\mathrm{size}(\psi)\gtrsim\exp(\epsilon^{-1})\) are required as well. The first potential obstacle thus relates to the _complexity of the output space_, encoded in the PCA eigenvalue decay of the measure \(\Psi^{\dagger}_{\#}\mu\).
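The following small numerical illustration (our own, with purely synthetic eigenvalue profiles) makes this contrast concrete: it computes the smallest latent dimension \(d_{\mathcal{Y}}\) whose tail sum \(\sum_{j>d_{\mathcal{Y}}}\lambda_{j}\) falls below a tolerance \(\epsilon\), for an algebraic decay \(\lambda_{j}=j^{-2}\) (tail \(\sim 1/d\)) versus a near-logarithmic decay \(\lambda_{j}=1/(j\log^{2}(j+1))\) (tail \(\sim 1/\log d\)).

```python
import numpy as np

def min_latent_dim(eigs, eps, j_max=10**6):
    """Smallest d with sum_{j > d} lambda_j <= eps, truncating at j_max."""
    lam = eigs(np.arange(1, j_max + 1, dtype=float))
    tails = lam[::-1].cumsum()[::-1]      # tails[d] = sum_{j > d} lambda_j
    hits = np.nonzero(tails <= eps)[0]
    return int(hits[0]) if hits.size else None   # None: unreachable here

for eps in (1e-1, 1e-2):
    d_algebraic = min_latent_dim(lambda j: j**-2.0, eps)
    d_slow = min_latent_dim(lambda j: 1.0 / (j * np.log(j + 1)**2), eps)
    print(f"eps={eps:g}: algebraic decay d={d_algebraic}, slow decay d={d_slow}")
```

For the algebraic profile the required dimension grows like \(1/\epsilon\), whereas for the slow profile it explodes like \(\exp(1/\epsilon)\) and already exceeds the truncation range \(j_{\max}=10^{6}\) at \(\epsilon=10^{-2}\), mirroring the discussion above.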
We note in passing that the problem of a slow eigenvalue decay can sometimes be ameliorated by replacing the linear decoder \(\mathcal{D}_{\mathcal{Y}}\) by a non-linear mapping, leading to improved results both theoretically and empirically [45, 24]. **Curse of dimensionality.** The last section shows that the complexity, or "size", of the output space \(\mathcal{Y}\) can be one obstacle to operator learning with PCA-Net. A second potential obstacle to efficient operator learning is that the space of operators \(\Psi^{\dagger}:\mathcal{X}\rightarrow\mathcal{Y}\) itself is very large. It is well-known that the task of approximating a high-dimensional function \(f:\mathbb{R}^{d}\to\mathbb{R}\) by ordinary methods, such as polynomial interpolation, suffers from a curse of dimensionality, where the number of function degrees of freedom needed to achieve a desired accuracy scales exponentially in \(d\). Indeed, optimal error bounds for interpolation are typically of the form, \[\sup_{\xi\in D}|f(\xi)-p(\xi)|\lesssim\|f\|_{C^{k}}N^{-k/d}, \tag{3.2}\] where \(p\) denotes the (e.g. polynomial) interpolant and \(N\) represents the number of degrees of freedom of the interpolation space. Similar results have also been established for neural networks, see e.g. [48, 1], including upper and lower bounds on the required number of weights. In the context of operator learning, the underlying input and output spaces are infinite-dimensional. Given the unfavorable scaling of (3.2) with the dimension \(d\), it is thus far from obvious that operator learning should be practicable at all. In analogy with the above, we next introduce the notion of an **algebraic convergence rate**: Given a set \(\mathcal{C}\subset L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\) of operators, we say that \(\gamma_{\mathcal{C}}>0\) is an algebraic convergence rate for \(\mathcal{C}\), if for any \(\Psi^{\dagger}\in\mathcal{C}\), there exist a constant \(C(\Psi^{\dagger})>0\) and a PCA-Net mapping \(\Psi:=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}:\mathcal{X}\to\mathcal{Y}\), such that \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}(u)-\Psi(u)\|_{\mathcal{Y}}^{2}\right]\leq C(\Psi^{\dagger})\,\text{size}(\psi)^{-\gamma_{\mathcal{C}}}. \tag{3.3}\] The central point is that the convergence rate \(\gamma_{\mathcal{C}}\) in (3.3) should be uniform over \(\mathcal{C}\), and that only the multiplicative constant depends on \(\Psi^{\dagger}\). This is a natural analogue of (3.2). In (3.3), the linear encoder \(\mathcal{E}_{\mathcal{X}}:\mathcal{X}\to\mathbb{R}^{d_{\mathcal{X}}}\) and decoder \(\mathcal{D}_{\mathcal{Y}}:\mathbb{R}^{d_{\mathcal{Y}}}\to\mathcal{Y}\) are allowed to be arbitrary; in particular, we do not restrict the dimensions \(d_{\mathcal{X}}\), \(d_{\mathcal{Y}}\). Even though the additional dependence on \(d_{\mathcal{X}}\) and \(d_{\mathcal{Y}}\) is also of practical relevance, we postulate that any operator \(\Psi^{\dagger}\in L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\), or indeed class of operators \(\mathcal{C}\subset L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\), which is "efficiently" approximated by PCA-Net, must (at the very least) possess a finite algebraic convergence rate \(\gamma_{\mathcal{C}}\) in the above sense (3.3). The next result shows that for commonly considered classes of operators \(\Psi^{\dagger}:\mathcal{X}\to\mathcal{Y}\), such as the set of all Lipschitz continuous or \(k\)-times Frechet differentiable operators, efficient operator learning by PCA-Net is in fact impossible.
**Theorem 3.3** (Curse of dimensionality).: Let \(\mathcal{X},\mathcal{Y}\) be separable Hilbert spaces, with \(\dim(\mathcal{X})=\infty\), \(\dim(\mathcal{Y})\geq 1\). Let \(\mu\in\mathcal{P}(\mathcal{X})\) be a non-degenerate Gaussian measure. Fix \(k\in\mathbb{N}\) and let \(\mathcal{C}^{k}\) denote the set of all \(k\)-times Frechet differentiable operators \(\Psi^{\dagger}:\mathcal{X}\to\mathcal{Y}\) whose \(k\)-th total derivative is uniformly bounded on \(\mathcal{X}\). For _any_ \(\gamma>0\), there exists \(\Psi^{\dagger}_{\gamma}\in\mathcal{C}^{k}\) and a constant \(c_{\gamma}>0\), such that \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}_{\gamma}(u)-\Psi(u)\|_{\mathcal{Y}}^{2}\right]\geq c_{\gamma}\text{size}(\psi)^{-\gamma},\] for any PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\). In particular, there _cannot_ exist a finite algebraic convergence rate \(\gamma_{\mathcal{C}}\) for \(\mathcal{C}^{k}\), as in (3.3). \(\Diamond\) Sketch of proof.: The proof of Theorem 3.3 is based on the following lower bound for ReLU neural network approximation of functions in the unit cube of the Sobolev space \(W^{k,\infty}([0,1]^{d})\), \[F_{d,k}:=\big{\{}f\in W^{k,\infty}([0,1]^{d})\,\big{|}\,\|f\|_{W^{k,\infty}}\leq 1\big{\}},\] which is derived in the present work and may be of independent interest: **Proposition 3.4**.: Fix \(k,d\in\mathbb{N}\). There exists \(f\in F_{d,k}\), a constant \(c_{k,d}>0\) depending only on \(k\) and \(d\), and an absolute constant \(\lambda>0\), independent of both \(d\) and \(k\), such that for any neural network \(\psi\), we have the lower bound \[\|f-\psi\|_{L^{2}([0,1]^{d})}\geq c_{k,d}\,\mathrm{size}(\psi)^{-\lambda k/d}. \tag{3.4}\] \(\Diamond\) This finite-dimensional lower bound (3.4) builds on the recent work [1]. Given the lower bound of Proposition 3.4, the proof of Theorem 3.3 is then based on the intuition that \(d\) can be chosen arbitrarily large if the underlying input space \(\mathcal{X}\) is infinite-dimensional, and hence the exponent in (3.4) can be made arbitrarily small; additional work is needed to make this intuition rigorous. Detailed proofs of Theorem 3.3 and Proposition 3.4 are given in Appendix C. In particular, Theorem 3.3 shows that it is _impossible_ to derive algebraic error and complexity estimates for PCA-Net when, e.g., assuming only Lipschitz regularity of \(\Psi^{\dagger}\). It should be emphasized that this result holds even when the relevant space of input and output functions can be efficiently approximated; indeed, no restriction on the decay of the PCA eigenvalues is assumed in Theorem 3.3, allowing them to decay at an arbitrarily fast rate on both \(\mathcal{X}\) and \(\mathcal{Y}\). In fact, Theorem 3.3 even allows for \(\mathcal{Y}=\mathbb{R}\), in which case reconstruction on \(\mathcal{Y}\) is _trivial_ (in contrast, the assumptions do imply that infinitely many PCA eigenvalues on \(\mathcal{X}\) are non-zero). As alluded to above, the reason for the obstacle to efficient operator learning expressed by Theorem 3.3 is the intrinsic complexity of the space of all \(\mathcal{C}^{k}\)-regular operators (or functionals) defined on an infinite-dimensional input space \(\mathcal{X}\).
_Remark 3.5_.: Under slightly stronger assumptions than Theorem 3.3 (in particular, assuming an algebraic decay of the PCA eigenvalues \(\lambda_{j}\)), one can likely show that there in fact exists \(\Psi^{\dagger}\in\mathcal{C}^{k}\) and constants \(c,\gamma>0\), depending only on \(\mu\) and \(k\), such that \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}(u)-\Psi(u)\|_{\mathcal{Y}}^{2}\right]\geq c\log\left(\mathrm{size}(\psi)\right)^{-\gamma},\] for any PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\). Thus, achieving accuracy \(\epsilon\) requires an _exponential complexity_ of the underlying neural network, \(\mathrm{size}(\psi)\gtrsim\exp(c\epsilon^{-1/\gamma})\). Similar exponential lower bounds will be the subject of forthcoming work [25]. Given the negative result of Theorem 3.3, we posit that a central challenge in operator learning is to _identify_ the relevant class of operators \(\mathcal{C}\subset L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\), which allow for efficient approximation by a given operator learning framework, and which possess a prescribed (finite) algebraic convergence rate \(\gamma_{\mathcal{C}}\) in (3.3). ### Quantitative encoding error bounds As pointed out above, one potential obstacle to the efficacy of PCA-Nets is a slow decay of the PCA eigenvalues on the output function space. In this section, we provide a general estimate on the PCA projection error, \(\mathcal{R}^{\mathrm{opt}}_{d}(\nu)=\sum_{\ell>d}\lambda_{\ell}\), based on smoothness properties of the underlying functions. Given a domain \(D\subset\mathbb{R}^{n}\), we recall that the Sobolev space \(H^{s}(D;\mathbb{R}^{n})\) is defined as the space of all functions \(u:D\to\mathbb{R}^{n}\) possessing square-integrable weak derivatives up to order \(s\). We note that \(H^{s}(D;\mathbb{R}^{n})\) is a Hilbert space. We then have: **Proposition 3.6**.: Let \(n,n^{\prime}\) be integers. Let \(\mathcal{Y}=H^{s}(D;\mathbb{R}^{n^{\prime}})\), \(s\geq 0\), be a Sobolev space defined on either a Lipschitz domain \(D\subset\mathbb{R}^{n}\) or the periodic torus \(D=\mathbb{T}^{n}\). Assume that \(\nu\in\mathcal{P}(\mathcal{Y})\) is a probability measure and that there exists \(\zeta>0\), such that \(\mathbb{E}_{u\sim\nu}\left[\left\|u\right\|_{H^{s+\zeta}}^{2}\right]<\infty\). Then, there exists a constant \(C=C(n,n^{\prime})>0\), such that \[\mathcal{R}^{\mathrm{opt}}_{d}(\nu)\leq Cd^{-2\zeta/n}\mathbb{E}_{u\sim\nu}\left[\left\|u\right\|_{H^{s+\zeta}}^{2}\right],\quad\forall\,d\in\mathbb{N}.\] We recall that \(\mathcal{R}^{\mathrm{opt}}_{d}(\nu)\) is the minimal projection error over all projections \(P\) of rank \(d\) (cp. (2.3)). \(\Diamond\) Thus, smoothness of the output functions ensures an algebraic decay of the PCA eigenvalues and thereby rules out the first potential obstacle to efficient operator learning by PCA-Net, pointed out after Proposition 3.2. A proof of Proposition 3.6 is provided in Appendix D. Proposition 3.6 extends a result in [23, Prop. 3.14], which was restricted to the periodic case. The main novel ingredient in our proof here, which allows the generalization to arbitrary Lipschitz domains \(D\), is a general Sobolev extension result by Stein [19, e.g. Appendix B]. ### Overcoming the curse of dimensionality The last section provides a general criterion to tame the lower bound of Proposition 3.2, which could limit the efficiency of PCA-Net when the relevant distribution of outputs in \(\mathcal{Y}\) is very complex (cf. Section 3.2).
As proved in Theorem 3.3, a second obstacle to efficient operator learning is the _curse of dimensionality_. It would be very desirable to find a useful mathematical characterization of the entire class of operators \(\mathcal{C}\subset L^{2}_{\mu}(\mathcal{X};\mathcal{Y})\) for which an algebraic convergence rate, as in (3.3), can be established. At present, this appears to be a very distant goal. Instead, in this section, we aim to develop additional intuition about the basic mechanisms that PCA-Net can exploit to achieve algebraic convergence rates for specific examples. To this end, we consider two prototypical operators arising in the context of PDEs; the Darcy flow and Navier-Stokes equations. #### 3.4.1 Darcy flow Let \(D\subset\mathbb{R}^{n}\) be a bounded domain. We consider the operator \(\Psi^{\dagger}:a\mapsto w\), mapping the coefficient field \(a\) to the solution \(w\) of the following elliptic problem: \[\begin{cases}-\nabla\cdot(a(x)\nabla w(x))=f(x),&(x\in D),\\ \qquad\qquad\qquad w(x)=0,&(x\in\partial D).\end{cases} \tag{3.5}\] Here, \(D\subset\mathbb{R}^{n}\) is a given smooth domain, the right-hand side \(f\) is fixed, and we assume Dirichlet boundary conditions. Let \(H^{1}_{0}(D)\) be the Sobolev space consisting of weakly differentiable functions \(w:D\to\mathbb{R}\) which vanish on the boundary \(\partial D\), and whose gradient is square-integrable. It is well-known, e.g. [13, Chapt. 6], that if the coefficient field \(a\in L^{\infty}(D)\) satisfies the two-sided (coercivity and upper) bounds \[0<\lambda\leq a(x)\leq\Lambda<\infty,\quad\forall\,x\in D, \tag{3.6}\] for constants \(\lambda,\Lambda\), and if \(f\in H^{-1}(D)\) belongs to the dual space of \(H^{1}_{0}(D)\), then there exists a unique solution \(w\in H^{1}_{0}(D)\) to (3.5). Furthermore, there exists a constant \(C=C(\lambda,\Lambda,D)>0\), such that \[\|w\|_{H^{1}_{0}}\leq C\|f\|_{H^{-1}}. \tag{3.7}\] It is thus natural to consider the output space \(\mathcal{Y}=H^{1}_{0}(D)\). We will follow the celebrated work by Cohen, DeVore and Schwab [8], and subsequent extensions in [9, 44, 34, 35], and consider the following setting: We assume that the underlying measure \(\mu\) can be written as the law of random coefficient fields \(a(x)=a(x;\boldsymbol{z})\), of the parametrized form \[a(x;\boldsymbol{z})=\overline{a}(x)+\sum_{\ell=1}^{\infty}\gamma_{\ell}z_{\ell}\rho_{\ell}(x), \tag{3.8}\] where \(\boldsymbol{z}=(z_{1},z_{2},\dots)\) is a sequence of random variables (not necessarily independent), such that \(|z_{\ell}|\leq 1\). In our analysis, we will assume that the functions \(\rho_{1},\rho_{2},\dots\in\mathcal{X}\) are orthonormal in a Hilbert space \(\mathcal{X}\), where the embedding \(\mathcal{X}\hookrightarrow L^{\infty}\) is continuous. Note that the series (3.8) converges if the coefficients \(\gamma_{\ell}\) decay at an algebraic rate, \[0\leq\gamma_{\ell}\leq M\ell^{-1-\alpha}, \tag{3.9}\] with constants \(M,\alpha>0\). We will assume this bound (3.9). To ensure coercivity (cp. Remark 3.7 below) we assume that there exists \(\kappa>0\), such that \[\sum_{\ell=1}^{\infty}\gamma_{\ell}\|\rho_{\ell}\|_{L^{\infty}}\leq\frac{\kappa}{1+\kappa}\overline{a}_{\min}, \tag{3.10}\] with \(\overline{a}_{\min}:=\operatorname{ess}\inf_{x\in D}\overline{a}(x)>0\).
_Remark 3.7_.: The upper bound (3.10) ensures uniform coercivity, since \[a(x;\boldsymbol{z})\geq\overline{a}_{\min}-\sum_{\ell=1}^{\infty}\gamma_{\ell}\|\rho_{\ell}\|_{L^{\infty}}\geq\frac{1}{1+\kappa}\overline{a}_{\min}=:\lambda>0,\] for any \(\boldsymbol{z}\in U:=[-1,1]^{\mathbb{N}}\). On the other hand, the assumed embedding \(\mathcal{X}\hookrightarrow L^{\infty}(D)\) implies that there exists a constant \(C>0\), such that \(\|\cdot\|_{L^{\infty}}\leq C\|\cdot\|_{\mathcal{X}}\), and hence (3.9) ensures a uniform upper bound, \[a(x;\boldsymbol{z})\leq\|\overline{a}\|_{L^{\infty}}+CM\sum_{\ell=1}^{\infty}\ell^{-1-\alpha}=:\Lambda<\infty.\] In particular, all \(a\) in the support of the probability measure \(\mu\) satisfy the two-sided bounds (3.6), and hence the elliptic PDE (3.5) is well-posed. \(\Diamond\) _Remark 3.8_.: We do not assume any _(explicit) knowledge_ of the functions \(\overline{a}\), \(\rho_{\ell}\), the parameters \(\gamma_{\ell}\), \(\alpha\), \(\lambda\), \(\Lambda\), or indeed any information on the law of the joint random variable \(\boldsymbol{z}=(z_{1},z_{2},\dots)\in[-1,1]^{\mathbb{N}}\). In particular, the random variables \(z_{1},z_{2},\dots\) need not be independent. For the following arguments, it is sufficient that an expansion of the form (3.8) exists. \(\Diamond\) Given this setting, we prove the following theorem: **Theorem 3.9**.: Assume the setting and the prevailing assumptions of this section, and let \(\mu\in\mathcal{P}(\mathcal{X})\) be the law of \(a(\,\cdot\,;\boldsymbol{z})\) given by (3.8). For any \(\delta,\eta>0\) and \(\epsilon>0\), there exists a PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\) satisfying the error bound, \[\mathbb{E}_{a\sim\mu}\left[\|\Psi^{\dagger}(a)-\Psi(a)\|_{H^{1}_{0}}^{2}\right]\leq C\epsilon,\] with probability at least \(1-\delta\), and with constant \(C=C(\mu,\eta)>0\) depending only on \(\mu\) and \(\eta\). With the same implied constant, the required PCA dimensions are at most \(d_{\mathcal{X}}=d_{\mathcal{Y}}=d\sim\epsilon^{-\frac{1}{2\alpha}-\eta}\) and the required number of samples for PCA is at most \(N\sim d^{1+4\alpha}\log(1/\delta)\). Furthermore, the following complexity bounds hold for the ReLU network \(\psi\), \[\text{size}(\psi)\leq C\epsilon^{-\frac{1}{\alpha}-\eta},\quad\text{depth}(\psi)\leq C\log(\epsilon^{-1})^{2},\] independently of the data \(a_{1},\dots,a_{N}\sim\mu\). \(\Diamond\) We provide a sketch of the proof of Theorem 3.9 at the end of this subsection. _Remark 3.10_.: Theorem 3.9 shows that approximation of the Darcy flow operator \(\Psi^{\dagger}\) is possible with algebraic bounds on the PCA dimensions \(d_{\mathcal{X}}\), \(d_{\mathcal{Y}}\), the number of required PCA samples \(N\), and the number of neural network parameters, \(\text{size}(\psi)\). In fact, even under a mild decay rate \(\alpha>1\), the required size of \(\psi\) is at most _linear_ in the inverse \(\epsilon^{-1}\) of the desired accuracy. \(\Diamond\) We conjecture that the scaling of \(N\sim d^{1+4\alpha}\) in Theorem 3.9 is highly pessimistic; indeed, under a potential improvement of the empirical PCA estimate of Proposition 2.2, as discussed in Remark 2.3, the much more favorable scaling \(N\sim d\) appears natural.
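For concreteness, the following short Python sketch draws random coefficient fields of the form (3.8) in a one-dimensional toy setting; the cosine basis, mean field and constants are our own placeholder choices (not from the paper), selected so that the decay bound (3.9) and the coercivity condition (3.10) hold.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, 256)        # grid on D = [0, 1]
a_bar = 2.0 * np.ones_like(x)         # mean field, a_bar_min = 2
alpha, M, L = 2.0, 0.5, 64            # decay rate, constant, truncation level
gamma = M * np.arange(1, L + 1) ** (-1.0 - alpha)          # saturates (3.9)

def rho(ell):
    """L^2-orthonormal cosine basis on [0, 1]; sup-norm sqrt(2)."""
    return np.sqrt(2.0) * np.cos(np.pi * ell * x)

# Coercivity (3.10): sum_l gamma_l ||rho_l||_inf must stay below a_bar_min.
assert gamma.sum() * np.sqrt(2.0) < a_bar.min()

def sample_a():
    """One draw of a(x; z) = a_bar(x) + sum_l gamma_l z_l rho_l(x)."""
    z = rng.uniform(-1.0, 1.0, size=L)           # |z_l| <= 1; i.i.d. here
    field = a_bar.copy()
    for ell in range(1, L + 1):
        field += gamma[ell - 1] * z[ell - 1] * rho(ell)
    return field

a = sample_a()
assert a.min() > 0.0                  # uniform ellipticity, cp. Remark 3.7
```

Feeding many such draws (and the corresponding PDE solutions) into the empirical-PCA sketch from Section 2 would reproduce the data-generation setup assumed by Theorem 3.9; note that the i.i.d. uniform choice for \(z_{\ell}\) is only one admissible law, since Remark 3.8 requires no independence.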
**Sketch of proof of Theorem 3.9.** To prove Theorem 3.9, we first define a parametric mapping \(\mathcal{F}:U\to\mathcal{Y}\), where \(U=[-1,1]^{\mathbb{N}}\), \(\mathcal{Y}=H_{0}^{1}(D)\), by \[\mathcal{F}(\boldsymbol{z}):=\Psi^{\dagger}\left(\overline{a}+\sum_{\ell=1}^{\infty}\gamma_{\ell}z_{\ell}\rho_{\ell}\right). \tag{3.11}\] This mapping has been studied in a series of papers [8, 9, 44, 34, 35], and is known to allow for a convergent Taylor series expansion (in the variables \(\boldsymbol{z}\in U\)). To state the next lemma, which implies a suitable convergence of the Taylor series, we recall that \(\boldsymbol{\nu}\) is called a multi-index in this infinite-dimensional context, if \(\boldsymbol{\nu}=(\nu_{1},\nu_{2},\dots)\) is a sequence of non-negative integers, such that \(\nu_{\ell}=0\) for almost all \(\ell\). For \(\boldsymbol{z}\in U\) and a multi-index \(\boldsymbol{\nu}\), we define the monomial \(\boldsymbol{z}^{\boldsymbol{\nu}}=\prod_{\nu_{\ell}\neq 0}z_{\ell}^{\nu_{\ell}}\). The following lemma follows immediately from [9, Thm. 1.3]. **Lemma 3.11**.: _Assume the setting and prevailing assumptions of this section. Then there exists a set of coefficients \(t_{\boldsymbol{\nu}}\in\mathcal{Y}\) (the Taylor coefficients), indexed by multi-indices \(\boldsymbol{\nu}\), such that for any \(m\in\mathbb{N}\), there is a set \(\Lambda_{m}\) of multi-indices \(\boldsymbol{\nu}\), with cardinality \(|\Lambda_{m}|=m\), such that_ \[\sup_{\boldsymbol{z}\in U}\|\mathcal{F}(\boldsymbol{z})-\sum_{\boldsymbol{\nu}\in\Lambda_{m}}t_{\boldsymbol{\nu}}\boldsymbol{z}^{\boldsymbol{\nu}}\|_{\mathcal{Y}}\leq Cm^{-\alpha+\eta}, \tag{3.12}\] _for any small constant \(\eta>0\). Here \(C=C(\mathcal{F},\mu,\eta)>0\) is a constant depending on \(\eta\), but is independent of \(m\). \(\lozenge\)_ For completeness, we provide the details in Appendix E.1. As a consequence of this lemma, we can estimate the optimal PCA encoding errors not only on \(\mathcal{X}\) but also on \(\mathcal{Y}\). Indeed, we will derive the following result. **Proposition 3.12**.: Assume the setting and prevailing assumptions of this section. For any \(\eta>0\), there exists a constant \(C=C(\mathcal{F},\mu,\eta)>0\), such that \[\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{X}}}(\mu)\leq Cd_{\mathcal{X}}^{-2\alpha-1},\quad\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{Y}}}(\Psi^{\dagger}_{\#}\mu)\leq Cd_{\mathcal{Y}}^{-2\alpha+\eta}.\] \(\lozenge\) The \(\mathcal{X}\) estimate of Proposition 3.12 is a straightforward consequence of the assumed expansion (3.8). The \(\mathcal{Y}\) estimate follows from the observation that (3.12) provides a bound on the PCA projection error by comparing it to the projection onto \(\mathrm{span}\{t_{\boldsymbol{\nu}}\,|\,\boldsymbol{\nu}\in\Lambda_{d_{\mathcal{Y}}}\}\subset\mathcal{Y}\). Details of the required argument are given in Appendix E.2. Since the input measure \(\mu\) and its push-forward \(\Psi^{\dagger}_{\#}\mu\) are concentrated on a bounded set of functions in the respective norms on \(\mathcal{X}\) and \(\mathcal{Y}\), Proposition 2.2 immediately implies that empirical PCA with \(d_{\mathcal{X}}=d_{\mathcal{Y}}=d\) and with a sufficient number of \(N\gtrsim d^{1+4\alpha}\log(2/\delta)\) samples achieves, up to a constant and with high probability, the same asymptotic error as the optimal PCA projection on \(\mathcal{X}\) and \(\mathcal{Y}\).
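To fill in the comparison step behind the \(\mathcal{Y}\)-estimate (a sketch of the argument detailed in Appendix E.2, as we read it): let \(V:=\mathrm{span}\{t_{\boldsymbol{\nu}}\,|\,\boldsymbol{\nu}\in\Lambda_{d_{\mathcal{Y}}}\}\), a subspace of dimension at most \(d_{\mathcal{Y}}\), and let \(P_{V}\) denote the orthogonal projection onto \(V\). Optimality of \(\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{Y}}}\) over rank-\(d_{\mathcal{Y}}\) projections, together with the fact that \(P_{V}\mathcal{F}(\boldsymbol{z})\) is the best approximation of \(\mathcal{F}(\boldsymbol{z})\) within \(V\) while \(\sum_{\boldsymbol{\nu}\in\Lambda_{d_{\mathcal{Y}}}}t_{\boldsymbol{\nu}}\boldsymbol{z}^{\boldsymbol{\nu}}\in V\), gives \[\mathcal{R}^{\mathrm{opt}}_{d_{\mathcal{Y}}}(\Psi^{\dagger}_{\#}\mu)\leq\mathbb{E}_{\boldsymbol{z}}\left[\|\mathcal{F}(\boldsymbol{z})-P_{V}\mathcal{F}(\boldsymbol{z})\|_{\mathcal{Y}}^{2}\right]\leq\sup_{\boldsymbol{z}\in U}\Big\|\mathcal{F}(\boldsymbol{z})-\sum_{\boldsymbol{\nu}\in\Lambda_{d_{\mathcal{Y}}}}t_{\boldsymbol{\nu}}\boldsymbol{z}^{\boldsymbol{\nu}}\Big\|_{\mathcal{Y}}^{2}\leq C^{2}d_{\mathcal{Y}}^{-2\alpha+2\eta},\] by (3.12) with \(m=d_{\mathcal{Y}}\); renaming \(2\eta\) as \(\eta\) (both are arbitrary small constants) yields the bound stated in Proposition 3.12.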
The main remaining challenge is then to construct a neural network \(\psi\), such that the composition \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\) approximates \(\Psi^{\dagger}\) to within a prescribed tolerance. The construction of such \(\psi\) relies on a neural network approximation result for the parametric mapping \(\mathcal{F}(\boldsymbol{z})\). The following result follows from [35, Theorem 4.11] (the present statement is closer in formulation to [44, Theorem 3.9], and has appeared with slightly sharper bounds in [23]): **Lemma 3.13**.: Assume the prevailing assumptions, with \(\mathcal{F}\) defined by (3.11), and let \(\eta>0\) be a small (fudge) constant. There exists a constant \(C=C(\mathcal{F},\mu,\eta)>0\), depending only on \(\mathcal{F}\), on the decay rate \(\alpha\) and on \(\eta\), such that for any PCA encoder \(\mathcal{E}_{\mathcal{Y}}:\mathcal{Y}\to\mathbb{R}^{d_{\mathcal{Y}}}\), and for every \(m\in\mathbb{N}\), there exists a ReLU network \(\psi^{\star}:\mathbb{R}^{m}\to\mathbb{R}^{d_{\mathcal{Y}}}\), \(\boldsymbol{z}\mapsto\psi^{\star}(z_{1},\dots,z_{m})\), with \[\sup_{\boldsymbol{z}\in[-1,1]^{\mathbb{N}}}\|\mathcal{E}_{\mathcal{Y}}\circ\mathcal{F}(\boldsymbol{z})-\psi^{\star}(z_{1},\dots,z_{m})\|_{\ell^{2}}\leq Cm^{-\alpha+\eta}, \tag{3.13}\] and such that \(\mathrm{size}(\psi^{\star})\leq Cm\left(\log(m)^{2}+d_{\mathcal{Y}}\right)\) and \(\mathrm{depth}(\psi^{\star})\leq C\log(m)^{2}\). \(\Diamond\) The main additional problem in the present context of PCA-Net is that the neural network \(\psi\) in the definition of a PCA-Net, \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\), acts on the _encoded input_ \(\mathcal{E}_{\mathcal{X}}(a)\), with encoder \(\mathcal{E}_{\mathcal{X}}\) obtained from empirical PCA. Therefore, we cannot directly access the coefficients \(z_{1},z_{2},\dots\) in the parametric expansion (3.8) of \(a\), and indeed there is no way to exactly recover these coefficients from the PCA encoding of \(a\). A careful discussion of this "compatibility issue" between the PCA encoding and the a priori expansion (3.8) is therefore necessary. This is the main issue addressed in Appendix E.3, where we show that such \(\psi\) can indeed be constructed and the additional error due to the incompatibility can be controlled. This then leads to the statement of Theorem 3.9. #### 3.4.2 Navier-Stokes equations In the previous section, we showed that the solution operator \(\Psi^{\dagger}\) of the Darcy flow equations can be efficiently approximated by PCA-Net. With some additional effort, this result could likely be extended to a more general class of so-called \((\mathbf{b},\epsilon)\)-holomorphic operators, e.g. [44, Section 2.1]. In particular, the relevant underlying class of operators is here characterized by _analytic regularity_; this assumption goes well beyond \(\mathcal{C}^{k}\)-regularity, and thereby PCA-Net can overcome the general curse of dimensionality of Theorem 3.3 for this restricted class of operators. However, many operators of interest, in particular in the context of advection-dominated problems such as hyperbolic PDEs, are not holomorphic in this sense. In the present section, we therefore discuss another mechanism by which polynomial complexity estimates can be obtained, even in the absence of holomorphy: namely, PCA-Net can efficiently emulate numerical methods.
Following ideas developed in [23, 19], and starting from a known (and convergent) numerical method, such an emulation result provides an upper bound on the required complexity of \(\Psi\), by showing that a specific choice of the neural network weights can emulate the given numerical method. The derivation of explicit estimates requires us to fix a particular numerical method for the analysis, but the intuition behind these emulation results is that \(\Psi\) can, in principle, efficiently emulate a very rich class of numerical methods. This includes methods with high convergence rates, which the neural network can explore during optimization; this expressive power of neural networks can thus provide a theoretical rationale for their efficiency within the PCA-Net methodology. In the following, we will focus on an emulation result for the Navier-Stokes equations, based on spectral methods. The underlying idea is similar to a recent emulation result for Fourier neural operators [19]; however, while Fourier neural operators very naturally (and by design) provide the necessary ingredients to build a spectral method, the PCA-Net methodology requires an extension of the results of [19], including a detailed discussion of compatibility of the PCA-projection with such emulation results. In addition, at a more technical level, it is here shown that the smoothness assumption on the activation function, which was _essential_ in all proofs of [19], can be substantially relaxed. Indeed, the results of the present work are based on the popular ReLU activation function \(\sigma(x)=\max(x,0)\), leading to comparable complexity estimates as in [19] differing only by log-factors. To illustrate the general approach, we consider the periodic Navier-Stokes equations in spatial dimension \(n=2\), over a fixed time interval \([0,T]\): \[\begin{cases}\partial_{t}u+u\cdot\nabla u+\nabla p=\nu\Delta u,\\ \operatorname{div}(u)=0,\,u(t=0)=\overline{u}.\end{cases} \tag{3.14}\] Here \(u:\mathbb{T}^{n}\times[0,T]\to\mathbb{R}^{n}\) is the flow vector field, \(p:\mathbb{T}^{n}\times[0,T]\to\mathbb{R}\) is the scalar pressure and \(\nu\geq 0\) is the viscosity (we allow \(\nu=0\), corresponding to the incompressible Euler equations). We have denoted by \(\overline{u}:\mathbb{T}^{n}\to\mathbb{R}^{n}\) the (divergence-free) initial data. Since the solution operator \(\Psi^{\dagger}\) associated with (3.14) is only known to be well-defined in two spatial dimensions, we focus on this case. We however point out that all results readily extend to the three-dimensional case, under additional (unproven) smoothness assumptions. _Remark 3.14_.: Classical well-posedness results for the Navier-Stokes (\(\nu>0\)) and Euler (\(\nu=0\)) equations, e.g. [29] and references therein, imply in the two-dimensional case \(n=2\), that if the random initial data \(\overline{u}\sim\mu\), is uniformly bounded \(\|\overline{u}\|_{H^{r}}\leq\overline{M}\) for \(r>n/2+1\), then the corresponding solution \(u(t)\) at a fixed later time \(t\in[0,T]\) satisfies a similar bound \(\|u(t)\|_{H^{r}}\leq M\), for some \(M=M(T,\overline{M})\). \(\Diamond\) The main result of the present section is Theorem 3.15, below: _Theorem 3.15_.: Consider the two-dimensional, periodic Navier-Stokes equations. Fix parameters \(M,T>0\), and integer \(r>n/2+1\). Assume that \(\mu\in\mathcal{P}(L^{2}(\mathbb{T}^{2};\mathbb{R}^{2}))\) is a probability measure on initial data \(\overline{u}\) of (3.14), such that \(\|\overline{u}\|_{H^{r}}\leq M\)\(\mu\)-almost surely. 
Let \(\Psi^{\dagger}\) be the forward solution operator of the Navier-Stokes equations, mapping initial data to the solution at the final time \(T\), \(\Psi^{\dagger}:u(0)\mapsto u(T)\). For any \(\epsilon,\delta>0\), there exists a PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\), such that \[\mathbb{E}_{u\sim\mu}\left[\|\Psi^{\dagger}(u)-\Psi(u)\|_{L^{2}_{x}}^{2} \right]\leq C\epsilon,\] with probability at least \(1-\delta\), and with a constant \(C=C(M,r,T)>0\). The PCA dimensions \(d_{\mathcal{X}}\) and \(d_{\mathcal{Y}}\) are bounded by \(d_{\mathcal{X}},d_{\mathcal{Y}}\leq C\epsilon^{-1/r}\), the requisite amount of data \(N\leq Cd_{\mathcal{X}}^{1+2r}\log(1/\delta)\) and the neural network \(\psi\) satisfies the complexity bounds \[\begin{cases}\quad\text{size}(\psi)\leq C\epsilon^{-1/r}\left(\epsilon^{-1/2} \log(\epsilon^{-1})^{2}+\epsilon^{-1/r}\right),\\ \text{depth}(\psi)\leq C\epsilon^{-1/2}\log(\epsilon^{-1})^{2}.\end{cases}\] \(\Diamond\) A sketch of the proof of Theorem 3.15 is provided below. _Remark 3.16_.: In fact, \(\psi\) in Theorem 3.15 can be written as an \(n_{T}\)-fold composition \[\psi=Q\circ\underbrace{\psi_{*}\circ\cdots\circ\psi_{*}}_{n_{T}\text{-fold}} \circ R,\] where the mappings \(R:\mathbb{R}^{d_{X}}\rightarrow\mathbb{R}^{d_{H}}\), \(Q:\mathbb{R}^{d_{H}}\rightarrow\mathbb{R}^{d_{\mathcal{Y}}}\) are linear (input and output layers), \(n_{T}\leq C\epsilon^{-1/2}\), and \(\psi_{*}:\mathbb{R}^{d_{H}}\rightarrow\mathbb{R}^{d_{H}}\) is a ReLU neural network with "hidden layer" dimension \(d_{H}\leq C\epsilon^{-1/r}\), such that \[\begin{cases}\quad\text{size}(\psi_{*})\leq C\epsilon^{-1/r}\log(\epsilon^{-1 })^{2},\\ \text{depth}(\psi_{*})\leq C\log(\epsilon^{-1})^{2}.\end{cases}\] \(\Diamond\) Sketch of proof of Theorem 3.15.The derivation of the quantitative error and complexity bounds of Theorem 3.15 is based on an _emulation result_; the idea is to show that for any choice of PCA bases, there exists a ReLU neural network \(\psi\) which can efficiently emulate a numerical method which is known to converge at a precisely quantifiable rate. The complete details of the required argument, including all proofs, will be provided in Appendix F. Here, we instead give a general overview of the main ideas, to aid intuition. As a first step towards this emulation result, we review a convergent spectral scheme in section F.1. Then, we construct a ReLU neural network emulation of this spectral scheme in section F.2, leading to Algorithm 1 and the neural network size estimates of Lemma F.3. The constructed neural network emulator defines a mapping from the truncated Fourier coefficients of the initial data \(u(0)\), to (an approximation of) the truncated Fourier coefficients of the solution \(u(T)\). The relevant set of truncated Fourier modes with cut-off parameter \(K\in\mathbb{N}\) is given by \[\mathcal{K}=\mathcal{K}_{K}:=\big{\{}k=(k_{1},k_{2})\in\mathbb{Z}^{2}\,\big{|} \,|k|_{\infty}:=\max(|k_{1}|,|k_{2}|)\leq K\big{\}}.\] The mapping on these truncated Fourier coefficients defines a mapping between two-finite dimensional Euclidean spaces, upon identifying \(\mathbb{C}^{\mathcal{K}}\simeq\mathbb{R}^{2\mathcal{K}}\). This mapping can be represented by an ordinary neural network. Given this construction, we then proceed to analyze the approximation error of the neural network emulation in section F.3, leading to the following proposition. 
This is our core emulation result, essentially stating the fact that ReLU neural networks can indeed efficiently emulate the underlying spectral scheme. **Proposition 3.17**.: Let \(M,\,r,\,T>0\) be given. For any \(\epsilon>0\), there exists \(K\in\mathbb{N}\), \(K\sim\epsilon^{-1/r}\), and a ReLU neural network \(\widehat{\psi}:\mathbb{C}^{\mathcal{K}}\to\mathbb{C}^{\mathcal{K}}\), \(\mathcal{K}=\mathcal{K}_{K}\), such that \[\|\widehat{\psi}(\widehat{u}(0))-\widehat{u}(T)\|_{\ell^{2}}\leq\epsilon, \qquad\text{whenever}\,\,\,\|u(0)\|_{H^{r}}\leq M,\] where \(\widehat{u}(t)=\{\widehat{u}_{k}(t)\}_{|k|_{\infty}\leq K}\) denotes the Fourier coefficients of the solution \(u(t)\) of (3.14), with initial data \(u(0)\). Furthermore, we have the following complexity estimates: \[\begin{cases}\quad\text{size}(\widehat{\psi})\leq C\epsilon^{-2/r-1}\log( \epsilon^{-1})^{2},\\ \text{depth}(\widehat{\psi})\leq C\epsilon^{-1}\log(\epsilon^{-1})^{2}.\end{cases}\] The constant \(C=C(M,T,r)>0\) depends only on \(M,T,r\), but is independent of \(\epsilon\). Furthermore, \(\widehat{\psi}\) can be written as an \(n_{T}\)-fold composition \(\widehat{\psi}=\widehat{\psi}_{*}\circ\cdots\circ\widehat{\psi}_{*}\), where \(\widehat{\psi}_{*}:\mathbb{C}^{\mathcal{K}}\to\mathbb{C}^{\mathcal{K}}\) is a ReLU neural network with \[\begin{cases}\quad\text{size}(\widehat{\psi}_{*})\leq C\epsilon^{-2/r}\log( \epsilon^{-1})^{2},\\ \text{depth}(\widehat{\psi}_{*})\leq C\log(\epsilon^{-1})^{2},\end{cases}\] and \(n_{T}\leq C\epsilon^{-1}\), corresponding to the number of time-steps of the underlying scheme. \(\Diamond\) In fact, Proposition 3.17 is the two-dimensional case of a general \(d\)-dimensional result derived in the appendix (cf. Proposition F.7). Given this neural network emulation result, the remaining issue is that the empirical PCA encoder \(\mathcal{E}_{\mathcal{X}}\) and decoder \(\mathcal{D}_{\mathcal{Y}}\) do _not_ act on Fourier coefficients. Hence, additional work is necessary to suitably adapt the construction of the neural network in Proposition 3.17, ultimately resulting in an efficient PCA-Net approximation \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\) in Section F.4. We summarize the result in the following lemma: **Lemma 3.18**.: For any \(\epsilon>0\), there exists a PCA-Net \(\Psi=\mathcal{D}_{\mathcal{Y}}\circ\psi\circ\mathcal{E}_{\mathcal{X}}\), such that \[\mathbb{E}_{u\sim\mu}\left[\|\Psi(u)-\Psi^{\dagger}(u)\|_{L^{2}}^{2}\right]^{1/2}\leq C\epsilon+C\mathbb{E}_{u\sim\mu}\left[\|u-\mathcal{ D}_{\mathcal{X}}\circ\mathcal{E}_{\mathcal{X}}(u)\|_{L^{2}}^{2}\right]^{1/2}\\ +\mathbb{E}_{v\sim\Psi^{\dagger}_{\#}\mu}\left[\|v-\mathcal{D}_{ \mathcal{Y}}\circ\mathcal{E}_{\mathcal{Y}}(v)\|_{L^{2}}^{2}\right]^{1/2},\] where \(\psi\) is a neural network of size \[\text{size}(\psi)\leq C\epsilon^{-2/r}\left(\epsilon^{-1}\log(\epsilon^{-1}) ^{2}+(d_{\mathcal{X}}+d_{\mathcal{Y}})\right),\quad\text{depth}(\psi)\leq C \epsilon^{-1}\log(\epsilon^{-1}),\] and \(C=C(M,r,T)>0\) is a constant independent of \(\epsilon,d_{\mathcal{X}},d_{\mathcal{Y}}\). \(\Diamond\) Finally, the smoothness bound on the PCA eigenvalues of Proposition 3.6 can be used to estimate the PCA projection errors with high probability, see Lemma F.9 for details. Combining the above lemma with the estimates on the PCA projection errors results in Theorem 3.15. 
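For intuition, the sketch below shows one explicit time step of a standard pseudo-spectral scheme for (3.14) in vorticity form on the torus \([0,2\pi)^{2}\), with the Fourier cut-off \(|k|_{\infty}\leq K\) used above. This is only a generic instance of the kind of scheme being emulated; the precise scheme analyzed in Appendix F, as well as its time discretization, may differ (forward Euler is used here purely for brevity).

```python
# One explicit step of a generic 2D pseudo-spectral Navier-Stokes scheme in
# vorticity form: d(omega)/dt + u . grad(omega) = nu * Laplace(omega), where
# the velocity u is recovered from the stream function psi with -Lap(psi) = omega.
# Grid size, time step and the forward-Euler update are illustrative choices.
import numpy as np

def ns_step(omega_hat, K, nu, dt):
    """Advance the Fourier coefficients of the vorticity omega by dt."""
    n = omega_hat.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers on [0, 2*pi)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = kx**2 + ky**2
    k2 = lap.copy()
    k2[0, 0] = 1.0                                # avoid division by zero (mean mode)
    psi_hat = omega_hat / k2                      # -Laplace(psi) = omega
    u = np.fft.ifft2(1j * ky * psi_hat).real      # u =  d(psi)/dy
    v = np.fft.ifft2(-1j * kx * psi_hat).real     # v = -d(psi)/dx
    wx = np.fft.ifft2(1j * kx * omega_hat).real   # d(omega)/dx
    wy = np.fft.ifft2(1j * ky * omega_hat).real   # d(omega)/dy
    nonlin_hat = np.fft.fft2(u * wx + v * wy)     # u . grad(omega), pseudo-spectral
    nonlin_hat[np.maximum(np.abs(kx), np.abs(ky)) > K] = 0.0  # cut-off |k|_inf <= K
    return omega_hat + dt * (-nonlin_hat - nu * lap * omega_hat)
```

The mapping from \(\widehat{u}(0)\) to \(\widehat{u}(T)\) realized by iterating such a step is exactly the type of finite-dimensional map that, per Proposition 3.17, a ReLU network \(\widehat{\psi}\) of moderate size can emulate.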
## 4 Conclusion PCA-Net is a data-driven operator learning methodology introduced in [3, 16], which (i) uses PCA to reduce the dimensions of the input and output spaces and (ii) uses neural networks to approximate a map between the resulting finite-dimensional latent spaces. The main aim of the present work is to develop relevant approximation theory for PCA-Net. Our first main result is a novel universal approximation theorem for PCA-Net, Theorem 3.1. Compared to previous work [3], this theorem establishes universality under significantly relaxed conditions on the distribution of the data-generating measure and the underlying operator \(\Psi^{\dagger}\). The present assumptions are in fact minimal conditions to ensure that PCA is well-defined on the input and output spaces. The next main contribution of the present work is a detailed discussion of two potential obstacles to efficient operator learning with PCA-Net in Section 3.2; the first obstacle relates to the complexity of the output distribution. The second obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, and gives rigorous meaning to the notion of a curse of dimensionality; Theorem 3.3 shows that it is impossible to derive algebraic complexity bounds when considering general classes of operators, such as the class of all Lipschitz- or even \(\mathcal{C}^{k}\)-continuous operators. Hence, we conclude that at this level of generality, the curse of dimensionality is inevitable. Given this negative result demonstrating the curse of dimensionality over general classes of operators, we posit that a central challenge in the approximation theory of PCA-Net (and other operator learning methodologies) is to identify and characterize operators, and classes of such operators, which allow for efficient approximation by PCA-Net. To obtain a first insight into this problem for PCA-Net, we focus our attention on two prototypical PDE operators of interest, arising from the Darcy flow and Navier-Stokes equations. In both cases, we show that PCA-Net can overcome the general curse of dimensionality, establishing algebraic error and complexity estimates in Theorems 3.9 and 3.15. This demonstrates that these operators belong to a restricted class which can be efficiently approximated by PCA-Net. In the case of Darcy flow, our proof relies on the analytic regularity (holomorphy) of the underlying operator. For the Navier-Stokes equations, we rely on an emulation result, showing that PCA-Net can emulate a known spectral scheme to efficiently approximate the underlying operator. It is an open challenge for future work to improve our understanding of the relevant class of operators for which operator learning is feasible, and to derive a useful mathematical characterization of relevant features that enable efficient approximation by PCA-Net and other operator learning architectures. Future work could also aim to derive more precise lower bounds which characterize the curse of dimensionality. In this context, we mention the forthcoming article [25], where similar ideas are refined and considerably generalized, and _exponential_ lower bounds are derived for a more general class of neural operators, albeit with respect to the supremum norm; those results do not translate to any lower bounds in the \(L^{2}_{\mu}\)-norm considered here. 
In a different research direction, we point out that the present work has focused only on an approximation theoretic point of view, leaving out important questions related to optimization and generalization errors, given a finite amount of data. A significant challenge for future work is to address the practical training of the underlying neural network, and in particular, to determine bounds on the amount of training data that is necessary to achieve a desired accuracy. We leave these general research directions as interesting avenues for future work. ## Acknowledgement The author would like to thank Siddhartha Mishra for helpful discussions and guidance when developing many of the ideas that have gone into this work. This work has been supported by Postdoc.Mobility grant P500PT-206737 from the Swiss National Science Foundation.
2301.07463
Temporal Perceiving Video-Language Pre-training
Video-Language Pre-training models have recently significantly improved various multi-modal downstream tasks. Previous dominant works mainly adopt contrastive learning to achieve global feature alignment across modalities. However, the local associations between videos and texts are not modeled, restricting the pre-training models' generality, especially for tasks requiring the temporal video boundary for certain query texts. This work introduces a novel text-video localization pre-text task to enable fine-grained temporal and semantic alignment such that the trained model can accurately perceive temporal boundaries in videos given the text description. Specifically, text-video localization consists of moment retrieval, which predicts start and end boundaries in videos given the text description, and text localization which matches the subset of texts with the video features. To produce temporal boundaries, frame features in several videos are manually merged into a long video sequence that interacts with a text sequence. With the localization task, our method connects the fine-grained frame representations with the word representations and implicitly distinguishes representations of different instances in the single modality. Notably, comprehensive experimental results show that our method significantly improves the state-of-the-art performance on various benchmarks, covering text-to-video retrieval, video question answering, video captioning, temporal action localization and temporal moment retrieval. The code will be released soon.
Fan Ma, Xiaojie Jin, Heng Wang, Jingjia Huang, Linchao Zhu, Jiashi Feng, Yi Yang
2023-01-18T12:15:47Z
http://arxiv.org/abs/2301.07463v1
# Temporal Perceiving Video-Language Pre-training ###### Abstract Video-Language Pre-training models have recently significantly improved various multi-modal downstream tasks. Previous dominant works mainly adopt contrastive learning to achieve _global_ feature alignment across modalities. However, the local associations between videos and texts are not modeled, restricting the pre-training models' generality, especially for tasks requiring the temporal video boundary for certain query texts. This work introduces a novel text-video localization pre-text task to enable fine-grained temporal and semantic alignment such that the trained model can accurately perceive temporal boundaries in videos given the text description. Specifically, text-video localization consists of moment retrieval, which predicts start and end boundaries in videos given the text description, and text localization which matches the subset of texts with the video features. To produce temporal boundaries, frame features in several videos are manually merged into a long video sequence that interacts with a text sequence. With the localization task, our method connects the fine-grained frame representations with the word representations and implicitly distinguishes representations of different instances in the single modality. Notably, comprehensive experimental results show that our method significantly improves the state-of-the-art performance on various benchmarks, covering text-to-video retrieval, video question answering, video captioning, temporal action localization and temporal moment retrieval. Codes will be released. ## 1 Introduction Video-language pre-training that learns generic representations from large-scale multi-modality data has been popular in the past two years [7, 11, 13, 36, 27, 33]. The pre-trained models demonstrate excellent multi-task capabilities, covering video question answering (QA) [35, 37], text-to-video retrieval [23, 3], and video captioning [20, 35], under various settings of zero-shot, few-shot or transfer learning [7, 33]. By aligning representations between large-scale video-text pairs, the pre-trained video-language models have achieved encouraging performance on various applications [36, 27, 13]. Contrastive learning is widely used in video-language pre-training by globally aligning the video with text representations [27, 16]. However, the video often contains frames irrelevant to the text, and global contrastive learning overlooks the fine-grained alignment between frames and texts. For instance, a party video with caption "the person is cutting cake" may contain frames where children run around the table. This would not only limit the matching performance between videos and texts, but also lead the model to fail to learn discriminative visual features for temporal localization. The fine-grained, temporally aware alignment between relevant texts and video frames is thus essential to learn generic multi-modal representations. A few works [11, 12, 33] recently adopted masked language modeling (MLM) to enhance the fine-grained interactions by predicting the masked element with unmasked visual and text features. Figure 1: **Temporal Perceiving Video-Language (TemPVL) pre-training. The left part is the moment localization given a sentence query, and the right part is the text localization task for the video query. The frame and text features are represented with the ellipse and rectangle, respectively. The paired video and text features are marked with the same color, and the [CLS] token in every sentence is red outlined.**
Albeit MLM works well on multiple reasoning tasks such as video QA and captioning, the fine-grained alignment along the temporal dimension is not guaranteed. LocVTP [7] manages to form the fine-grained contrastive loss by splitting the video into several clips and extracting phrases from sentences, but the objective is constructed on the pseudo alignment since the temporal annotations are not available. In practice, the temporal annotations are either unavailable in current short video-text pairs (WebVID [4]), or heavily noisy in long videos (HowTo100M [25]). Reliable video-text pairs with accurate annotations are thus still missing for fine-grained temporal alignment modeling. To augment the temporal modeling ability of video-language models for better perceiving fine-grained interaction between videos and texts, we introduce a novel text-video localization pre-training task where temporal annotations are no longer required. The proposed task consists of two objectives as shown in Fig. 1, the moment localization with language and the text localization with video. Specifically, we follow the mainstream settings [12, 13] to adopt dual encoders for encoding video and language inputs separately and use a multi-modal encoder to fuse both visual and text features. For the moment localization, the frame features of several videos are merged into a single long video sequence where the temporal position of each video can be inferred from the merging strategy and used for pre-training the model. The merged frame features interact with text tokens in the multi-modal encoder to enhance alignment between the fine-grained frame and text features. Similarly, we merge multiple text tokens for text localization. The text positions matched to a video are predicted to correlate frame features with all text features. With text-video localization, fine-grained frame-word alignment is well established and temporal context modeling is implicitly encoded. Extensive experimental results on several downstream tasks also demonstrate the superiority of our proposed text-video localization pre-training task. Our method improves zero-shot text-to-video retrieval performance by 3.3\(\%\) on DiDeMo and achieves a 2.6\(\%\) performance gain on the moment retrieval task. In summary, our contributions are threefold. * We present a novel video-text localization task for video-language pre-training where temporal modeling across multi-modalities is well designed and fine-grained interaction between visual and language signals is encouraged. * With the text-video localization pre-training task, the generalization capability of the pre-training model is consistently improved across different tasks and backbones. * Comprehensive experiments on five downstream tasks demonstrate the superiority of our method. In addition, off-the-shelf models in temporal action localization and moment retrieval tasks can further boost performance by using our extracted video features. ## 2 Related Work ### Video-Language Pre-training Large-scale multi-modal data has recently been leveraged to build pre-trained video-language (VidL) models. The pre-trained models exhibit surprising generalization capacities when fine-tuned on a series of popular downstream video-language tasks, including text-to-video retrieval [3, 37], video question answering [35, 37], and video captioning [20, 37]. 
Contrastive learning is widely used in video-language pre-training to project videos and texts into the identical feature space [24, 36]. Contrastive learning only coarsely aligns the representations between videos and text descriptions. To enable multi-modal interactions, several models, such as VideoBERT [30], HERO [18], ActBERT [43], ClipBERT [16], MERLOT [40], SwinBert [20], VIOLET [11], All-in-One [33], adopt popular masked language modeling (MLM) also to predict masked signals. However, the fine-grained alignment is still not achieved in these works, limiting the model generalization capacity. LocVTP [7] manages to build fine-grained alignment by introducing the clip-phrase contrastive objective. However, the objective is based on pseudo supervision where the matching between clips and phrases is not granted. In this work, we introduce a novel text-video localization task to encourage fine-grained video and text feature alignment without any annotations, achieving significant improvement on multiple video-text downstream tasks. ### Video Temporal Modeling Temporal modeling is a critical yet challenging topic in video understanding, containing action recognition and localization tasks. Prominent ideas including sparse sampling [34, 10], spatial-temporal operations [32, 6] are introduced for temporal modeling in both convolution and Transformer architectures [32, 6]. To enhance the temporal modeling for obtaining better video representations, TSP [1] trains video encoders to be temporally sensitive by predicting clips inside or outside the action where temporal annotations are required in training datasets. All-in-One [33] manages to enhance temporal interaction by rolling the video features in temporal dimension. LocVTP [7] explicitly models the clip-word matching from the video language pairs based on pseudo supervision. In this work, we construct a long video sequence in the training batch and feed them to the multi-modal encoder, where frame features interact with temporal contexts and text features to predict accurate temporal boundaries. ## 3 Method ### Preliminary In this section, we present TemPVL, a new text-video localization pre-text task for pre-training. We follow the prominent architecture [13, 36] that uses two encoders for extracting video and text features separately and one multi-modal encoder for both visual and text features. Given the video input, the visual encoder \(E_{v}\) outputs the video features \(\mathbf{f}_{v}\in\mathbb{R}^{C_{v}\times T\times h\times w}\) where \(C_{v}\) denotes the feature channel, and \(h\) and \(w\) denote the down-scaled spatial resolution. We adopt BERT-base architecture for both the multi-modal and text encoders. The input text is first tokenized into a token sequence, where two special tokens [CLS] and [SEP] are inserted at the beginning and the end respectively. The text encoder \(E_{s}\) outputs the token features \(\{\mathbf{f}_{w}^{i}\in\mathbb{R}^{C_{w}}\}_{i=0}^{L+1}\) where \(C_{w}\) is the feature channel and there are \(L\) tokens in each sentence. To fuse both the visual and text features, the video feature is first pooled along the spatial dimension into frame tokens. All the frame and text tokens are then projected into the common embedding space to form the concatenated multi-modal input \(\{\mathbf{f}_{m}^{i}\in\mathbb{R}^{C}\}_{i=0}^{T+L+1}\), where the first \(T\) tokens come from the video frames. 
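For concreteness, the following PyTorch sketch shows one plausible way to assemble the multi-modal input just described. The module and variable names are ours, and the two projection layers are an assumption consistent with, but not taken from, the authors' implementation; shapes follow the notation in the text.

```python
# A minimal sketch of forming the multi-modal input: spatial mean pooling of
# video features, linear projection of both modalities into a common width C,
# and concatenation of frame and word tokens.
import torch
import torch.nn as nn

class MultiModalInput(nn.Module):
    def __init__(self, c_v, c_w, c):
        super().__init__()
        self.proj_v = nn.Linear(c_v, c)   # frame tokens -> common space
        self.proj_w = nn.Linear(c_w, c)   # word tokens  -> common space

    def forward(self, f_v, f_w):
        # f_v: (B, C_v, T, h, w) video features; f_w: (B, L+2, C_w) word features
        frames = f_v.mean(dim=(-2, -1)).transpose(1, 2)  # spatial pooling -> (B, T, C_v)
        tokens = torch.cat([self.proj_v(frames), self.proj_w(f_w)], dim=1)
        return tokens                     # (B, T+L+2, C): frames first, then words
```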
Every frame token interacts with all the frame and word tokens in the multi-modal encoder to learn unified representations for both visual and language inputs. Previous pre-training methods use a contrastive loss on the paired video and text embeddings to align the cross-modal representations. However, the contrastive learning only encourages global video-text matching, lacking correspondence between the individual frames and words. On the one hand, the temporal modeling on video frames is not well established with coarse text-video contrastive learning. On the other hand, the representations are not well aligned, limiting the performance on downstream extensions, such as text-to-video retrieval. To achieve the fine-grained multi-modal alignment, we introduce a novel text-video localization pre-training task. ### Text-Video Localization In the multi-modal encoder, the frame tokens interact with text tokens to update representations. Previous methods adopt masked language modeling or video-text matching to enhance the interaction. However, masked language modeling mainly benefits the reasoning tasks, such as video question answering and video captioning, failing to supervise the fine-grained alignment. The video-text matching uses positive and negative video-text pairs and does the binary classification on the text [CLS] token, which is also a coarse form of alignment. In this section, we present text-video localization to enable fine-grained alignment between different modalities. Specifically, our TemPVL predicts the temporal boundary from video tokens for the text description, and produces the localization from the word tokens given the visual input. For most video-text pre-training datasets, short video-text pairs are common and temporal annotations indicating which clips in videos are aligned with text descriptions are usually unavailable. To enable the temporal alignment, we form the long video sequences by merging frame features of different videos in the training batch and get the long paragraph descriptions by merging word tokens of different sentences. Next, we present each localization task in detail. #### 3.2.1 Moment Retrieval with Language The moment retrieval is to temporally localize the clip in the video related to the text description as shown in Fig. 2. During the pre-training, the short videos in a training batch are used to constitute a long video sequence. For each sentence, the start and end frames of the moment that match the texts are predicted. **Video Merging.** As each video contains multiple frames, we constitute a long video sequence by combining frames in different videos. Instead of directly merging video frames, we first extract all frame features via the visual encoder and combine the videos via concatenating the frame features. Figure 2: **Moment retrieval with different video merging strategies**. The frame tokens of videos are first extracted through the visual encoder \(E_{v}\). A long frame sequence is then obtained by merging all frame tokens. The frame tokens are concatenated with the text tokens to form the input for the multi-modal encoder. The multi-modal encoder encourages the interactions between all frame and text features by predicting the start and end position of the clip that aligns with the text description. Given the video feature, we use spatial mean pooling followed by a linear projection layer to get frame tokens \(\mathbf{t}_{v}\in\mathbb{R}^{C\times T}\). 
The \(T\) is usually small as pre-training models use a few frames in each video. To construct a long sequence, we adopt two ways to merge videos as shown in Fig. 2. The first way is to concatenate frame features of different videos by shuffling video segments. Suppose we have \(B\) videos in a training batch; a sequence with \(BT\) frame tokens \(\mathbf{t}_{mv}=[\mathbf{t}_{v_{1}},...,\mathbf{t}_{v_{B}}]\in\mathbb{R}^{BT \times C}\) is formed, where the temporal order of frames in each video is retained and the video order is randomly permuted. The merged frame tokens are added with frame position embeddings and then concatenated with word tokens in one sentence to constitute the multi-modal input tokens \(\mathbf{t}_{m}\in\mathbb{R}^{C\times(BT+L+2)}\). We also use a frame sampling strategy to generate long video sequences. Specifically, we define the total number of frame tokens as \(K\) and the number of positive frame tokens as \(K_{p}\). We sample \(K_{p}\) frame tokens from the video that correlates to the text and \(K-K_{p}\) background tokens from the rest of the videos. The positive frame tokens are randomly inserted among the background tokens. For both merging strategies, the temporal boundary for the text can be easily inferred and denoted as \((st_{v},ed_{v})\). **Boundary Prediction.** In the multi-modal encoder, all tokens interact with each other so the frame tokens can encode text information to improve the representation and vice versa. To strengthen the interactions between different modalities, we introduce a localization task where the temporal boundaries for the merged text are required to be predicted. We apply two linear layers with a norm layer on the output frame tokens to produce localization predictions \(\mathbf{r}_{vl}\in\mathbb{R}^{BT\times 2}\). The localization objective is written as: \[\mathcal{L}_{vl}=-\log\mathrm{softmax}(\mathbf{r}_{vl}^{0})^{st_{v}}-\log \mathrm{softmax}(\mathbf{r}_{vl}^{1})^{ed_{v}}, \tag{1}\] where \(\mathbf{r}_{vl}^{0}\in\mathbb{R}^{BT}\) is the start logit prediction and \(\mathbf{r}_{vl}^{1}\) is the end logit prediction. We use the softmax operation to get the start and end probability of all frame tokens for the text. The \(st_{v}\) and \(ed_{v}\) are the ground-truth start and end indices. Different from the regression loss in many temporal action localization methods, the classification loss is used in our moment localization task to supervise the learning process. As there is only one matched video in the merged frame tokens, the classification loss is simple yet effective to encourage the coherence between visual and text features. By converging the predictions with the start and end positions, the frame features absorb temporal contexts and align with the text representation. 
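A minimal sketch of the shuffling-based video merging and the boundary objective of Eq. (1) is given below. The two-layer head with a norm layer follows the description above; exact layer choices, position embeddings and batching details are our assumptions rather than the authors' code.

```python
# Sketch: merge per-video frame tokens into one long sequence, predict start/end
# logits over all B*T positions, and apply cross-entropy at the ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryHead(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(c, c), nn.LayerNorm(c), nn.Linear(c, 2))

    def forward(self, frame_tokens):               # (B*T, C) fused frame tokens
        return self.net(frame_tokens)              # (B*T, 2) start/end logits

def merge_videos(t_v):
    """Shuffling strategy: permute video order, keep frame order within videos.
    If video j lands at slot p in the permutation, its ground-truth boundary
    for a matching text is (p*T, p*T + T - 1)."""
    order = torch.randperm(t_v.shape[0])           # t_v: (B, T, C)
    return t_v[order].reshape(-1, t_v.shape[-1]), order

def moment_loss(frame_tokens, st, ed, head):
    logits = head(frame_tokens)                    # (B*T, 2)
    loss_st = F.cross_entropy(logits[:, 0].unsqueeze(0), torch.tensor([st]))
    loss_ed = F.cross_entropy(logits[:, 1].unsqueeze(0), torch.tensor([ed]))
    return loss_st + loss_ed                       # Eq. (1)
```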
#### 3.2.2 Text Localization with Video The text localization is to localize the boundary in text tokens for a video clip. The text could contain both relevant and irrelevant parts for the video clip. Similar to moment retrieval, we merge word tokens from several training sentences and concatenate them with frame tokens in one video. **Text Merging.** As shown in Fig. 3, we can also use the shuffling strategy to combine different sentence tokens. All [CLS] and [SEP] tokens in every sentence are also merged in the shuffling strategy. The sampling merging in moment retrieval is not feasible for the text tokens, as the same words or phrases could be contained in different sentences and the semantic information is changed if only a few words are sampled. In addition, we propose to merge texts by only combining the [CLS] token in every sentence. For one video and \(B\) merged text tokens, we formulate the multi-modal input tokens as \(\mathbf{t}_{m}\in\mathbb{R}^{C\times(T+B)}\). **Classification.** We adopt the same start and end prediction loss when the shuffling strategy is adopted on the text tokens. For the second merging strategy, we only predict the matched [CLS] token index since only one token is used for each sentence. \[\mathcal{L}_{tl}=-\log\mathrm{softmax}(\mathbf{r}_{tl})^{m_{t}}, \tag{2}\] where \(\mathbf{r}_{tl}\in\mathbb{R}^{B}\) is the logit prediction and \(m_{t}\) is the matched text index for the video. ### Pre-training Objectives The text-video contrastive learning coarsely projects video and text into the common feature space, while the text-video localization enhances fine-grained visual-language interaction for video-text alignment. Together with masked language modeling, our pre-training objective is formed via: \[\mathcal{L}=\mathcal{L}_{vtc}+\alpha\mathcal{L}_{mlm}+\beta\mathcal{L}_{vtl}, \tag{3}\] where \(\mathcal{L}_{vtl}=\mathcal{L}_{vl}+\mathcal{L}_{tl}\) denotes the text-video localization task, and \(\mathcal{L}_{vtc}\) and \(\mathcal{L}_{mlm}\) denote contrastive learning and masked language modeling. The \(\alpha\) and \(\beta\) are hyper-parameters to balance the pre-training tasks, both set to 1 in our experiments. Figure 3: **Text localization for video with different merging strategies**. We merge word tokens in two ways to form a long text sequence. All the merged word tokens are concatenated with one video's tokens to feed the multi-modal encoder. The text localization is to predict the matched position from the output text tokens for the fused frame tokens. ## 4 Experiments ### Datasets and Downstream Tasks **Pre-training datasets.** Following recent work [13], we jointly pre-train our TemPVL on WebVid [5] with 2.5M video-text pairs and Google Conceptual Captions (CC3M) [29] with about 3M image-text pairs. A static image is treated as a video with a single frame during pre-training. **Downstream tasks.** We evaluate our method on five popular downstream tasks. (1) **Text-to-video retrieval** on four datasets: MSR-VTT [37], DiDeMo [3], MSVD [35] and LSMDC [23]. This task evaluates how well the text representations align with the video features. (2) **Video question answering** on MSR-VTT [37] and MSVD [35]. The open-ended setting is adopted in QA to evaluate the reasoning ability of the pre-training models. (3) **Video captioning** that requires understanding the action and event in the video, on MSR-VTT [37] and MSVD [35]. (4) **Temporal action localization** on THUMOS [14]. Perceiving temporal context and discriminative frame features are significant in this task. (5) **Video moment retrieval**, similar to temporal action localization but where the text query is engaged to localize temporal boundaries in videos. We conduct experiments on DiDeMo [3] to test the pre-trained models. ### Implementation Details We adopt VideoSwin [21] as the video encoder with pre-trained weights on the Kinetics-400 dataset [15], and a pre-trained BERT-base model as the text encoder. The multi-modal encoder is initialized from the last three layers of the pre-trained BERT-base model. All modules are end-to-end tuned during both pre-training and fine-tuning. We pre-train our model for 40 epochs, using a batch size of 2048 on 64 NVIDIA A100 GPUs. 
We use the AdamW [22] optimizer with a weight decay of 0.005 and betas (0.9, 0.98). The learning rate is first set to 5e-5 and then decays by 10 times following a cosine annealing decay schedule. All video frames are resized to 224\(\times\)224, and 8 frames are randomly sampled in a video while the temporal order is preserved. During pre-training, each word in the sentence is randomly masked with 15% probability to enable masked language modeling in both normal and causal attentions. For the retrieval task, we only fine-tune the uni-modal encoders with the contrastive learning. For both video QA and video captioning tasks, we adopt the causal mask in both text and multi-modal encoders to generate both answers and descriptions. For both temporal action localization and moment retrieval with language tasks, we use the pre-trained visual encoder to extract video features first and adopt the off-the-shelf algorithms to train corresponding models with our extracted features. ### Comparison to Prior Arts #### 4.3.1 Text-to-Video Retrieval Tab. 1 illustrates the text-to-video retrieval results on the MSR-VTT [37], DiDeMo [3], MSVD [35] and LSMDC [23] datasets under zero-shot and fine-tuning settings. Clover [13] and LocVTP [7] are also pre-trained on WebVid [5]+CC3M [29], where Clover uses the ranking loss to sort the alignment between different modalities, and LocVTP introduces fine-grained contrastive learning on the pseudo frame-phrase matching predictions. Our proposed method significantly outperforms the previous approaches on all the datasets. Notably, the performance improvement under zero-shot evaluation demonstrates the stronger generalization ability of our method. Our TemPVL achieves the highest recall on four datasets under the zero-shot setting. In detail, our method outperforms Clover [13] by 1.4\(\%\) on MSR-VTT, 3.3\(\%\) on DiDeMo, 1.1\(\%\) on LSMDC in Recall@1. Moreover, our proposed method surpasses VIOLET by a large margin on both MSR-VTT and DiDeMo, even though VIOLET is pre-trained with more text-video pairs (_i.e.,_ WebVid [5]+CC3M [29]+YTT180M [41]). When fine-tuned on the four datasets, TemPVL also shows superiority over the compared methods. Our method outperforms the compared methods across all the metrics on MSR-VTT and DiDeMo with a clear improvement. Compared to videos in MSR-VTT, videos in DiDeMo contain more frames and diverse scenes. The noticeable improvement on DiDeMo also suggests that our method better matches long videos with texts. Compared to LocVTP [7] that leverages pseudo fine-grained alignment information, our model with the text-video localization pre-training task achieves much higher results under both zero-shot and fine-tune settings. #### 4.3.2 Video Question Answering Tab. 2 shows the results on two open-ended video question answering datasets. We compare our method with several methods, including JustAsk [39], ALPRO [17], VIOLET [11], All-in-one [33], Clover [13] and Lavender [19]. Different from All-in-one and Clover that use a classification loss for the open-ended QA, we use a generative approach to produce answers, so the answer categories are not limited. Our method outperforms all other methods on the MSR-VTT dataset and surpasses Lavender by 0.4\(\%\). #### 4.3.3 Video Captioning We present the comparison on the video captioning task in Tab. 3. For this task, the causal mask is used in both text and multi-modal encoders and 60\(\%\) of words are masked during fine-tuning. 
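To illustrate the causal masking just mentioned, the sketch below builds an attention mask for a sequence of \(T\) frame tokens followed by \(L\) text tokens. The specific layout (text attends causally to text and fully to frames; frames do not attend to text) is a common choice for caption generation and is our assumption, not a detail confirmed by the paper.

```python
# A sketch of a causal attention mask for generation over [frames | text].
import torch

def caption_attention_mask(T, L):
    n = T + L
    mask = torch.zeros(n, n, dtype=torch.bool)                   # True = blocked
    mask[T:, T:] = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
    mask[:T, T:] = True                                          # frames cannot peek at text
    return mask  # usable as attn_mask in torch.nn.MultiheadAttention
```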
We compare our method with four pre-training models, including DECEMBERT [31], SwinBERT [20], MV-GPT [28] and Lavender [19]. The results show that our model achieves the highest CIDEr score on both MSR-VTT and MSVD datasets, demonstrating the capability of the proposed TemPVL. #### 4.3.4 Temporal Action Localization We go a step further to evaluate the effectiveness of our proposed method on temporal action localization task. We follow the previous setting [1] to only extract video features after pre-training and use the GTAD [38] to train the temporal action localization model. The representative THUMOS14 [14] is selected as the test data for its high ratio between the background and foreground. From Tab. 4, our method achieves 44.5\(\%\) in [email protected], a 1.3\(\%\) gain over the TSP [1]. By using our extracted video features, the GTAD observes significant improvement. #### 4.3.5 Video Moment Retrieval We also evaluate our method on the moment retrieval task where the temporal boundary in a video is predicted for the text description. We follow LocVTP [7] to use the 2D-TAN [42] as the baseline model for comparison. Tab. 5 provides the video moment retrieval results on DiDeMo. Although the same pre-training datasets are used, our method outperforms LocVTP by a large margin (4.3) on the [email protected]. It shows that temporal boundaries are more accurate in the top retrieval results with our extracted features. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{MSR-VTT} & \multicolumn{3}{c|}{DiDeMo} & \multicolumn{3}{c|}{MSVD} & \multicolumn{3}{c}{LSMDC} \\ & Recall@1 & Recall@5 & Recall@10 & Recall@1 & Recall@5 & Recall@10 & Recall@1 & Recall@5 & Recall@10 & Recall@1 & Recall@5 & Recall@10 \\ \hline \multicolumn{10}{c}{Zero-shot} \\ \hline NoiseEst [2] & 8.0 & 21.3 & 29.3 & - & - & - & 13.7 & 35.7 & 47.7 & 4.2 & 11.6 & 17.1 \\ SupportSet [26] & 12.7 & 27.5 & 36.2 & - & - & - & 21.4 & 46.2 & 57.7 & - & - & - \\ VideoCLIP [36] & 10.4 & 22.2 & 30.0 & 16.6 & 46.9 & - & - & - & - & - & - \\ Frozen [5] & 18.7 & 39.5 & 51.6 & 21.1 & 46.0 & 56.2 & 33.7 & 64.7 & 76.3 & 9.3 & 22.0 & 30.1 \\ Clover [13] & 25.8 & 49.6 & 60.1 & 28.0 & 53.5 & 65.1 & - & - & 13.8 & 28.1 & 38.3 \\ VIOLET [11] & 25.9 & 49.5 & 59.7 & 23.5 & 49.8 & 59.8 & - & - & - & - & - & - \\ LocVTP [7] & 22.1 & 48.0 & 55.3 & - & - & - & - & - & - & - & - & - \\ \hline \multicolumn{10}{c}{Fine-tune} \\ \hline NoiseEst [2] & 17.4 & 41.6 & 53.6 & - & - & - & 20.3 & 49.0 & 63.3 & 6.4 & 19.8 & 28.4 \\ SupportSet [26] & 30.1 & 58.5 & 69.3 & - & - & - & 28.4 & 60.0 & 72.9 & - & - & - \\ VideoCLIP [36] & 30.9 & 55.4 & 66.8 & - & - & - & - & - & - & - & - \\ Frozen [5] & 31.0 & 59.5 & 70.5 & 31.0 & 59.8 & 72.4 & 45.6 & 79.8 & **88.2** & 15.0 & 30.8 & 39.8 \\ Clover [13] & 38.6 & 67.4 & 76.2 & 45.1 & 74.3 & 82.2 & & 22.7 & 42.0 & 52.6 \\ Lavender [19] & 37.8 & 63.8 & 75.0 & 47.4 & 74.7 & 82.4 & 46.3 & 76.9 & 86.0 & 22.2 & **43.8** & **53.5** \\ VIOLET [11] & 34.5 & 63.0 & 73.4 & 32.6 & 62.8 & 74.7 & - & - & - & 16.1 & 36.6 & 41.2 \\ LocVTP [7] & 36.5 & 64.3 & 76.8 & - & - & - & - & - & - & - & - \\ All-in-one [33] & 37.9 & 68.1 & 77.1 & 32.7 & 61.4 & 73.5 & - & - & - & - & - & - \\ \hline TemPVL (Ours) & **41.0** & **68.2** & **77.7** & **48.6** & **76.1** & **85.4** & **47.8** & **79.7** & 87.2 & **23.2** & 42.2 & 51.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Text-to-video retrieval comparison on MSR-VTT, DiDeMo, MSVD, and LSMDC under the zero-shot and fine-tune 
setups. Higher Recall@k indicate better performance. The best performance is masked in bold under each setting. \begin{table} \begin{tabular}{l|c|c} \hline \hline \multirow{2}{*}{Method} & MSR-VTT & MSVD \\ & CIDEr & CIDEr \\ \hline DECEMBERT [31] & 52.3 & - \\ SwinBERT [20] & 53.8 & 120.6 \\ MV-GPT [28] & 60.0 & - \\ Lavender [13] & 58.0 & 142.9 \\ \hline TemPVL (Ours) & **61.9** & **148.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Video captioning comparison on MSR-VTT and MSVD under the open-ended setting. We report the CIDEr score and and the highest score is masked in bold. ### Analysis We conduct ablation experiments on WebVid1M which is a subset of WebVid dataset containing one million video-text pairs, to study the effectiveness of the proposed designs. #### 4.4.1 Pre-training Objective As shown in Tab. 6, we evaluate our proposed objectives on three tasks, including text-to-video retrieval, video question answering and captioning. Compared to the baseline model which uses contrastive learning and masked language modeling, our model with two localization objectives significantly improves the zero-shot retrieval performance by 1.6\(\%\) on recall@1. The model pre-trained with moment retrieval task acquires highest accuracy on VQA and obtains higher performance gain than the model pre-trained with the text localization objective. For the video captioning task, the improvement is most significant when the model pre-trained with both objectives. On all three tasks, our model pre-trained with text-video localization outperforms the baseline model. This suggests that the model pre-trained with the text-video localization task obtains superior generalization capacity. #### 4.4.2 Video Merging For the moment localization with the language query, we adopt three strategies in Tab. 7 to combine individual video features into a long video sequence. The quality of the merged sequence is essential since the localization task is to predict the temporal boundary from the sequence. The Shuffling strategy only changes the order of different videos where the order of frame features is preserved. Sampling strategy requires the max length to specify how many frames should be sampled. The positive number of frames is also required to define how many frames are related to the text descriptions. In this experiment, we specify the max sample number \(K\) to 128, and the minimum and the maximum number of positive frames \(K_{p}\) to 1 and 32. The Hard-Sampling denotes that frame features are only sampled from those most similar videos in a batch. Specifically, we calculate all the similarities between video features. Frame feature is then sampled from the top 10 videos with the highest similarity score. From Tab. 7, we observe that the Hard-Sampling strategy acquires the highest zero-shot retrieval performance. It shows that merging frame tokens from similar videos for moment localization is more helpful to the retrieval task. #### 4.4.3 Text Merging For the text localization task, we compare two merging strategies in Tab. 8. The word merging is similar to the shuffling in video merging, where only the sentence order is rearranged. We retain the special tokens ([CLS] and [SEP]) when merging words. We predict the start and end positions for the video in this merging strategy. CLS merging denotes that only the first token in each sentence is selected to combine different texts. We found that the model with CLS merging achieves a bit better retrieval performance from the experimental results. 
Since only the [CLS] token feature is used in the text-to-video retrieval task, using the [CLS] token for the text localization is beneficial to align the text representations with different video features. \begin{table} \begin{tabular}{l|c c c} \hline \hline & Retrieval & VQA & Captioning \\ \hline Baseline & 20.8 & 42.7 & 56.9 \\ +\(\mathcal{L}_{vl}\) & 22.1 & **43.1** & 57.7 \\ +\(\mathcal{L}_{tl}\) & 21.9 & 42.7 & 57.5 \\ +\(\mathcal{L}_{vtl}\) & **22.4** & 43.0 & **58.1** \\ \hline \hline \end{tabular} \end{table} Table 6: Effect of pretraining tasks on downstream tasks. The recall@1, accuracy, and CIDEr are reported in the zero-shot text-to-video retrieval, video question answering, and video captioning tasks, respectively. \begin{table} \begin{tabular}{l|c c c} \hline \hline Method & [email protected] & [email protected] & AVG \\ \hline 2D-TAN [42] & 42.8 & 23.2 & 33.0 \\ LocVTP [7] & 41.2 & **24.8** & 33.0 \\ \hline TemPVL (Ours) & **45.4** & 24.7 & **35.1** \\ \hline \hline \end{tabular} \end{table} Table 5: Moment retrieval comparison on Charades-STA. The [email protected] denotes the top-1 retrieval results with temporal IoU greater than 0.5. AVG indicates the average score of two metrics. \begin{table} \begin{tabular}{l|c c c} \hline \hline Strategy & Retrieval & VQA & Captioning \\ \hline Shuffling & 21.7 & 42.9 & **58.1** \\ Sampling & 21.5 & 42.8 & 56.9 \\ HardSampling & **22.1** & **43.1** & 57.7 \\ \hline \hline \end{tabular} \end{table} Table 7: Analysis of video merging strategies for moment retrieval pre-training task. Results on text-to-video retrieval, video question answering, and video captioning tasks are reported. #### 4.4.4 Effect of Visual Backbones We summarize the experimental results of our method with different backbones on five tasks in Tab. 9. Two visual backbones, including a video encoder SwinT [21] and a frame encoder ViT [9], are adopted to verify the text-video localization pre-training. SwinT and ViT are initialized with weights pre-trained on Kinetics and ImageNet, respectively. On MSR-VTT, the model with TVL obtains significant improvement in the zero-shot text-to-video retrieval task for both video and image encoders. For the temporal action localization on THUMOS14, we only extract features on RGB frames with the pre-trained model and use the GTAD [38] for training. Both ViT and SwinT based models with TVL pre-training obtain about 1\(\%\) performance gain on the [email protected] metric. Our method also significantly improves the [email protected] on Charades. Albeit ViT based model achieves lower performance than the SwinT based model, both visual encoders with the TVL pre-training obtain improvement on almost all tasks. This demonstrates that generic representations are well learned with our proposed text-video localization task. #### 4.4.5 Qualitative Examples We further analyze how the model pre-trained with the text-video localization task performs better on the downstream tasks. We extract frame and text features on the Charades-STA [3], and calculate the similarity between the visual and language features. Each video is about 30s and contains several actions and events. We visualize the results in Fig. 4 to show the difference between models trained with and without text-video localization. For the video with caption "person runs up the stairs", although the global match scores of the two models are close, the model only with TVL pre-training accurately localizes frames that match the caption. 
It shows that the cross modality features are better aligned in our method, and the temporal boundaries are more accurate for text descriptions. ## 5 Conclusion We introduce a text-video localization task for video-language pre-training. Without any temporal annotations, we construct long sequences by merging video and text features and use the multi-modal encoder to predict the boundary. The proposed localization task is simple yet effective to learn a generic multi-modal representations. Extensive experiments on text-video retrieval, temporal action localization and moment localization tasks also demonstrate the strength of the proposed pre-text task. Figure 4: Visualization of similarity between frame and text features. The temporal ground-truth for the text description is marked with red. \begin{table} \begin{tabular}{l|c c|c|c} \hline \hline \multirow{2}{*}{Backbone} & \multicolumn{3}{c|}{MSR-VTT} & THUMOS & Charades \\ & Recall@1 & Acc & CIDEr & [email protected] & [email protected] \\ \hline ViT (w/o TVL) & 16.4 & 36.2 & 53.1 & 20.2 & 38.8 \\ ViT (w TVL) & 17.7 & 36.9 & 53.0 & 20.8 & 40.2 \\ \hline SwinT (w/o TVL) & 20.8 & 42.7 & 56.9 & 29.5 & 41.8 \\ SwinT (w TVL) & 22.4 & 43.1 & 58.1 & 30.6 & 43.6 \\ \hline \hline \end{tabular} \end{table} Table 9: Effect of text-video localization pre-training task with different visual backbones. Results on three datasets with five tasks are reported. \begin{table} \begin{tabular}{l|c c|c|c} \hline \hline Strategy & Retrieval & VQA & Captioning \\ \hline \multirow{2}{*}{Merge words} & 21.7 & **42.9** & 57.4 \\ & **21.9** & 42.7 & **57.5** \\ \hline \hline \end{tabular} \end{table} Table 8: Analysis of word merging strategies for text localization pre-training task. Results on text-to-video retrieval, video question answering, and video captioning tasks are reported. ## Appendix A Experimental Results ### Multi-modal Encoder For the multi-modal encoder, we use a pre-trained Bert model to initialize it. In this section, we study the impact of multi-modal encoder on different tasks in Tab. 10. Specifically, we use the top layer of the Bert where the MM3 denotes that the last 3 layers of Bert are used. Since the multi-modal encoder is not needed in the text-to-video retrieval task, adding more layers seems not helpful for the global video text alignment. For video captioning which requires the multi-modal encoder, higher results are obtained when more layers are adopted. For VQA, the highest accuracy is observed when six Bert layers are used. ### Video Sampling We also study the sampling strategy for merging different videos in Tab. 11. It shows that the model achieves inferior performance when the merged sequence is not long. For the captioning task, the positive frames are significant during pre-training. The model performs better when the maximum number of positive frames is set to a large value. Note there could be less than \(K_{p}\) positive frames since we randomly sample \(k\) positive frames during merging where \(k\leq K_{p}\). ### Video Question Answering We show predicted answers for some video questions on MSR-VTT in Fig. 5. For different types of videos, our model predicts the correct answer even though our model does not use the classification for open-ended question answering. This demonstrates that our model performs well without limiting the range of answers. ### Video Captionining We show the generated captions on MSR-VTT in Fig. 6. It is shown that our model clearly identifies the object and scenes of videos. 
Although most of the scenes are recognized by our method, fine-grained details in videos such as the "score moment" are not present in the caption. In the future, we will consider how to generate more detailed video captions that include fine-grained actions and moment descriptions.
2301.08750
Domain-agnostic and Multi-level Evaluation of Generative Models
While the capabilities of generative models heavily improved in different domains (images, text, graphs, molecules, etc.), their evaluation metrics largely remain based on simplified quantities or manual inspection with limited practicality. To this end, we propose a framework for Multi-level Performance Evaluation of Generative mOdels (MPEGO), which could be employed across different domains. MPEGO aims to quantify generation performance hierarchically, starting from a sub-feature-based low-level evaluation to a global features-based high-level evaluation. MPEGO offers great customizability as the employed features are entirely user-driven and can thus be highly domain/problem-specific while being arbitrarily complex (e.g., outcomes of experimental procedures). We validate MPEGO using multiple generative models across several datasets from the material discovery domain. An ablation study is conducted to study the plausibility of intermediate steps in MPEGO. Results demonstrate that MPEGO provides a flexible, user-driven, and multi-level evaluation framework, with practical insights on the generation quality. The framework, source code, and experiments will be available at https://github.com/GT4SD/mpego.
Girmaw Abebe Tadesse, Jannis Born, Celia Cintas, William Ogallo, Dmitry Zubarev, Matteo Manica, Komminist Weldemariam
2023-01-20T14:32:19Z
http://arxiv.org/abs/2301.08750v1
# Domain-agnostic and Multi-level Evaluation of Generative Models ###### Abstract While the capabilities of generative models heavily improved in different domains (images, text, graphs, molecules, etc.), their evaluation metrics largely remain based on simplified quantities or manual inspection with limited practicality. To this end, we propose a framework for Multi-level Performance Evaluation of Generative mOdes (MPEGO), which could be employed across different domains. MPEGO aims to quantify generation performance hierarchically, starting from a sub-feature-based low-level evaluation to a global features-based high-level evaluation. MPEGO offers great customizability as the employed features are entirely user-driven and can thus be highly domain/problem-specific while being arbitrarily complex (e.g., outcomes of experimental procedures). We validate MPEGO using multiple generative models across several datasets from the material discovery domain. An ablation study is conducted to study the plausibility of intermediate steps in MPEGO. Results demonstrate that MPEGO provides a flexible, user-driven, and multi-level evaluation framework, with practical insights on the generation quality. The framework, source code, and experiments will be available at: [https://github.com/GT4SD/mpego](https://github.com/GT4SD/mpego). Generative models Evaluation Data-centric AI Foundation models ## 1 Introduction Machine Learning (ML) methods, particularly generative models, are effective in addressing critical problems across different domains, which includes material sciences. Examples include the design of novel molecules by combining data-driven techniques and domain knowledge to efficiently search the space of all plausible molecules and generate new and valid ones [1, 2, 3, 4]. Traditional high-throughput wet-lab experiments, physics-based simulations, and bioinformatics tools for the molecular design process heavily depend on human expertise. These processes require significant resource expenditure to propose, synthesize and test new molecules, thereby limiting the exploration space [5, 6, 7]. For example, generative models have been applied to facilitate the material discovery process by employing inverse molecular design problem. This approach transforms the conventional and slow discovery process by mapping the desired set of properties to a set of structures. The generative process is then optimized to encourage the generation of molecules with those selected properties. Countless approaches have been suggested for such tasks, most prominently VAEs with different sampling techniques [8, 9, 10]), GANs [11, 12], diffusion models [13], flow networks [14] and Transformers [15]. Though the generation capability has been tremendously improved recently, the quantitative evaluation of these generative models in different domains remains a grand challenge [16]. Some of the reasons include the multi-objective nature of real discovery problems, the intricacy of evaluating relevant features _in-silico_, and the lack of widely accepted domain- and model-agnostic evaluation frameworks. As a result, existing benchmarks and toolkits in material sciences, such as MOSES [17] or GuacaMol [18] include limited metrics, such as validity and uniqueness, that lack the capacity to evaluate the complex nature of the generation process (e.g., interactions of multiple properties), thereby less effective to provide meaningful insights to subject matter experts (SMEs) to facilitate practical impact. 
In this paper, we introduce the Multi-level Performance Evaluation of Generative mOdels (MPEGO) framework (see Fig. 1), which aims to hierarchically characterize and quantify the capability of generative models, using the material discovery domain as a use case. To that end, MPEGO is a model- and domain-agnostic framework, and its core design is derived from two main requirements: representative _examples_ (of training and generated samples) and one or multiple _features_ (extracted from these samples). Metrics derived from MPEGO are also interpretable and provide multi-level abstractions of the generation process. Specifically, the contributions of this paper are as follows:

1. We provide a multi-level and domain-agnostic evaluation of generative models, starting with sub-feature- or feature-based low-level evaluation up to global, features-based high-level evaluation.
2. We devise a generation frequency analysis that aims to identify and characterize subsets of samples generated with extreme frequencies.
3. We validate the MPEGO framework on multiple generative models (e.g., GCPN [19], GraphAF [20], MolGX [21], and the Regression Transformer [15]) and multiple datasets (e.g., ZINC-250K [22], MOSES [17], and CIRCA).
4. We conduct ablation studies to analyze MPEGO's sensitivity to design choices.

## 2 Related Work

The state-of-the-art evaluation approaches for generative models aim to quantify pre-determined requirements, such as diversity and validity, using a variety of metrics. Frechet ChemNet Distance (FCD) [23] is one such metric; it measures the distance between hidden representations drawn from sets of generated and training samples in the material discovery domain, but it is limited in providing sub-feature- or feature-level evaluation of models. GuacaMol [18] is one of the early benchmark platforms for new molecule discovery, which aims to evaluate generative models across different tasks, e.g., fidelity and novelty. Molecular Sets (MOSES) [17] is another benchmarking framework, which provides training and testing datasets, and a set of metrics to evaluate the quality and diversity of generated structures to standardize training and model comparisons. Furthermore, automated characterization of subsets of samples generated with more or less frequency, i.e., generation frequency analysis, also remains challenging, as the focus is more on latent- or feature-based evaluation.

Overall, the challenges associated with evaluating generative models can be summarized as follows. First, multiple evaluation metrics are model-dependent. For example, FCD [23] depends on a latent representation, and Maximum Mean Discrepancy [24] is used specifically to evaluate graph-based generative models. State-of-the-art metrics also suffer from limited generalizability (across different levels of feature interactions) and interpretability, e.g., by domain experts, which is critical to achieving trustworthy AI solutions [25]. In addition, existing evaluation metrics are susceptible to potential flaws in the predictive models used in goal-oriented or constrained generation. Moreover, existing evaluation strategies lack a generic and standalone evaluation metric that combines both distributional metrics (e.g., uniqueness and diversity) and property-based metrics that score a single property. The dependency on a single-constraint objective lacks a principled approach to incorporating multiple target features.
This becomes a significant challenge when a single, inaccurate evaluation metric is used, which oversimplifies real discovery problems and is hence less practical.

## 3 Proposed: MPEGO Framework

The proposed MPEGO framework (see Fig. 1) aims to provide an effective and multi-level characterization of generative models. The multi-level evaluation of MPEGO (see Fig. 2) starts from sub-feature-based low-level evaluation and proceeds through step-by-step aggregation to provide high-level evaluations. In this section, we first formulate the critical research questions the MPEGO framework is designed to address, followed by details on its core components.

### Problem Statement

Let \(\mathcal{G}_{1},\mathcal{G}_{2},\cdots,\mathcal{G}_{k},\cdots,\mathcal{G}_{K}\) be datasets comprising samples generated from \(K\) black-box generative models \((\Theta_{1},\Theta_{2},\cdots,\Theta_{k},\cdots,\Theta_{K})\) trained on a dataset \(\mathcal{T}\). Can we evaluate the generation capability of each \(\Theta_{i}\) in a scalable, easily interpretable, and multi-objective manner? Specifically, we aim to address two questions.

1. Given a set of features characterizing the samples, how do we quantify the generation capability of each model compared to another model or the training data, based on one or more of these features, i.e., at different levels of abstraction?
2. What are the characteristics of samples generated with extreme frequencies (least or most) by each of the generative models, compared with another model or the training data, i.e., generation frequency analysis?

To address Q1, we propose a Hierarchical Independence Evaluation (HIE) that quantifies the performance of generative models hierarchically at different levels of feature interactions, starting with a sub-feature-level evaluation (e.g., a specific range of a feature) up to the global aggregation of multiple features. To address Q2, we employ multi-dimensional subset scanning (MDSS) [26], which aims to automatically identify and characterize over- and under-generated subsets of samples.

### Feature Extraction and Pre-processing

Given representative examples of generated and training samples, MPEGO starts with the extraction of \(M\) features from these samples, \(\mathcal{F}=\{f_{1},f_{2},\cdots,f_{m},\cdots,f_{M}\}\). Feature values can be binary, continuous, or categorical, and further pre-processing can be applied in the follow-up steps. For example, discretization of continuous features is required for sub-feature-level performance evaluation, and MPEGO provides different discretization types, e.g., equal width, equal frequency, or \(k\)-means-based.

### Hierarchical Independence Evaluation (HIE)

HIE follows a bottom-up approach (from the sub-feature level to global aggregation), as shown in Fig. 2, and it evaluates the performance of generative models at different levels of feature interactions. The lowest level of evaluation in HIE is the sub-feature-level independence score (SIS), which quantifies the performance of generative models across different ranges or unique values per feature, e.g., based on a molecular weight range of \(200-300\) Daltons. The second layer in HIE is the feature-level independence score (FIS), which quantifies the generation performance for each feature. To this end, FIS can be computed via aggregation of SIS values, thereby providing a weighting strategy for SIS scores per feature.
FIS values can also be computed directly, without aggregating SIS values, by applying a different objective measure to the whole feature. The proposed MPEGO framework is flexible enough to utilize different objective measures (see Appendix D of the Supplementary Material for details). Below we describe the computation of SIS and FIS values, using Yule's Y coefficient [27] as the selected objective measure. Note that \(\mathcal{G}_{k}\) vs. \(\mathcal{T}\) refers to a case where the comparison is between the \(k\)th generative model and the training set \(\mathcal{T}\). On the other hand, \(\mathcal{G}_{k}\) vs. \(\mathcal{G}_{j}\) refers to a case where the evaluation is between the \(j\)th and \(k\)th models, with \(j\neq k\). We stick with the \(\mathcal{G}_{k}\) vs. \(\mathcal{T}\) comparison below for readability.

Figure 1: Overview of the MPEGO framework

Let \(f_{m}\in\mathcal{F}\) be a feature with \(C_{m}\) unique values or ranges, i.e., \(f_{m}=\{f_{m}^{u}\}\), \(u\in[1,2,\cdots,C_{m}]\). Note that \(C_{m}\) is the number of unique values for categorical features or the number of bins after discretization of continuous features. SIS computation requires the stratification of both the generated (\(\mathcal{G}_{k}\)) and training (\(\mathcal{T}\)) datasets per each unique value/range \(f_{m}^{u}\), resulting in \(\mathcal{G}_{km}^{u}\) and \(\mathcal{T}_{m}^{u}\), respectively. The complementary subsets are then \(\widetilde{\mathcal{G}_{km}^{u}}\) and \(\widetilde{\mathcal{T}_{m}^{u}}\), respectively. Note that \(\widetilde{\mathcal{G}_{km}^{u}}=\mathcal{G}_{km}^{u}|(f_{m}\neq f_{m}^{u})=\mathcal{G}_{k}-\mathcal{G}_{km}^{u}\) and \(\widetilde{\mathcal{T}_{m}^{u}}=\mathcal{T}_{m}^{u}|(f_{m}\neq f_{m}^{u})=\mathcal{T}-\mathcal{T}_{m}^{u}\). Accordingly, a \(2\times 2\) pivot table is generated for each \(f_{m}^{u}\) as:

\begin{tabular}{l|c|c} & \((f_{m}=f_{m}^{u})\) & \((f_{m}\neq f_{m}^{u})\) \\ \hline \(\mathcal{G}_{km}\) & \(\alpha\) & \(\beta\) \\ \hline \(\mathcal{T}_{m}\) & \(\delta\) & \(\gamma\) \\ \hline \end{tabular}

where \(\alpha\) is the number of generated samples in \(\mathcal{G}_{km}^{u}\) characterized by the feature value \(f_{m}=f_{m}^{u}\), and \(\beta\) is the number of generated samples in \(\widetilde{\mathcal{G}_{km}^{u}}\) with \(f_{m}\neq f_{m}^{u}\). Similarly, \(\delta\) and \(\gamma\) are the numbers of training samples that satisfy \(f_{m}=f_{m}^{u}\) and \(f_{m}\neq f_{m}^{u}\), respectively. Note that \(\alpha+\beta\) is the total number of generated samples, i.e., \(|\mathcal{G}_{k}|\). Similarly, \(\delta+\gamma\) is the number of training samples, i.e., \(|\mathcal{T}|\). Yule's Y coefficient is then computed from the pivot table as \(o_{km}^{u}\in[-1,1]\):

\[o_{km}^{u}=\frac{\sqrt{P(\mathcal{G}_{km}^{u})P(\widetilde{\mathcal{T}_{m}^{u}})}-\sqrt{P(\widetilde{\mathcal{G}_{km}^{u}})P(\mathcal{T}_{m}^{u})}}{\sqrt{P(\mathcal{G}_{km}^{u})P(\widetilde{\mathcal{T}_{m}^{u}})}+\sqrt{P(\widetilde{\mathcal{G}_{km}^{u}})P(\mathcal{T}_{m}^{u})}} \tag{1}\]

\[o_{km}^{u}=\frac{\sqrt{\alpha\gamma}-\sqrt{\beta\delta}}{\sqrt{\alpha\gamma}+\sqrt{\beta\delta}} \tag{2}\]

SIS is then computed from the \(o_{km}^{u}\) value as \(I_{km}^{u}=1-|o_{km}^{u}|\), where \(I_{km}^{u}\in[0,1]\) and a higher \(I_{km}^{u}\) reflects higher independence between \(\mathcal{G}_{k}\) and \(\mathcal{T}\), i.e., an SIS of \(1\) represents complete independence.
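To make the computation concrete, below is a minimal Python sketch (our own illustration, not the released MPEGO code) of the SIS of Eq. (2) from the four pivot-table counts, together with the weighted FIS aggregation described in the next paragraph; the function names and the uniform default weights are assumptions.

```python
import numpy as np

def sis_from_pivot(alpha, beta, delta, gamma):
    """SIS for one sub-feature f_m^u from the 2x2 pivot table of Eq. (2).

    alpha/beta: generated samples with f_m == f_m^u / f_m != f_m^u.
    delta/gamma: training samples with f_m == f_m^u / f_m != f_m^u.
    """
    num = np.sqrt(alpha * gamma) - np.sqrt(beta * delta)
    den = np.sqrt(alpha * gamma) + np.sqrt(beta * delta)
    o = num / den if den > 0 else 0.0  # Yule's Y in [-1, 1]; 0 if the table is degenerate
    return 1.0 - abs(o)                # SIS in [0, 1]; 1 = complete independence

def fis_from_sis(sis_values, weights=None):
    """FIS as a weighted aggregation of SIS values (uniform weights by default)."""
    sis = np.asarray(sis_values, dtype=float)
    w = np.full(sis.size, 1.0 / sis.size) if weights is None else np.asarray(weights)
    return float(np.dot(w, sis))

# Toy check: identical proportions in generated and training data give SIS = 1.
print(sis_from_pivot(alpha=200, beta=800, delta=100, gamma=400))  # -> 1.0
```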
The Feature-level Independence Score (FIS) provides feature-based evaluation, i.e., a higher abstraction than SIS. Depending on the objective measure, FIS can be computed 1) via a weighted aggregation of SIS values, i.e., \(I_{km}=\sum_{u=1}^{C_{m}}\lambda_{km}^{u}I_{km}^{u}\), where \(\sum_{u=1}^{C_{m}}\lambda_{km}^{u}=1\) and each \(\lambda_{km}^{u}\) weights the SIS value of \(f_{m}^{u}\), or 2) via direct computation, without using SIS values, when the objective measure is applied to each feature without discretization, e.g., using the Wasserstein distance.

Figure 2: Details of the multi-level evaluation component of the MPEGO framework, which comprises Hierarchical Independence Evaluation (i.e., SIS, FIS, SAFIS, and GAFIS) and Generation Frequency Analysis (GFA). \(A(\cdots)\) represents an aggregation operation, and the anomalous subset refers to the logical combinations of features that characterize samples generated with extreme frequencies.

The third layer of HIE is the Selective Aggregation of Feature-level Independence Scores (SAFIS), which aggregates FIS values from \(R<M\) selected features in \(\mathcal{F}\). For example, features including scaffolding, fingerprints, aromaticity, and the number of rings could be selected to reflect the structural details of molecules in the material discovery domain. The last layer of HIE is the Global Aggregation of Feature-level Independence Scores (GAFIS), which is computed via a weighted aggregation of all FIS values in \(\mathcal{F}\). Note that SAFIS and GAFIS are computed as \(\hat{I}=\sum_{r=1}^{R}\eta_{r}I_{kr}\), where \(\sum_{r=1}^{R}\eta_{r}=1\), with \(R<M\) for SAFIS and \(R=M\) for GAFIS.

### Generation Frequency Analysis (GFA)

Generative models trained on the same dataset will rarely generate samples with identical characteristics. Thus, there is potential over- or under-generation of samples with certain characteristics. To this end, we employ automated stratification of samples using multi-dimensional subset scanning (MDSS) [26, 28] to identify subsets of samples generated with divergent frequencies. Specifically, to identify samples generated at divergent rates by model \(\Theta_{k}\), compared to the training set \(\mathcal{T}\), we first merge the corresponding datasets as \(\mathcal{D}=\mathcal{G}_{k}\cup\mathcal{T}\), and an outcome label (\(y\)) is generated such that \(y_{i}=1\) for a sample in \(\mathcal{G}_{k}\) and \(y_{i}=0\) for a sample in \(\mathcal{T}\). If there are \(N_{g}=|\mathcal{G}_{k}|\) generated and \(N_{t}=|\mathcal{T}|\) training samples in \(\mathcal{D}\), the expected proportion of generated samples in \(\mathcal{D}\) is \(e_{g}=\frac{N_{g}}{N_{g}+N_{t}}\). Thus, GFA aims to identify groups of samples with extreme deviations in their generation rate compared to \(e_{g}\). The deviation between the expectation and the observation is evaluated by maximizing a Bernoulli likelihood ratio scoring statistic, \(\Gamma(\cdot)\). The null hypothesis assumes that the odds of a generated sample in any subgroup \(\mathcal{S}\) match the expectation, i.e., \(H_{0}:odds(\mathcal{S})=\frac{e_{g}}{1-e_{g}}\), while the alternative hypothesis assumes a constant multiplicative increase in the odds of generated samples in \(\mathcal{S}\), i.e., \(H_{1}:odds(\mathcal{S})=q\frac{e_{g}}{1-e_{g}}\) where \(q\neq 1\). Note that \(q>1\) for an over-generated subset and \(0<q<1\) for an under-generated subset.
The divergence score for a subgroup \(\mathcal{S}\) with reference \(\mathcal{D}\) is formulated as \(\Gamma(\mathcal{S},\mathcal{D})\) and computed as:

\[\Gamma(\mathcal{S},\mathcal{D})=\max_{q}\ \log(q)\sum_{i\in\mathcal{S}}y_{i}-N_{s}\log(1-e_{g}+qe_{g}), \tag{3}\]

where \(N_{s}\) is the number of samples in \(\mathcal{S}\). Identification of the divergent subset \(\mathcal{S}\) is iterated until convergence to a local maximum, and the global maximum is subsequently optimized using multiple random restarts.

## 4 Experimental Setup

### Training Datasets

We employed three different datasets in the material discovery domain to validate our MPEGO framework.

ZINC-250K. We utilize the publicly available ZINC-250K2 dataset, which contains \(249,455\) small molecules in the Simplified Molecular-Input Line-Entry System (SMILES) representation. Details on the ZINC tool are available in [22].

MOSES. Besides implementing popular molecular generation models and metrics, the benchmark platform MOSES [17] contains a refined dataset from ZINC3. The dataset has approximately 2M molecules in total, filtered by certain parameters such as molecular weight ranges and the number of rotatable bonds, among others.

Footnote 2: [https://www.kaggle.com/datasets/basu369victor/zinc250k](https://www.kaggle.com/datasets/basu369victor/zinc250k)

Footnote 3: [https://zinc.docking.org/](https://zinc.docking.org/)

CIRCA. We used IBM's Chemical Information Resources for Cognitive Analytics (CIRCA) platform4 to construct a dataset of organic salts relevant to the production of semiconductors via photolithography with chemical amplification. Finally, we were able to evaluate the environmental and toxicological properties of 866 anions, which comprise the dataset used in this study. The steps conducted to obtain the final version of the dataset can be found in Appendix A of the Supplementary Material.

Footnote 4: [https://circa.res.ibm.com/](https://circa.res.ibm.com/)

### Generative Models

We utilized multiple generative models for our validation. In particular, the Graph Convolutional Policy Network (GCPN) [19] and a flow-based autoregressive model (GraphAF) [20] were trained separately on the ZINC-250K and MOSES datasets. Subsequently, \(10,000\) valid molecules were generated from each ZINC-250K-trained model. We rely on GT4SD [29] for model implementations (experimental set-ups are shown in Appendix B). Similarly, \(14,665\) and \(1,680\) molecules were generated from the MOSES-trained GCPN and GraphAF, respectively. For the CIRCA data, we employed MolGX [21] and the Regression Transformer [15], generating 5,000 new anions with MolGX and 32,617 new anions with the Regression Transformer. Note that balanced numbers of samples were selected across datasets during our experimentation, i.e., \(10,000\), \(1,680\), and \(5,000\) samples were randomly selected for the ZINC-250K-based, MOSES-based, and CIRCA-based evaluations, respectively.

### Feature Extraction and Preprocessing

We extracted the following six features from the SMILES representations of molecules in the ZINC-250K and MOSES datasets and their corresponding datasets generated from GCPN and GraphAF: _Aromaticity, ESOL, LogP, Weight, QED, SCScore_. On the other hand, we extracted fingerprints (see Fig. 3) that reflect structural details from the CIRCA dataset and the MolGX- and Transformer-generated datasets. Full details of the features can be found in Appendix C of the Supplementary Material.
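As an illustration of this feature-extraction step, here is a minimal sketch assuming RDKit; ESOL and SCScore come from separately published models and are omitted, and the aromaticity definition (fraction of aromatic heavy atoms) is our own assumption.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def extract_features(smiles):
    """A few MPEGO-style molecular features from one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:            # invalid SMILES
        return None
    return {
        # fraction of aromatic heavy atoms: one possible aromaticity feature
        "Aromaticity": sum(a.GetIsAromatic() for a in mol.GetAtoms())
                       / max(mol.GetNumAtoms(), 1),
        "LogP": Descriptors.MolLogP(mol),    # Wildman-Crippen LogP
        "Weight": Descriptors.MolWt(mol),    # molecular weight in Daltons
        "QED": QED.qed(mol),                 # drug-likeness score in [0, 1]
    }

print(extract_features("c1ccccc1O"))  # phenol
```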
In cases where discretization of continuous features is required to compute SIS values, we employ Yule's Y coefficient as our default objective measure, as it satisfies multiple key requirements [30]. We also employ equal-frequency discretization, as it better handles outliers in the data, with five bins for the ZINC-based and three bins for the CIRCA-based evaluations. Features are aggregated for the SAFIS and GAFIS computations using simple averaging. Note that the proposed approach is flexible enough to utilize different discretization types, weighting strategies, and objective measures.

### Evaluation Metrics

Our evaluation metrics include HIE's SIS, FIS, SAFIS, and GAFIS values, which quantify the generation independence of models. The independence score is obtained by normalizing the objective measure values to \([0,1]\); note that an independence score of \(1.0\) represents complete independence. We also utilize histograms of features to provide a qualitative comparison. The characterization in the generation frequency analysis involves the logical combination of feature values describing the identified subgroup, the size of the subgroup \(N_{s}\), the odds ratio between \(\mathcal{S}\) and \(\widetilde{\mathcal{S}}=\mathcal{D}-\mathcal{S}\), the \(95\%\) Confidence Interval (CI), and the empirical \(p\) value. We also report the divergence score of the identified group from the expectation in GFA and the elapsed time to identify the group. All experiments are conducted on a desktop machine with a 2.9 GHz Quad-Core Intel Core i7 processor and 16 GB of 2133 MHz LPDDR3 memory.

Figure 3: Visualization of the CIRCA dataset and some of the structural fingerprints relevant to the MPEGO analysis (inset). The CIRCA dataset is represented as a similarity network, where nodes correspond to anions and links connect nodes if the Dice similarity of the respective anions is at least 0.5. Node colors encode the clustering recovered via modularity analysis, and node size encodes the number of connections of the node. Chemical structures are included for the most connected nodes in the main clusters.

## 5 Results and Discussion

### Hierarchical Independence Evaluation

Table 1 provides extended FIS values across each of the features considered for comparing the GCPN and GraphAF models with the training ZINC-250K data (\(\mathcal{G}\) vs. \(\mathcal{T}\)) and with each other, head-to-head (\(\mathcal{G}\) vs. \(\mathcal{G}\)). GCPN achieves a GAFIS value of \(0.882\), demonstrating generation independence competitive with GraphAF (GAFIS \(=0.821\)). This is further shown in their head-to-head comparison with the Aromaticity (\(FIS=1.0\)) and LogP (\(FIS=0.906\)) features. Divergent characteristics are also demonstrated when the two models are evaluated based on the QED, Weight, and ESOL features, achieving FIS values of \(0.853\), \(0.809\), and \(0.662\), respectively, in their head-to-head comparison. SAFIS values are shown for an example aggregation of synthetic metrics, QED and LogP, where the GCPN model achieves superiority compared to GraphAF. GAFIS values in the bottom row are derived from the global aggregation of the FIS values above. Overall, the results show the flexibility of the MPEGO framework to evaluate models across different levels of feature interactions and comparison baselines, i.e., against the training data or another generated dataset.

Furthermore, Fig. 4 demonstrates the benefits of the sub-feature-level evaluation (SIS) of the GCPN and GraphAF models trained on ZINC-250K, using Molecular Weight as an example feature.
The five molecular weight ranges in Fig. 4 (a) resulted from the discretization of the feature necessary for the SIS evaluations. As in Table 1, the comparison is performed between the generative models and with the training dataset. The results demonstrate that GCPN and GraphAF generated molecules with similar weight characteristics overall but distinctly different ones at a few particular ranges: GCPN generated molecules similar to ZINC-250K in the range \([258.1,308.39)\), whereas GraphAF showed better resemblance to ZINC-250K at the ranges \([201.63,258.1)\) and \(\geq 361.47\) Daltons. Divergent sub-level generation characteristics are also encoded by the lower \(\mathcal{G}\) vs. \(\mathcal{G}\) independence scores at those ranges. The histogram plot in Fig. 4 (b) qualitatively complements the insights from the SIS values in Fig. 4 (a), where GraphAF is shown to generate more molecules with extreme weight values. Note that such sub-feature-level insights in Fig. 4 are unique to our proposed framework, as they are currently lacking in the state of the art of generative model evaluation.

The results of the comparisons of GCPN and GraphAF trained on the MOSES dataset (Table 2) and of MolGX and RT trained on the CIRCA dataset (Table 3) further demonstrate MPEGO's capabilities. Comparing Table 1 with Table 2 shows that when MOSES is used for training, GraphAF achieves higher independence relative to GCPN than when ZINC-250K is used. Between MolGX and the RT trained on CIRCA, molecules generated from the RT achieved a higher resemblance to CIRCA than those from MolGX. In their head-to-head comparison, MolGX and the RT generated mostly divergent structural details, as demonstrated by the very low FIS values for the majority of the fingerprints in the \(\mathcal{G}\) vs. \(\mathcal{G}\) column of Table 3.

In the generation frequency analysis, GCPN tends to generate molecules with higher QED values (\(\geq 0.64\)) compared to GraphAF (\(<0.5\)). When the MOSES data is used for validation, LogP values become the differentiating factor, as GraphAF tends to generate lower values (\(<3.91\)) compared to GCPN (\(\geq 3.91\)). On the CIRCA dataset, MolGX generates samples with a higher occurrence of the '3217380708' fingerprint compared to the RT model and the training CIRCA data. The size of the identified group, along with the multiplicative factor (\(q\)) and the odds ratio values, confirms the significance of the identified divergent generation in our generation frequency analysis.

### Ablation Study

We conducted ablation studies to validate the robustness of MPEGO to different design choices of objective measures (see Fig. 5) and discretization types (see Fig. 6). Though objective measures vary in whether they require discretization, the similar shapes of the plots in Fig. 5 demonstrate similar ranking performance across features. For example, in the GCPN vs. ZINC comparison (Fig. 5 (a)), Molecular Weight achieved the lowest score among the features under all of the different objective measures employed. The same is true for QED in the GraphAF vs. ZINC validation in Fig. 5 (b). We also validated the impact of the discretization types, as shown in Fig. 6 (a) and (b). In both cases, _equal width_ and \(k\)_-means_ discretizations provide similar patterns, while _equal frequency_ discretization demonstrated a slightly different pattern, particularly for features with skewed distributions such as Aromaticity. Overall, all three types of discretization provide competitive performance scores, thereby validating the stability of our MPEGO framework. More ablation results are in Appendix E of the Supplementary Material.
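To illustrate the three discretization types compared above, here is a minimal sketch assuming pandas and scikit-learn; the skewed synthetic feature and the bin count of five are illustrative choices.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
values = rng.lognormal(mean=5.5, sigma=0.3, size=1000)  # skewed, weight-like feature

equal_width = pd.Series(pd.cut(values, bins=5))   # equal-width bins
equal_freq = pd.Series(pd.qcut(values, q=5))      # equal-frequency (quantile) bins

# k-means-based binning
kmeans = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="kmeans")
kmeans_bins = kmeans.fit_transform(values.reshape(-1, 1)).ravel()

# Equal-frequency bins keep per-bin counts balanced even under skew.
print(equal_width.value_counts().sort_index())
print(equal_freq.value_counts().sort_index())
```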
Figure 4: (a) Example of Molecular Weight-based SIS values that provide sub-feature-level evaluation of the GCPN and GraphAF models trained on ZINC-250K; (b) histogram densities that provide a qualitative visualization of the SIS values in (a).

## 6 Conclusion and Future Work

We proposed MPEGO, a simple, generalizable, and model-agnostic evaluation framework for generative models, validated in the material discovery domain. MPEGO consists of two main performance evaluation blocks: Hierarchical Independence Evaluation (HIE) and Generation Frequency Analysis (GFA). HIE follows a bottom-up approach to quantify the generation performance of a model, starting from the sub-feature level (at the bottom) up to the global aggregation of features (at the top). Thus, HIE provides a flexible performance evaluation of generative models, particularly by evaluating the generation independence of models compared with the training data or other generative models using an objective measure set by the user. GFA is applied to detect and characterize divergent generation characteristics. Unlike existing evaluation platforms, MPEGO provides interpretable insights that aim to facilitate interactions with subject matter experts, which is crucial to developing trustworthy AI solutions. The proposed MPEGO toolkit was validated with multiple datasets (ZINC-250K, MOSES, and CIRCA) and generative models trained on these datasets, including GCPN, GraphAF, MolGX, and RT. Conditioned on the training samples in these datasets, GCPN, GraphAF, and RT achieved higher generation independence compared to their respective counterparts, the GraphAF, GCPN, and MolGX models. Future work aims to evaluate generative models from different domains to further validate the domain-agnostic nature of the MPEGO framework. We also plan to utilize MPEGO to improve the efficiency of latent-based analyses, such as creativity characterization [31] and out-of-distribution detection [32]. MPEGO will be natively integrated into GT4SD, the Generative Toolkit for Scientific Discovery [29], and the source code and experiments will be available at: [https://github.com/GT4SD/mpego](https://github.com/GT4SD/mpego).

| Level | Feature | GCPN vs. MOSES (\(\mathcal{G}\) vs. \(\mathcal{T}\)) | GraphAF vs. MOSES (\(\mathcal{G}\) vs. \(\mathcal{T}\)) | GCPN vs. GraphAF (\(\mathcal{G}\) vs. \(\mathcal{G}\)) |
| --- | --- | --- | --- | --- |
| FIS | Aromaticity | 1.000 | 1.000 | 1.000 |
| FIS | ESOL | 0.419 | 0.672 | 0.491 |
| FIS | LogP | 0.219 | 0.682 | 0.319 |
| FIS | Weight | 0.550 | 0.491 | 0.760 |
| FIS | QED | 0.471 | 0.576 | 0.735 |
| FIS | SCScore | 0.846 | 0.865 | 0.744 |
| SAFIS | QED+LogP | 0.345 | 0.629 | 0.527 |
| GAFIS | | 0.584 | 0.714 | 0.675 |

Table 2: HIE scores for the GCPN and GraphAF models trained on the MOSES dataset

| Level | Fingerprint | MolGX vs. CIRCA (\(\mathcal{G}\) vs. \(\mathcal{T}\)) | RT vs. CIRCA (\(\mathcal{G}\) vs. \(\mathcal{T}\)) | MolGX vs. RT (\(\mathcal{G}\) vs. \(\mathcal{G}\)) |
| --- | --- | --- | --- | --- |
| FIS | 951226070 | 1.000 | 1.000 | 1.000 |
| FIS | 3218693969 | 0.599 | 0.876 | 0.500 |
| FIS | 2968968094 | 1.000 | 1.000 | 1.000 |
| FIS | 882399112 | 1.000 | 1.000 | 1.000 |
| FIS | 2245384272 | 0.998 | 0.847 | 0.845 |
| FIS | 2246703798 | 1.000 | 1.000 | 1.000 |
| FIS | 3217380708 | 0.411 | 0.661 | 0.226 |
| GAFIS | | 0.858 | 0.912 | 0.796 |

Table 3: HIE scores for the MolGX and Regression Transformer (RT) models trained on the CIRCA dataset
2302.11751
Data-Free Diversity-Based Ensemble Selection For One-Shot Federated Learning in Machine Learning Model Market
The emerging availability of trained machine learning models has put forward the novel concept of Machine Learning Model Market in which one can harness the collective intelligence of multiple well-trained models to improve the performance of the resultant model through one-shot federated learning and ensemble learning in a data-free manner. However, picking the models available in the market for ensemble learning is time-consuming, as using all the models is not always the best approach. It is thus crucial to have an effective ensemble selection strategy that can find a good subset of the base models for the ensemble. Conventional ensemble selection techniques are not applicable, as we do not have access to the local datasets of the parties in the federated learning setting. In this paper, we present a novel Data-Free Diversity-Based method called DeDES to address the ensemble selection problem for models generated by one-shot federated learning in practical applications such as model markets. Experiments showed that our method can achieve both better performance and higher efficiency over 5 datasets and 4 different model structures under the different data-partition strategies.
Naibo Wang, Wenjie Feng, Fusheng Liu, Moming Duan, See-Kiong Ng
2023-02-23T02:36:27Z
http://arxiv.org/abs/2302.11751v1
# Data-Free Diversity-Based Ensemble Selection For One-Shot Federated Learning in Machine Learning Model Market

###### Abstract

The emerging availability of trained machine learning models has put forward the novel concept of the _Machine Learning Model Market_, in which one can harness the collective intelligence of multiple well-trained models to improve the performance of the resultant model through one-shot federated learning and ensemble learning in a data-free manner. However, picking the models available in the market for ensemble learning is time-consuming, as using all the models is not always the best approach. It is thus crucial to have an effective _ensemble selection_ strategy that can find a good subset of the base models for the ensemble. Conventional ensemble selection techniques are not applicable, as we do not have access to the local datasets of the parties in the federated learning setting. In this paper, we present a novel _Data-Free Diversity-Based_ method called DeDES to address the ensemble selection problem for models generated by one-shot federated learning in practical applications such as model markets. Experiments showed that our method can achieve both better performance and higher efficiency over 5 datasets and 4 different model structures under the different data-partition strategies.

Ensemble Selection, One-Shot Federated Learning, Machine Learning Model Market, Non-IID, Ensemble Learning, Data Privacy.

## I Introduction

To address the increasing demands for data privacy protection while satisfying the growing appetite for more data in machine learning tasks, federated learning [1] (FL) has become the mainstay for enabling collaborative machine learning on decentralized devices/parties without seeing any of their data. However, the traditional multi-round federated learning training process has its drawbacks: for \(m\) clients and \(n\) training rounds, the server can acquire \(\mathcal{O}(mn)\) gradients or models, which can reveal a great deal of sensitive information about the clients' local data and violate the privacy protection setting [2]. _One-shot federated learning_ [3] has been proposed to further protect the privacy of clients by only requiring the clients to send their final well-trained models to the server once. In this way, not only can the privacy of clients be better protected, but the communication costs are also significantly decreased. However, the model generated by one-shot federated learning is often less accurate than the model generated by conventional federated learning. As a result, the one-shot federated learning method is unsuitable for applications such as medical diagnosis, where the model's accuracy is crucial.

The emerging availability of pre-trained machine learning models for various machine learning tasks has put forward the novel concept of the _Machine Learning Model Market_ [4] (beyond model management systems like modelDB [5] or huggingface [6]) to harness the collective intelligence from multiple well-trained models in a data-free manner. Clients can upload their individual well-trained models to the market server, and the server can select multiple models from its database and perform collective machine learning (e.g., ensemble learning or model fusion) to enhance the performance of the targeted machine learning task (e.g., image or text classification).
Compared to model fusion, ensemble learning [7] is straightforward and cost-effective as a way to harness the power of collective machine intelligence to boost task performance in a data-free manner. For example, a classic ensemble learning method is _Voting_, in which multiple models vote together to produce the final classification results. However, selecting all available models from the model market for ensemble learning is not always the most effective strategy. As shown by Zhou et al. [8], **many could be better than all** when ensembling neural networks. In addition, testing each incoming sample \(m\) times when there are a large number \(m\) of models in an ensemble team can be time-consuming and inefficient. As such, we focus on the _ensemble selection_ or _ensemble pruning_ [9] problem, which aims to find a good subset of base models for the ensemble from the model market.

A key consideration for ensemble selection is _model diversity_. Numerous papers have demonstrated that the more diverse the models, the better the ensemble's performance [10, 11]. While numerous model diversity calculation methods have been proposed to maximize model diversity, they typically require access to the local datasets of the parties, which is not possible in the one-shot federated learning setting of model markets. As such, none of the existing methods can be utilized to calculate model diversity within an ensemble team under the one-shot federated learning setting.

In this work, we propose a novel _Data-Free Diversity-Based_ **E**nsemble **S**election framework called **DeDES** for selecting strong ensemble teams for ensemble learning, whose models are sourced from the machine learning model market and trained by one-shot federated learning. We perform a series of studies to show that our presented method is robust, efficient, and effective for various data partitions (especially non-i.i.d. data), datasets, and model structures. To the best of our knowledge, this is the first paper to systematically deal with the problem of ensemble selection for one-shot federated learning, which is a valuable application for the machine learning model market.

Fig. 1 depicts our scenario. Clients train their models locally on their own datasets until convergence and then upload the models to the model market. To conduct ensemble learning, the server selects, based on our algorithm, a good ensemble team from all models with the same task on the model market. Note that during the whole process, the server has no access to the local datasets of the clients at all, which is what we mean by _data-free_.

The contributions of our paper are as follows:

1. We propose a formal formulation of the ensemble selection problem to facilitate a clearer comprehension of the topic;
2. We present a _Data-Free Diversity-Based_ ensemble selection framework, DeDES, for _One-Shot Federated Learning_, which can evaluate model diversity and conduct ensemble pruning with no data exposure;
3. We propose a technique for selecting the representative model inside a cluster to improve the performance of the final ensemble learning; and
4. We conduct a set of comprehensive experiments to illustrate the efficacy and efficiency of the proposed ensemble selection approach.

Our code and supplementary material are available online 1.
Footnote 1: [https://anonymous.4open.science/r/DeDesForOSFL/](https://anonymous.4open.science/r/DeDesForOSFL/)

## II Related Work

Various federated learning systems [12, 13] have been proposed to assist various parties in cooperatively training a global model without disclosing their data. In particular, one-shot federated learning proposes to train a global model using a single round of server-client communication. _FedKT_ [4] and _Fusion Learning_ [14] are good examples of one-shot federated learning; however, none of them tackle the ensemble selection problem for one-shot federated learning.

Lately, with the popularity of utilizing pre-trained models, there is emerging interest in the _Machine Learning Model Market_ [15] as a platform for users to exchange trained models with others and to harness collective intelligence for a targeted machine learning task by combining the models. Note that the model market is a concept that differs from previous concepts such as model management systems like _ModelDB_ [5] or huggingface [6], which only include the fundamental model manipulation features of upload, download, and search, or _TFX_ [16], which aims to deploy production ML pipelines. The goal of the model market is to enable collaborative machine learning by utilizing the collective intelligence of multiple machine learning models, using model sharing, model unlearning, model pruning, model compression, model evaluation, model recommendation, model ensemble, etc.

Compared to federated learning, ensemble learning, which seeks to merge multiple weak learners (base models) into strong learner(s), has been a popular topic for decades. _Voting_ [17], _Bagging_ [7], _Boosting_ [18], and _Stacking_ [19] are examples of traditional ensemble learning approaches. _Ensemble selection_ is an important concern in ensemble learning. There are three major approaches to selecting a fixed ensemble team for every incoming test sample: _search-based_ [9], _rank-based_ [20], and _cluster-based_ [21]. Cluster-based ensemble selection approaches are based on model diversity. Classic model diversity calculation methods include _Binary Disagreement_ [22], _Cohen's Kappa_ [23], _Q Statistics_ [24], _Generalized Diversity_ [25], and _Kohavi-Wilpert Variance_ [22]. All of these methods require access to the local datasets and thus violate the fundamental constraint of federated learning.

## III Problem Definition

Assume that there are \(m\) different clients as parties who want to collaborate on a given ML task, e.g., classification or regression. Let \(\mathcal{M}:=\left\{M_{1},\ldots,M_{m}\right\}\) be the well-trained models, with each \(M_{i}\) trained on the \(i\)-th client via the one-shot federated learning strategy over its private dataset \(D_{i}=\left\{(x_{k},y_{k})\right\}_{k=1}^{n_{i}}\) of size \(n_{i}\), where each data point is i.i.d. sampled from an unknown distribution \(\mathcal{D}\). \(\mathcal{M}\) is then uploaded to the central server of the machine learning model market. Our ensemble selection problem can be formulated as:

**Problem 1**: **Given:** the model set \(\mathcal{M}\) and a relatively small constant \(K<m\), **find** the optimal subset \(\mathcal{M}_{K}^{*}\) of \(\mathcal{M}\) such that

\[\mathcal{M}_{K}^{*}=\operatorname*{arg\,min}_{\mathcal{M}_{K}\subseteq\mathcal{M},|\mathcal{M}_{K}|=K}\mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell(y,f_{\mathcal{M}_{K}}(x)), \tag{1}\]

where \(f_{\mathcal{M}_{K}}(\cdot)\) is the prediction function based on \(\mathcal{M}_{K}\) and \(\ell\) is the loss function.
Under the ensemble learning setting, \(f_{\mathcal{M}_{K}}\) is the aggregation function combining the predictions of the models \(M_{i}\in\mathcal{M}_{K}\) into the final prediction \(\hat{y}=f_{\mathcal{M}_{K}}(x)\); it can be a weighted average for regression, or weighted voting (e.g., majority or plurality voting) for classification. Under the model fusion setting, \(f_{\mathcal{M}_{K}}\) is the prediction of the fusion model based on all elements in \(\mathcal{M}_{K}\). We focus on the classification task in the following sections and adopt the weighted voting strategy based on the sizes of the local clients' datasets for ensemble learning. Thus, for a \(C\)-class classification task (i.e., the label set is \(\left\{1,\ldots,C\right\}\)), with \(\mathbb{I}(\cdot)\) as the indicator function, the prediction \(\hat{y}\) for the input \(x\) is given by

\[\hat{y}:=\operatorname*{arg\,max}_{c\in\left\{1,\ldots,C\right\}}\sum_{j=1}^{K}\frac{n_{j}}{\sum_{k=1}^{K}n_{k}}\mathbb{I}\left(M_{j}(x)=c\right). \tag{2}\]

## IV Proposed Framework: DeDES

We present our proposed ensemble selection framework, DeDES, to solve Problem 1 without access to any dataset from the local clients. Algorithm 1 summarizes the structure of DeDES (an illustrative view is given in the supplementary). Considering the performance and efficiency of \(\mathcal{M}_{K}^{*}\), it is necessary to choose a small \(K\) while keeping diversity and high quality among the selected elements/models. DeDES achieves this goal via different components, including _model filtering_, _model representation_, _model clustering_, and _representative model selection_, which are explained in detail as follows.

Model filtering: Being from multiple parties in FL, the performance of the various models can vary significantly and is out of the control of the central server. Inferior models may result from different causes, including low-quality training data (e.g., unreliable, contaminated, or noisy data) or training with inappropriate parameters. Therefore, it is necessary to filter out such outlier models to eliminate the effect of the noise and to help select high-quality models efficiently. In Alg. 1, we use OutlierFilter to obtain the outlier models \(\mathcal{O}\) based on the model scores \(\mathcal{S}\) provided by each party, which can be the local validation accuracy or prediction confidence. OutlierFilter can be any score-based unsupervised outlier detection method [26]; we used a variation of the commonly used box-plot method in our experiments (refer to the supplementary).

Model representation: Given the model structure and its parameters, generating an effective and suitable representation of each model is crucial to measure properties such as similarity and diversity. Intuitively, we can use all or part (some layers) of the parameters to represent the model. Considering that all models in \(\mathcal{M}\) are of the same type, we choose to use the parameters of the last layer of the model, which contain individualized and sufficient information about the model's behavior (especially for the classifier) and the data manifold/space of local training. Besides, to distill compact information and suppress noise in the representation, especially for big models like ResNet-101, dimension reduction (DR) is also applied to the representations; any unsupervised approach can be adopted here, including the classical PCA, Kernel-PCA, and so on.
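A minimal sketch of this representation step is shown below, assuming PyTorch models whose final layer holds the last weight and bias tensors; the use of KernelPCA mirrors the options mentioned above, and the helper names are our own.

```python
import numpy as np
import torch
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA

def last_layer_vector(model: torch.nn.Module) -> np.ndarray:
    """Flatten the final layer's parameters (assumed weight + bias) into a vector."""
    last = list(model.parameters())[-2:]
    return torch.cat([p.detach().flatten() for p in last]).cpu().numpy()

def model_representations(models, n_components):
    """Stack per-model vectors, normalize, then reduce dimension with KernelPCA."""
    X = np.stack([last_layer_vector(m) for m in models])
    X = StandardScaler().fit_transform(X)          # normalization
    return KernelPCA(n_components=n_components).fit_transform(X)

# e.g., R = model_representations(models, n_components=len(models))
```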
In Alg. 1, we obtain the representation \(R_{i}\) for the model \(M_{i}\) via the function Representation in Line 4, which flattens the parameters of the last layer of \(M_{i}\) into a vector and conducts dimension reduction on the vector after normalization. The target dimension for DR is set to \(|\mathcal{M}|\) by default.

Model clustering: To guarantee the diversity of \(\mathcal{M}_{K}^{*}\), we utilize clustering to identify the similarity of different models, where models with similar properties are grouped into the same cluster and different clusters are as different as possible. We can use traditional clustering approaches here, such as K-Means, Hierarchical Clustering, Spectral Clustering, etc., with the target number of clusters set to \(K\). This process is denoted by Clustering in Alg. 1, which yields \(\mathcal{C}_{\mathcal{M}}\) as the resultant clusters.

Fig. 1: Overview of the ensemble learning and ensemble selection process on a machine learning model market under the one-shot federated learning setting.

Representative model selection: To choose exactly \(K\) models with high performance, we elaborately select the representative element of each cluster while keeping diversity. Among the models in each cluster, we can intuitively select the model with either the highest model score \(s_{i}\in\mathcal{S}\) (provided by the individual party) or the largest training dataset (leading to a better-trained model). Therefore, as Lines 6-13 in Alg. 1 show, we design a heuristic selection strategy to make full use of these two criteria, which can choose a better model than either fixed rule alone, as the experimental results show. That is, if the amount of training data for the models inside a cluster is balanced (measured by the ratio between the median size and the maximum size), the model with the highest model score is chosen; otherwise, the one with the largest training dataset is chosen.

Inference: After obtaining the optimal \(\mathcal{M}_{K}^{*}\) with Algorithm 1, we conduct ensemble learning with weighted voting as in Eq. (2). Note that in the whole process of DeDES, we select the ensemble team \(\mathcal{M}_{K}^{*}\) based on _model diversity_ without access to any _local private data_ of the parties.

## V Experiments

### _Experiment Setup_

To simulate real scenarios in federated learning as in [27] and comprehensively evaluate DeDES, we designed four types of dataset-partition strategies, listed below, which lead to different local data distributions for training the diverse models \(M_{i}\):

* Homogeneous (_homo_): the amount of samples and the data distribution are the same for all parties;
* IID but different quantity (_iid-dq_): the training data of each party follows the same distribution, but the amount of data differs;
* Skewed data distribution (_noniid-lds_): the training data of each party follows different distributions, especially for the label distribution;
* Non-IID with \(k\) (\(<C\)) classes (_noniid-l*k_): the training data of each party only contains \(k\) of the \(C\) classes, which is an extreme Non-IID setting (see the sketch below).

We used 5 image datasets and 4 types of neural network models (i.e., VGG-5, ResNet-50, DenseNet-121, and Deep Layer Aggregation) in our experiments. Table I lists detailed information about the datasets and configurations. We partition all datasets into different groups based on the above strategies and train a model for each client.
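Below is a minimal sketch of the _noniid-l*k_ label-skew partition; it is our own illustration, whereas the paper follows the partitioning of [27], and the even per-class splitting is an assumption.

```python
import numpy as np

def noniid_label_k_partition(labels, num_parties, k, num_classes, seed=0):
    """Give each party samples from only k of the num_classes labels."""
    rng = np.random.default_rng(seed)
    party_classes = [rng.choice(num_classes, size=k, replace=False)
                     for _ in range(num_parties)]
    # indices of every sample of each class, shuffled
    class_idx = {c: rng.permutation(np.where(labels == c)[0])
                 for c in range(num_classes)}
    parties = [[] for _ in range(num_parties)]
    for c in range(num_classes):
        owners = [p for p in range(num_parties) if c in party_classes[p]]
        if not owners:
            continue
        # split this class's samples evenly among the parties that own it
        for p, chunk in zip(owners, np.array_split(class_idx[c], len(owners))):
            parties[p].extend(chunk.tolist())
    return parties
```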
Fig. 2 shows an example of the data distribution under the different partition strategies for CIFAR10 with 5 parties. The detailed configuration of DeDES is elaborated in the supplementary, including the learning rate, model representation strategy, clustering method for different data partitions, etc.

### _Baseline Strategies_

For model ensemble learning under our problem setting, we follow the designs in [3] and summarize the well-known selection approaches as follows:

* _Cross-validation selection (CV)_: select \(\mathcal{M}_{K}^{*}\) using local validation accuracy;
* _Data selection (DS)_: \(\mathcal{M}_{K}^{*}=\{M_{i}\,|\,i\in\texttt{top}(\{n_{1},\cdots,n_{m}\},K)\}\), i.e., the models trained on the \(K\) largest training datasets, selected by top;
* _Random selection (RS)_: \(\mathcal{M}_{K}^{*}\) consists of models randomly selected from \(\mathcal{M}\);
* _All selection (AS)_: select all of \(\mathcal{M}\) as the target model set, ignoring \(K\); this method considers all clients' data but is very time-consuming.

Besides, we construct the following baselines in terms of model fusion, which derives a single model and thus has the highest inference efficiency, as a comparison with traditional federated learning methods. The final model \(M^{*}\) is defined as:

* _Federated averaging (FedAvg)_: \(M^{*}=\sum_{i=1}^{m}\frac{n_{i}}{\sum_{j=1}^{m}n_{j}}M_{i}\);
* _Mean averaging (MeanAvg)_: \(M^{*}=\frac{1}{m}\sum_{i=1}^{m}M_{i}\).

Also, we include the following results as ground truths for comparison:

* _Label distribution selection (LDS)_: utilizing the label distribution instead of the model representation as the input of our method 2;

Footnote 2: Note that the label distribution is unavailable in real federated learning scenarios.

* _Oracle_: using the aggregated dataset \(D=\bigcup_{i=1}^{m}D_{i}\) to train a model \(M_{oracle}\), whose performance is the 'oracle'.

### _Performance Analysis_

**The effectiveness of ensemble learning.** Figure 3 compares different methods for the 4 types of data partition settings, where _TOP 1_ and _TOP 2_ denote the single models with the best and second-best test accuracy on the whole test dataset \(D^{test}\), i.e., \(D^{test}=\bigcup_{i=1}^{m}D_{i}^{test}\), where \(D_{i}^{test}\) is the test set of party/client \(i\). As shown in Fig. 3, the performance of the ensemble methods (such as _AS_ and DeDES) is always better than that of single models, which validates the effectiveness of ensemble learning under one-shot federated learning settings.

**Comparison of DeDES with other methods.** For \(m\) models, the number of possible ensemble teams is \(2^{m}\), i.e., it increases exponentially with \(m\). Since testing all teams to find the optimal one is impractical unless \(m\) is very small, in our experiments we compare DeDES with existing methods to validate its superiority. Table II shows the test performance of selected configurations for different datasets and partition methods. As we can see, the performance of the _Oracle_ method is always the best, since it is the centralized setting and can access all parties' data/information; meanwhile, the performance of _FedAvg_ and _MeanAvg_ is by far the worst (near random guess), with a test accuracy of only around 2% on the _EMNIST Balanced_ dataset, which validates that directly averaging/fusing well-trained models is not suitable for the one-shot federated learning setting.
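To make the contrast concrete, here is a minimal sketch, assuming PyTorch classifiers that output logits, of the weighted-voting ensemble of Eq. (2) versus the parameter-averaging _FedAvg_ baseline; the helper names are our own.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def weighted_vote(models, sizes, x, num_classes):
    """Eq. (2): plurality vote weighted by each party's training-set size."""
    total = float(sum(sizes))
    votes = torch.zeros(x.shape[0], num_classes)
    for n, model in zip(sizes, models):
        pred = model(x).argmax(dim=1)                       # each model's class vote
        votes += (n / total) * F.one_hot(pred, num_classes).float()
    return votes.argmax(dim=1)

def fedavg_state_dict(models, sizes):
    """FedAvg baseline: size-weighted average of the models' parameters."""
    total = float(sum(sizes))
    keys = models[0].state_dict().keys()
    return {k: sum((n / total) * m.state_dict()[k].float()
                   for n, m in zip(sizes, models))
            for k in keys}
```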
As demonstrated in Table II, with the _homo_ partition, the accuracy differences between all methods are minimal, making it difficult to determine which method is superior. This is because the _homo_ partition is an IID setting, so the data distributions of all parties are nearly identical. As a result, each party contains the same information as the others, and there is no significant difference regardless of which parties we choose. For the _iid-dq_ partition, _Data Selection (DS)_ is the best method for most datasets; this is because under this setting, the single TOP 1/2 models in Fig. 3 (b) have the largest datasets, with samples of every class in the label set \(\{1,\dots,C\}\), so these models themselves already have strong generalization ability. Therefore, under this partition, the more data we have, the better the performance, and the best strategy is to select the \(K\) models with the top-\(K\) largest datasets.

When the data partition is Non-IID (_noniid-ld_ and _noniid-l*k_), DeDES achieves the best performance for most of the datasets, with different \(m\) and \(K\) (more \(m\) and \(K\) combinations are in the supplementary), which validates the effectiveness of our method. DeDES achieves the second-best performance on the _CIFAR100_ dataset, with the _AS_ method being the best; this is because _CIFAR100_ has 100 labels, so the amount of data per label at each local party is too small to train a generalized model. Under this condition, the _AS_ method gathers more information than the other methods and therefore performs better. But for the other datasets, especially EMNIST, where all local models are more generalized, DeDES performs better than the others. In some cases DeDES is even better than the ground-truth label distribution selection (LDS), which validates that our model representation is very effective.

**Complete inspection of ensemble teams.** When \(m=10\), there are \(2^{10}=1024\) possible ensemble teams. Table V enumerates the accuracy of all 1024 teams and the ranking of the teams selected by the different approaches. The ensemble team selected by DeDES ranks higher than those of the other baseline methods, which validates the efficacy of our method.

| Dataset | \(C\) | Size (\(\sum_{i}n_{i}\)) | \(k\) in _noniid-l*k_ | Model | \(m\) |
| --- | --- | --- | --- | --- | --- |
| EMNIST Digits | 10 | 280,000 | 3 | VGG-5 (Spinal FC), ResNet-50 | 100, 200, 400 |
| EMNIST Letters | 26 | 145,600 | 8 | VGG-5 (Spinal FC), ResNet-50 | 100, 200, 400 |
| EMNIST Balanced | 47 | 131,600 | 18 | VGG-5 (Spinal FC), ResNet-50 | 100, 200, 400 |
| CIFAR10 | 10 | 60,000 | 4 | ResNet-50, DenseNet-121, Deep Layer Aggregation | 50, 100, 200 |
| CIFAR100 | 100 | 60,000 | 45 | ResNet-50 | 20 |

TABLE I: Details of experiment configurations

Fig. 3: Ensemble learning (weighted voting) performance (test accuracy, %) comparison on the _EMNIST Digits_ dataset for \(m\) = 200, \(K\) = 80.

Fig. 2: Example distribution of the four dataset partition strategies for the _CIFAR10_ dataset with party number \(m=5\). Each color bar shows a different class, and the height of the bar represents the number of samples of that class.

### _Impact on Efficiency_

Table II shows that in some cases, DeDES is the second-best method after _All Selection (AS)_.
Note that the efficiency of _AS_ is quite poor, and the performance gap between these two approaches is small, validating that our method can reduce ensemble time to a large extent with minimal performance loss. It is easy to know that the inference time for ensemble learning (weighted voting) increases linearly with \(K\), i.e., the total inference time \(T\) for one test sample is \(K{\times}c\), where \(c\) is constant inference time for one sample by one model. The experimental results depicted in Fig.8 indicate that when \(K\) reaches a certain value, the test accuracy will not increase significantly, sometimes even decrease. Therefore, with a suitable \(K\) (usually 50% of \(m\)), we can substantially reduce our inference time for ensemble learning while achieving good ensemble performance. And we do not need to concern too much about the running duration of DeDES compared to others because the ensemble selection process will only run once and will finish in a few minutes, therefore it is of little consequence. ### _Ablation Studies_ For experiment details of this section, please refer to the supplementary. * **Performance Comparison on different model structures and datasets** Our method is solid for various model structures and datasets. * **Performance Comparison on different model representation** It is better to use the models' later layer's \begin{table} \begin{tabular}{c c c|c c c c c c c|c|c} \hline Dataset & Partition & \(m\) & \(K\) & DeDES & AS & CV & DS & RS & FedAvg & MeanAvg & LD & Oracle \\ \hline \multirow{3}{*}{EMNIST Digits (VGG-5 Spinal FC)} & homo & 400 & 150 & 98.03 & 98.10 & **98.10** & 98.08 & 98.07 & 10.28 & 10.26 & 98.10 & 99.74 \\ \cline{2-13} & iid-dq & 400 & 150 & **99.27** & 98.75 & 98.93 & 98.88 & 98.72 & 10.51 & 10.48 & 99.27 & 99.71 \\ \cline{2-13} & noniid-ld & 400 & 150 & **97.67** & 96.99 & 95.47 & 91.70 & 96.67 & 10.01 & 9.89 & 92.86 & 99.72 \\ \cline{2-13} & noniid-l3 & 400 & 150 & **98.21** & 97.96 & 97.87 & 63.59 & 94.35 & 10.11 & 10.09 & 98.13 & 99.61 \\ \hline \multirow{3}{*}{EMNIST Letters (VGG-5 Spinal FC)} & homo & 200 & 120 & 88.64 & 88.77 & **88.88** & 88.82 & 88.68 & 3.72 & 3.71 & 88.77 & 95.12 \\ \cline{2-13} & iid-dq & 200 & 120 & 92.32 & 92.19 & 91.97 & **92.33** & 92.13 & 3.84 & 3.82 & 92.33 & 95.12 \\ \cline{2-13} & noniid-ld & 200 & 120 & **87.93** & 87.74 & 86.52 & 83.45 & 87.45 & 4.03 & 4.02 & 85.01 & 94.90 \\ \cline{2-13} & noniid-l8 & 200 & 120 & **89.10** & 87.93 & 84.40 & 86.98 & 85.95 & 3.85 & 3.84 & 87.54 & 95.06 \\ \hline \multirow{3}{*}{EMNIST Balanced (VGG-5 Spinal FC)} & homo & 100 & 50 & **85.19** & 84.94 & 85.10 & 84.96 & 84.96 & 2.10 & 2.11 & 84.83 & 89.70 \\ \cline{2-13} & iid-dq & 100 & 50 & **87.34** & 87.28 & 87.31 & **87.35** & 86.90 & 2.04 & 2.04 & 87.35 & 89.25 \\ \cline{2-13} & noniid-ld & 100 & 50 & **83.43** & 82.72 & 78.65 & 79.44 & 81.89 & 2.19 & 2.16 & 77.28 & 89.48 \\ \cline{2-13} & noniid-l18 & 100 & 50 & **85.43** & 82.99 & 81.22 & 81.02 & 81.93 & 2.09 & 2.08 & 82.87 & 89.52 \\ \hline \multirow{3}{*}{CIFAR10 (Resnet-50)} & homo & 200 & 100 & 32.08 & **32.09** & 32.07 & 30.78 & 30.30 & 10.18 & 9.69 & 32.08 & 88.68 \\ \cline{2-13} & iid-dq & 200 & 100 & 36.97 & 38.49 & 38.84 & **39.03** & 36.66 & 10.04 & 10.03 & 38.81 & 88.10 \\ \cline{2-13} & noniid-ld & 200 & 100 & **29.71** & 29.23 & 26.02 & 29.10 & 26.67 & 9.89 & 9.88 & 28.94 & 87.31 \\ \cline{2-13} & noniid-l4 & 200 & 100 & **34.40** & 33.50 & 32.24 & 30.00 & 33.05 & 10.02 & 9.87 & 34.15 & 89.67 \\ \hline \multirow{3}{*}{CIFAR100 (Resnet-50)} & 
| Dataset | Partition | \(m\) | \(K\) | DeDES | AS | CV | DS | RS | FedAvg | MeanAvg | LD | Oracle |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EMNIST Digits (VGG-5 Spinal FC) | homo | 400 | 150 | 98.03 | 98.10 | **98.10** | 98.08 | 98.07 | 10.28 | 10.26 | 98.10 | 99.74 |
| | iid-dq | 400 | 150 | **99.27** | 98.75 | 98.93 | 98.88 | 98.72 | 10.51 | 10.48 | 99.27 | 99.71 |
| | noniid-ld | 400 | 150 | **97.67** | 96.99 | 95.47 | 91.70 | 96.67 | 10.01 | 9.89 | 92.86 | 99.72 |
| | noniid-l3 | 400 | 150 | **98.21** | 97.96 | 97.87 | 63.59 | 94.35 | 10.11 | 10.09 | 98.13 | 99.61 |
| EMNIST Letters (VGG-5 Spinal FC) | homo | 200 | 120 | 88.64 | 88.77 | **88.88** | 88.82 | 88.68 | 3.72 | 3.71 | 88.77 | 95.12 |
| | iid-dq | 200 | 120 | 92.32 | 92.19 | 91.97 | **92.33** | 92.13 | 3.84 | 3.82 | 92.33 | 95.12 |
| | noniid-ld | 200 | 120 | **87.93** | 87.74 | 86.52 | 83.45 | 87.45 | 4.03 | 4.02 | 85.01 | 94.90 |
| | noniid-l8 | 200 | 120 | **89.10** | 87.93 | 84.40 | 86.98 | 85.95 | 3.85 | 3.84 | 87.54 | 95.06 |
| EMNIST Balanced (VGG-5 Spinal FC) | homo | 100 | 50 | **85.19** | 84.94 | 85.10 | 84.96 | 84.96 | 2.10 | 2.11 | 84.83 | 89.70 |
| | iid-dq | 100 | 50 | **87.34** | 87.28 | 87.31 | **87.35** | 86.90 | 2.04 | 2.04 | 87.35 | 89.25 |
| | noniid-ld | 100 | 50 | **83.43** | 82.72 | 78.65 | 79.44 | 81.89 | 2.19 | 2.16 | 77.28 | 89.48 |
| | noniid-l18 | 100 | 50 | **85.43** | 82.99 | 81.22 | 81.02 | 81.93 | 2.09 | 2.08 | 82.87 | 89.52 |
| CIFAR10 (ResNet-50) | homo | 200 | 100 | 32.08 | **32.09** | 32.07 | 30.78 | 30.30 | 10.18 | 9.69 | 32.08 | 88.68 |
| | iid-dq | 200 | 100 | 36.97 | 38.49 | 38.84 | **39.03** | 36.66 | 10.04 | 10.03 | 38.81 | 88.10 |
| | noniid-ld | 200 | 100 | **29.71** | 29.23 | 26.02 | 29.10 | 26.67 | 9.89 | 9.88 | 28.94 | 87.31 |
| | noniid-l4 | 200 | 100 | **34.40** | 33.50 | 32.24 | 30.00 | 33.05 | 10.02 | 9.87 | 34.15 | 89.67 |
| CIFAR100 (ResNet-50) | homo | 20 | 12 | 20.84 | **22.84** | 20.58 | 20.65 | 20.48 | 0.99 | 0.99 | 20.85 | 59.81 |
| | iid-dq | 20 | 12 | 47.38 | 47.37 | 47.38 | 47.38 | 25.10 | 1.00 | 0.94 | 47.38 | 60.35 |
| | noniid-ld | 20 | 12 | 16.31 | **18.71** | 15.97 | 16.15 | 15.78 | 0.96 | 0.97 | 15.32 | 60.38 |
| | noniid-l45 | 20 | 12 | 21.29 | **23.68** | 20.56 | 20.26 | 19.97 | 0.92 | 0.91 | 19.61 | 61.74 |

TABLE II: Test accuracy (%) comparison for different datasets, data partitions, and model structures. The best and next-best methods are **bolded** and underlined, respectively (in the original, the value of the _LD_ ground-truth method is marked in skyblue whenever our DeDES method beats it).

Fig. 4: The relationship of \(K\) and the ensemble test accuracy of DeDES for the _EMNIST Letters_ dataset when \(m\)=400.

## VI Conclusion

This paper presents a novel _Data-Free Diversity-Based_ method called DeDES to address the ensemble selection problem for models generated by one-shot federated learning. Experiments demonstrate that our method achieves both better performance and higher efficiency for various model structures and datasets, especially for non-IID data partitions. To our knowledge, this is the first paper to systematically address the ensemble selection problem for one-shot federated learning, which is essential for applications such as machine learning model markets. In the future, we will focus on the issue of heterogeneous model structures, propose more robust and useful model representation techniques, and design better voting methods to further improve ensemble performance and efficiency.
2308.12203
A Robust ADMM-Based Optimization Algorithm For Underwater Acoustic Channel Estimation
Accurate estimation of the Underwater acoustic (UWA) channel is a key part of underwater communications, especially for coherent systems. The severe multipath effects and large delay spreads make the estimation problem large-scale. The non-stationary, non-Gaussian, and impulsive nature of ocean ambient noise poses further obstacles to the design of estimation algorithms. Under the framework of compressed sensing (CS), this work addresses the issue of robust channel estimation when measurements are contaminated by impulsive noise. A first-order algorithm based on the alternating direction method of multipliers (ADMM) is proposed. Numerical simulations of time-varying channel estimation are performed to show its improved performance in highly impulsive noise environments.
Tian Tian, Agastya Raj, Bruno Missi Xavier, Ying Zhang, Feiyun Wu, Kunde Yang
2023-08-23T15:39:44Z
http://arxiv.org/abs/2308.12203v2
# A Robust ADMM-Based Optimization Algorithm For Underwater Acoustic Channel Estimation ###### Abstract Accurate estimation of the Underwater acoustic (UWA) channel is a key part of underwater communications, especially for coherent systems. The severe multipath effects and large delay spreads make the estimation problem large-scale. The non-stationary, non-Gaussian, and impulsive nature of ocean ambient noise poses further obstacles to the design of estimation algorithms. Under the framework of compressed sensing (CS), this work addresses the issue of robust channel estimation when measurements are contaminated by impulsive noise. A first-order algorithm based on the alternating direction method of multipliers (ADMM) is proposed. Numerical simulations of time-varying channel estimation are performed to show its improved performance in highly impulsive noise environments. robust channel estimation; compressed sensing; alternating direction method of multipliers (ADMM)

## I Introduction

The discrete input-output relationship of transmitting a signal through a UWA channel can often be simplified to the following linear expression \[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{n}, \tag{1}\] where \(\mathbf{y}\in\mathbb{C}^{M}\) is the observation vector of the discrete received signal/symbols, \(\mathbf{x}\in\mathbb{C}^{N}\) is the vector containing the unknown channel parameters to be estimated, \(\mathbf{A}\in\mathbb{C}^{M\times N}\) is the matrix constructed by training signal/symbols, and \(\mathbf{n}\) is a vector related to noise. The challenge of solving (1) in the UWA environment is that the system is usually underdetermined, which means that the number of training symbols or equations is less than that of unknown parameters. Fortunately, the essentially sparse nature of UWA channels makes it possible to solve (1) under the CS framework. One of the most frequently used methods is to convert (1) to the \(\ell_{1}\)-regularized least squares problem \[\min_{\mathbf{x}}\ ||\mathbf{y}-\mathbf{A}\mathbf{x}||_{2}^{2}+\lambda||\mathbf{x}||_{1}, \tag{2}\] where \(\lambda>0\) is the regularization parameter. However, when the distribution of the noise \(\mathbf{n}\) is heavy-tailed or the measurements \(\mathbf{y}\) contain impulsive interference, the performance of sparse estimation algorithms based on (2) will be degraded. In this case, the cost function \[\min_{\mathbf{x}}\ \tau||\mathbf{y}-\mathbf{A}\mathbf{x}||_{1}+||\mathbf{x}||_{1} \tag{3}\] is more robust, because the \(\ell_{1}\)-norm is less sensitive to outliers than the squared error [1]. Optimizing (3) directly is difficult since both parts are non-differentiable. ADMM provides a feasible framework for solving the above issue [2, 3]. Combined with the proximal gradient method (PGM) [4], an algorithm with high computational efficiency and good robustness against impulsive interference is developed in this work.

## II The Proposed Algorithm

### _General Framework_

The optimization problem in (3) can be equivalently reformulated as \[\min_{\mathbf{x},\mathbf{z}}\tau||\mathbf{z}||_{1}+||\mathbf{x}||_{1}\] (4) subject to \[\mathbf{A}\mathbf{x}+\mathbf{z}=\mathbf{y}\] by introducing an auxiliary variable \(\mathbf{z}=\mathbf{y}-\mathbf{A}\mathbf{x}\).
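The robustness argument behind (3) can be seen in a toy computation (an illustration only, not part of the paper's experiments): at the true parameter vector, a single impulsive outlier inflates the squared-error cost of (2) quadratically, while entering the \(\ell_{1}\) data-fidelity term of (3) only linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x = np.array([1.0, 0.0, 0.0, -2.0, 0.0])   # sparse "true" channel vector
y = A @ x
y[3] += 50.0                               # a single impulsive outlier

r = y - A @ x                              # residual at the true solution
print(np.sum(np.abs(r) ** 2))              # squared-error cost: 2500.0
print(np.sum(np.abs(r)))                   # l1 cost: 50.0, only linear growth
```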
Accordingly, the augmented Lagrangian function (ALF) of (4) is \[L_{\rho}(\mathbf{x},\mathbf{z},\boldsymbol{\gamma})=||\mathbf{x}||_{1}+\tau||\mathbf{z}||_{1}+\frac{\rho}{2}||\mathbf{z}+\mathbf{A}\mathbf{x}-\mathbf{y}+\boldsymbol{\gamma}/\rho||_{2}^{2}-\frac{1}{2\rho}||\boldsymbol{\gamma}||_{2}^{2}, \tag{5}\] where \(\boldsymbol{\gamma}\) is the vector of dual variables associated with the equality constraint, and \(\rho>0\) is the penalty factor of the ALF. Under the framework of ADMM, minimization of the ALF is decomposed into two subproblems, and the primary variable \(\mathbf{x}\) and the auxiliary variable \(\mathbf{z}\) are updated alternately by \[\mathbf{x}^{(k+1)}=\underset{\mathbf{x}}{\mathrm{argmin}}\ f_{1}(\mathbf{x})+f_{2}(\mathbf{x}) \tag{6a}\] \[\mathbf{z}^{(k+1)}=\underset{\mathbf{z}}{\mathrm{argmin}}\ g_{1}(\mathbf{z})+g_{2}(\mathbf{z}) \tag{6b}\] \[\boldsymbol{\gamma}^{(k+1)}=\boldsymbol{\gamma}^{(k)}+\rho(\mathbf{z}^{(k+1)}+\mathbf{A}\mathbf{x}^{(k+1)}-\mathbf{y}) \tag{6c}\] with \[f_{1}(\mathbf{x})\triangleq\frac{\rho}{2}||\mathbf{z}^{(k)}+\mathbf{A}\mathbf{x}-\mathbf{y}+\boldsymbol{\gamma}^{(k)}/\rho||_{2}^{2},\quad f_{2}(\mathbf{x})\triangleq||\mathbf{x}||_{1}, \tag{7}\] \[g_{1}(\mathbf{z})\triangleq\frac{\rho}{2}||\mathbf{z}+\mathbf{A}\mathbf{x}^{(k+1)}-\mathbf{y}+\boldsymbol{\gamma}^{(k)}/\rho||_{2}^{2},\quad g_{2}(\mathbf{z})\triangleq\tau||\mathbf{z}||_{1}.\] One can observe from (7) that the objective functions of both the \(\mathbf{x}\)- and \(\mathbf{z}\)-subproblems consist of a smooth, convex, quadratic function and a convex, non-differentiable, \(\ell_{1}\)-norm term. Such problems can be solved efficiently by PGM. The proximal operator is defined as [4] \[\mathrm{prox}_{f,t}(x)=\underset{x^{+}}{\mathrm{argmin}}\ \frac{1}{2t}||x^{+}-x||_{2}^{2}+f(x^{+}), \tag{8}\] which is also called the proximal mapping of the function \(f\) with step-size parameter \(t\). Comparing (6a), (6b) and (7) with the definition of the proximal operator, one can see that the \(\mathbf{z}\)-subproblem conforms to the proximal mapping, while the \(\mathbf{x}\)-subproblem does not, due to the presence of the matrix \(\mathbf{A}\). One commonly used strategy to address this issue is to linearize the intractable part of the subproblem [5]. Specifically, for the \(\mathbf{x}\)-subproblem here, the differentiable function \(f_{1}(\mathbf{x})\) can be approximated by its first-order Taylor expansion at the previous iteration's solution \(\mathbf{x}^{(k)}\). Omitting the constant terms related to \(\mathbf{x}^{(k)}\), the approximate form of \(f_{1}(\mathbf{x})\) can be expressed as \[\tilde{f}_{1}(\mathbf{x})\triangleq\frac{1}{2t_{x}}||\mathbf{x}-(\mathbf{x}^{(k)}-t_{x}\nabla f_{1}(\mathbf{x}^{(k)}))||_{2}^{2}, \tag{9}\] where \(\nabla f_{1}(\cdot)\) represents the gradient of the function \(f_{1}\). Replacing \(f_{1}(\mathbf{x})\) with \(\tilde{f}_{1}(\mathbf{x})\) and substituting it into (6a), both the \(\mathbf{x}\)- and \(\mathbf{z}\)-subproblems can now be solved by PGM \[\mathbf{x}^{(k+1)}=\mathrm{prox}_{f_{2},t_{x}}\left(\mathbf{x}^{(k)}-t_{x}\nabla f_{1}(\mathbf{x}^{(k)})\right) \tag{10}\] \[\mathbf{z}^{(k+1)}=\mathrm{prox}_{g_{2},t_{z}}\left(\mathbf{z}^{(k)}-t_{z}\nabla g_{1}(\mathbf{z}^{(k)})\right) \tag{11}\] with the gradient vectors defined as \[\nabla f_{1}(\mathbf{x}^{(k)})=\rho\mathbf{A}^{H}(\mathbf{z}^{(k)}+\mathbf{A}\mathbf{x}^{(k)}-\mathbf{y}+\boldsymbol{\gamma}^{(k)}/\rho) \tag{12}\] \[\nabla g_{1}(\mathbf{z}^{(k)})=\rho(\mathbf{z}^{(k)}+\mathbf{A}\mathbf{x}^{(k+1)}-\mathbf{y}+\boldsymbol{\gamma}^{(k)}/\rho). \tag{13}\]
Given that \(f_{2}(\mathbf{x})\) and \(g_{2}(\mathbf{z})\) both utilize the \(\ell_{1}\)-norm regularization function, the separable property of the \(\ell_{1}\)-norm can be leveraged to simplify the evaluation of the proximal operators in (10) and (11) to one-dimensional minimization problems. The resulting problems can be solved by using the soft-thresholding operator \[\mathcal{S}_{\alpha}(\beta)\triangleq\frac{\max(|\beta|-\alpha,0)}{\max(|\beta|-\alpha,0)+\alpha}\beta, \tag{14}\] which leads to \[\mathbf{x}^{(k+1)}=\mathcal{S}_{t_{x}/\rho}\left(\mathbf{x}^{(k)}-t_{x}/\rho\cdot\nabla f_{1}(\mathbf{x}^{(k)})\right) \tag{15}\] \[\mathbf{z}^{(k+1)}=\mathcal{S}_{t_{z}\tau/\rho}\left(\mathbf{z}^{(k)}-t_{z}/\rho\cdot\nabla g_{1}(\mathbf{z}^{(k)})\right). \tag{16}\]

### _Step-Size Parameter Setting_

The step-size parameter \(t\) in PGM is related to the Lipschitz constant (see Sec. 4 of [6]) of the differentiable part of the objective function. Let \(L_{x}\) and \(L_{z}\) denote the Lipschitz constants of the functions \(f_{1}(\mathbf{x})\) and \(g_{1}(\mathbf{z})\), respectively. According to (7), we have \(L_{x}=\lambda_{\max}(\mathbf{A}^{H}\mathbf{A})\) and \(L_{z}=1\), where \(\lambda_{\max}(\cdot)\) represents the maximum eigenvalue of the given matrix. When the Lipschitz constant is known, the step-size parameter of PGM can be set to its reciprocal, i.e., we can assign the fixed step-size parameters \(t_{x}=1/\lambda_{\max}(\mathbf{A}^{H}\mathbf{A})\) and \(t_{z}=1\). However, for large-scale problems such as the large delay-spread channel estimation problem in the UWA environment, the maximum eigenvalue of \(\mathbf{A}^{H}\mathbf{A}\) might be expensive to evaluate. In this case, adopting a simple backtracking line search strategy to adjust the step-size parameter adaptively is an efficient way to avoid the costly computation of the Lipschitz constant.

### _Residues and Stopping Criteria_

Convergence measures of the ADMM algorithm are derived from the primal and dual feasibility conditions, and the residues of these optimality conditions are often used to define the termination criterion for the ADMM iterations. Boyd et al. give the definitions of the primal and dual residues of the standard ADMM algorithm (see Sec. 3.3 of [2]). Instead of minimizing \(L_{\rho}(\mathbf{x},\mathbf{z}^{(k)},\boldsymbol{\gamma}^{(k)})\), \(\mathbf{x}^{(k+1)}\) here minimizes a linearized ALF comprised of \(\tilde{f}_{1}(\mathbf{x})\), as defined in (9). Following a derivation similar to that in [2], the primal residue \(\mathbf{r}_{p}\) and dual residue \(\mathbf{r}_{d}\) of the proposed algorithm are defined as \[\mathbf{r}_{p}^{(k+1)}=\mathbf{A}\mathbf{x}^{(k+1)}+\mathbf{z}^{(k+1)}-\mathbf{y} \tag{17}\] \[\mathbf{r}_{d}^{(k+1)}=\rho\mathbf{A}^{H}(\mathbf{r}_{p}^{(k+1)}-\mathbf{r}_{p}^{(k)})-\frac{1}{t_{x}^{(k+1)}}(\mathbf{x}^{(k+1)}-\mathbf{x}^{(k)}). \tag{18}\] The iteration of the proposed algorithm terminates when the conditions \[||\mathbf{r}_{p}^{(k)}||_{2}\leq\epsilon_{p}^{(k)}\quad\text{and}\quad||\mathbf{r}_{d}^{(k)}||_{2}\leq\epsilon_{d}^{(k)} \tag{19}\] are satisfied, where \(\epsilon_{p}^{(k)}\) and \(\epsilon_{d}^{(k)}\) can be updated via an absolute and a relative criterion: \[\epsilon_{p}^{(k)}=\sqrt{M}\epsilon_{\text{abs}}+\epsilon_{\text{rel}}\max\{||\mathbf{A}\mathbf{x}^{(k)}||_{2},||\mathbf{z}^{(k)}||_{2},||\mathbf{y}||_{2}\} \tag{20}\] \[\epsilon_{d}^{(k)}=\sqrt{N}\epsilon_{\text{abs}}+\epsilon_{\text{rel}}||\mathbf{A}^{H}\boldsymbol{\gamma}^{(k)}||_{2}. \tag{21}\] Above, \(\epsilon_{\text{abs}}\) and \(\epsilon_{\text{rel}}\) are absolute and relative tolerances, respectively, and \(M\) and \(N\) represent the dimensions of the matrix \(\mathbf{A}\) (i.e., \(\mathbf{A}\in\mathbb{C}^{M\times N}\)).
### _Penalty Parameter Tuning_

The penalty parameter \(\rho\) in the ALF plays an important role in achieving a good convergence rate for the ADMM algorithm. A larger value of \(\rho\) imposes a larger penalty on violations of the equality constraint (see (5)), leading to a smaller primal residue. Conversely, the definition of the dual residue in (18) suggests that a smaller value of \(\rho\) contributes to reducing the dual residue. To balance these two residues and ensure their convergence to zero as the iteration proceeds, as well as to make the performance of the ADMM algorithm less dependent on the initial choice of \(\rho\), a simple adjustment scheme that often works well in practice is given by [7] \[\rho^{(k+1)}=\left\{\begin{array}{ll}\delta^{\text{incr}}\rho^{(k)}&\text{if }||\mathbf{r}_{p}^{(k)}||_{2}>\xi||\mathbf{r}_{d}^{(k)}||_{2}\\ \delta^{\text{decr}}\rho^{(k)}&\text{if }||\mathbf{r}_{d}^{(k)}||_{2}>\xi||\mathbf{r}_{p}^{(k)}||_{2}\\ \rho^{(k)}&\text{otherwise.}\end{array}\right. \tag{22}\] A typical choice of the constant parameters in (22) is \(\xi=10\) and \(\delta^{\text{incr}}=\delta^{\text{decr}}=2\). Finally, the complete procedure of the proposed algorithm is summarized below. Here, \(J(\mathbf{x})\) denotes the cost function given in (3), and \(G(\mathbf{z})=g_{1}(\mathbf{z})+g_{2}(\mathbf{z})\) corresponds to the objective function defined in (6b).
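The iteration can be summarized in the following NumPy sketch, assembled from (15), (16), (6c), and (17)–(22). It is an illustrative reconstruction under simplifying assumptions, using the fixed step sizes \(t_{x}=1/\lambda_{\max}(\mathbf{A}^{H}\mathbf{A})\) and \(t_{z}=1\) in place of the backtracking line search, and is not the authors' reference implementation.

```python
import numpy as np

def robust_admm(A, y, tau, rho=1.0, max_iter=500,
                eps_abs=1e-3, eps_rel=1e-2, xi=10.0, delta=2.0):
    """Sketch of the proposed ADMM for min_x tau*||y - A x||_1 + ||x||_1."""
    M, N = A.shape
    x = np.zeros(N, dtype=complex)
    z = y.astype(complex).copy()           # z = y - A x at x = 0
    gamma = np.zeros(M, dtype=complex)     # dual variable
    t_x = 1.0 / np.linalg.eigvalsh(A.conj().T @ A).max()
    t_z = 1.0

    def soft(beta, alpha):                 # complex soft-thresholding (14)
        m = np.maximum(np.abs(beta) - alpha, 0.0)
        return m / (m + alpha + 1e-15) * beta

    r_p = A @ x + z - y
    for _ in range(max_iter):
        x_old, r_p_old = x, r_p
        # x-update (15): gradient step on the linearized ALF, then shrinkage
        grad_f1 = rho * (A.conj().T @ (z + A @ x - y + gamma / rho))
        x = soft(x - (t_x / rho) * grad_f1, t_x / rho)
        # z-update (16)
        grad_g1 = rho * (z + A @ x - y + gamma / rho)
        z = soft(z - (t_z / rho) * grad_g1, t_z * tau / rho)
        # dual update (6c) and residues (17)-(18)
        r_p = A @ x + z - y
        gamma = gamma + rho * r_p
        r_d = rho * (A.conj().T @ (r_p - r_p_old)) - (x - x_old) / t_x
        # stopping criteria (19)-(21)
        eps_p = np.sqrt(M) * eps_abs + eps_rel * max(
            np.linalg.norm(A @ x), np.linalg.norm(z), np.linalg.norm(y))
        eps_d = np.sqrt(N) * eps_abs + eps_rel * np.linalg.norm(A.conj().T @ gamma)
        if np.linalg.norm(r_p) <= eps_p and np.linalg.norm(r_d) <= eps_d:
            break
        # penalty adaptation (22)
        if np.linalg.norm(r_p) > xi * np.linalg.norm(r_d):
            rho *= delta
        elif np.linalg.norm(r_d) > xi * np.linalg.norm(r_p):
            rho /= delta
    return x
```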
## III Numerical Simulations

In this section, the performance of the proposed algorithm is tested on the sparse channel estimation problem. A PN sequence, BPSK-modulated at baseband at a rate of 5 kbaud, is used as the probe signal. Two first-order methods widely used in engineering, orthogonal matching pursuit (OMP) [8] and the fast iterative shrinkage-thresholding algorithm (FISTA) [9], are adopted for comparison. Samples of the time-varying shallow-water channel impulse response (CIR) are generated by the model introduced in [10]. Fig. 1 shows the simulated CIR and the variation of the instantaneous channel gain.

Fig. 1: The simulated time-varying shallow water channel. (a) CIR. (b) Instantaneous channel gain.

A two-component Gaussian mixture noise (GMN) model is applied to simulate the received signal contaminated by impulsive noise: \[P(\mathbf{n}[i])=(1-q)\mathcal{N}(0,\sigma_{W}^{2})+q\mathcal{N}(0,\sigma_{I}^{2}),\quad i=1,\cdots,N \tag{23}\] where \(q\) represents the probability of occurrence of impulsive noise, \(\mathcal{N}(\cdot)\) is the complex Gaussian distribution function, and \(\sigma_{W}^{2}\) and \(\sigma_{I}^{2}\) are the variances of the white Gaussian noise (WGN) and the impulsive noise, respectively. Let \(\sigma_{S}^{2}\) denote the transmitted signal power; the signal-to-noise ratio (SNR), interference-to-noise ratio (INR), and signal-to-interference-plus-noise ratio (SINR) are then \(\text{SNR}=10\log_{10}(\sigma_{S}^{2}/\sigma_{W}^{2})\), \(\text{INR}=10\log_{10}(\sigma_{I}^{2}/\sigma_{W}^{2})\), and \(\text{SINR}=10\log_{10}(\sigma_{S}^{2}/(\sigma_{W}^{2}+\sigma_{I}^{2}))\), respectively. To evaluate the performance of the proposed algorithm, we conducted simulations under three noise conditions, including an additive white Gaussian noise (AWGN) environment with an SNR of 15 dB. For the impulsive noise environments, the WGN level is set to the same level as in the AWGN environment (i.e., SNR \(=15\) dB), and the INR is set to 40 dB and 50 dB. The corresponding SINR values for these two noise environments are approximately 1.83 dB and \(-8.90\) dB, respectively.
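The GMN model (23) is straightforward to sample; the following snippet is an illustration (the value of \(q\) is an example choice, not a parameter reported in this excerpt) of generating complex impulsive noise at a prescribed WGN variance and INR.

```python
import numpy as np

def gmn_noise(N, sigma_w2, inr_db, q=0.01, rng=np.random.default_rng()):
    """Sample N points of the two-component Gaussian mixture noise, eq. (23).

    sigma_w2: variance of the white Gaussian background noise
    inr_db:   interference-to-noise ratio 10*log10(sigma_I^2 / sigma_W^2)
    q:        probability of an impulsive sample (example value)
    """
    sigma_i2 = sigma_w2 * 10.0 ** (inr_db / 10.0)
    var = np.where(rng.random(N) < q, sigma_i2, sigma_w2)
    # circularly symmetric complex Gaussian with per-sample variance
    scale = np.sqrt(var / 2.0)
    return scale * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# e.g. noise for the INR = 50 dB environment:
n = gmn_noise(4096, sigma_w2=1.0, inr_db=50.0)
```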
During the simulations, the number of iterations of the OMP algorithm is set to its optimal value. The regularization parameters of the FISTA and proposed algorithms are set to \(\lambda=0.01\lambda_{\infty}\) and \(\tau=1/(0.04\lambda_{\infty})\), respectively, where \(\lambda_{\infty}=||2\mathbf{A}^{H}\mathbf{y}||_{\infty}\). The step-size scale factor \(\eta\) in the backtracking line search is set to 1.5 for both algorithms. For the proposed algorithm, the initial penalty parameter \(\rho^{(0)}\) is set to 1, and \(\epsilon_{\text{abs}}=10^{-3}\) and \(\epsilon_{\text{rel}}=10^{-2}\) are used as stopping criteria.

Fig. 2 shows the estimated CIR matrices produced by the three algorithms under different noise conditions. The corresponding estimated samples obtained from the CIR matrices are shown in Fig. 3. From Fig. 2 and Fig. 3, one can see that all three algorithms perform similarly in the AWGN environment. However, the performance of the OMP and FISTA algorithms degrades significantly in the impulsive noise environments: the estimated CIR samples contain a large number of noisy taps that are supposed to be inactive. In contrast, the proposed algorithm maintains stable performance and provides accurate estimates with small errors under all three noise conditions.

Fig. 2: Estimated CIR matrices under different noise environments. (a) AWGN, (b) INR = 40 dB, (c) INR = 50 dB.

Fig. 3: Comparison of estimated CIR samples under different noise environments. (a) AWGN, (b) INR = 40 dB, (c) INR = 50 dB.

The NMSD-versus-iteration curves are plotted in Fig. 4. The NMSD, which measures the estimation accuracy, is defined as \(\text{NMSD}=20\log_{10}(||\mathbf{x}^{*}-\hat{\mathbf{x}}||_{2}/||\mathbf{x}^{*}||_{2})\), where \(\mathbf{x}^{*}\) is the true CIR and \(\hat{\mathbf{x}}\) is the estimated channel response. The NMSD curves show that the proposed algorithm achieves the lowest NMSD values, around \(-16\) dB, across all three noise environments and converges within just a few tens of iterations. In addition, the convergence behavior in terms of the objective function values, as well as the \(\ell_{2}\)-norms of the primal and dual residues of the proposed algorithm, is illustrated in Fig. 5.

Fig. 4: Comparison of normalized mean-square deviation (NMSD) curves under different noise conditions. (a) AWGN, (b) INR = 40 dB, (c) INR = 50 dB.

Fig. 5: Convergence behavior of the proposed algorithm under different noise conditions. (a) objective function, (b) the norm of the primal residue, and (c) the norm of the dual residue (see (17)–(19)). The grey lines show the convergence behavior of single estimation processes, while the red line shows the average over all samples.

We summarize the average number of iterations, the runtime for a single estimation, and the final NMSD values of the three algorithms in Tab. I. The results highlight the robustness of the proposed algorithm in impulsive noise environments.

## IV Conclusions

This work developed a robust ADMM-based channel estimation algorithm suitable for impulsive noise environments. The proposed algorithm has low complexity and is easy to implement. We evaluate its performance on the sparse channel estimation problem and compare it with the popular OMP and FISTA algorithms. The results highlight the effectiveness and robustness of the proposed algorithm, making it a practical choice for channel estimation in underwater acoustic communication systems.

## V Acknowledgement

This research has been funded in part by the National Natural Science Foundation of China (Project No. 62171369), and in part by the Key Program of the National Natural Science Foundation of China (Grant No. 52231013). Additionally, this work is also sponsored by the China Scholarship Council.
2302.08200
Weak Similarity in Higher-Order Mathematical Operational Semantics
Higher-order abstract GSOS is a recent extension of Turi and Plotkin's framework of Mathematical Operational Semantics to higher-order languages. The fundamental well-behavedness property of all specifications within the framework is that coalgebraic strong (bi)similarity on their operational model is a congruence. In the present work, we establish a corresponding congruence theorem for weak similarity, which is shown to instantiate to well-known concepts such as Abramsky's applicative similarity for the lambda-calculus. On the way, we develop several techniques of independent interest at the level of abstract categories, including relation liftings of mixed-variance bifunctors and higher-order GSOS laws, as well as Howe's method.
Henning Urbat, Stelios Tsampas, Sergey Goncharov, Stefan Milius, Lutz Schröder
2023-02-16T10:31:45Z
http://arxiv.org/abs/2302.08200v4
# Weak Similarity in Higher-Order Mathematical Operational Semantics ###### Abstract Higher-order abstract GSOS is a recent extension of Turi and Plotkin's framework of Mathematical Operational Semantics to higher-order languages. The fundamental well-behavedness property of all specifications within the framework is that coalgebraic strong (bi)similarity on their operational model is a congruence. In the present work, we establish a corresponding congruence theorem for _weak_ similarity, which is shown to instantiate to well-known concepts such as Abramsky's applicative similarity for the \(\lambda\)-calculus. On the way, we develop several techniques of independent interest at the level of abstract categories, including relation liftings of mixed-variance bifunctors and higher-order GSOS laws, as well as Howe's method. ## I Introduction Following the emergence of structural approaches to operational semantics (SOS), e.g. [28, 34], operational reasoning has developed into a widely used methodology in formal reasoning on higher-order languages. Numerous powerful operational techniques have been developed, tested, and refined, such as logical relations [36, 35, 33, 15] and Howe's method [24, 25, 14]. These methods have been found to be quite robust, being capable of providing solutions to challenging problems such as congruence proofs and reasoning about contextual equivalence, even in rather involved settings such as effectful, e.g. nondeterministic, higher-order languages. Unfortunately, such power comes at a price. Operational methods are known to be both complex, requiring a daunting amount of machinery in order to be instantiated, and specialized, in the sense that they need to be developed on a per-case basis, and any small perturbation in the problem setting may break earlier machinery. A key ingredient that is needed to alleviate these issues is a sufficiently general rigorous notion of _SOS specification of programming language semantics_; without it, reasoning is inevitably bound to specific instances of SOS specifications, and the only 'free' mathematical principle is induction on the structure of terms. Capturing the essence of SOS in a single, precise definition in order to reason at a greater level of generality has thus been a topic of lasting interest. _Rule formats_ such as GSOS [5] provide a handle to reason about classes of languages, as opposed to one language at a time. For instance, the property that bisimilarity is a congruence holds for any language adhering to the GSOS format. On a more abstract and conceptual level, Turi and Plotkin's framework of Mathematical Operational Semantics [37], a.k.a. _abstract GSOS_, shows that rule formats such as GSOS are instances of a general principle, namely that operational rules amount to certain natural transformations, so-called _GSOS laws_. Abstract GSOS has been instantiated in quite diverse settings [3, 29, 17, 32, 19]. In recent work [20] we have reconciled Turi and Plotkin's ideas, originally applicable only in first-order settings, with higher-order languages. The main insight is that _dinatural_ transformations are able to express higher-order operational rules in ways that the original approach based on naturality could not. 
Like a classical GSOS law, a higher-order GSOS law is a form of distributive law of a syntax functor \(\Sigma\) over a behaviour functor \(B\), but in the context of higher-order languages, \(B\) in general needs to be a mixed-variance bifunctor, in the sense that it depends covariantly on the set of states or terms when these appear as results of functions, and contravariantly when they are used as arguments of functions. It is this phenomenon of mixed variance that necessitates the use of dinatural transformations. The main result of [20] is that the operational semantics of a higher-order GSOS law is _compositional_: for the initial (term) model \(\mu\Sigma\), coalgebraic bisimilarity for the endofunctor \(B(\mu\Sigma,-)\) is a congruence. For instance, in the case where \(B(X,Y)\) is the behaviour bifunctor for the \(\lambda\)-calculus, this instantiates to a _strong_ variant of Abramsky's _applicative bisimilarity_ [1], which unlike applicative bisimilarity proper makes \(\beta\)-reductions observable. The main contribution of the present paper is a generalization of our previous congruence result [20] from strong bisimilarity to _weak (bi)similarity_. It applies to higher-order GSOS laws whose initial model forms a _higher-order lax bialgebra_, extending the corresponding first-order concept [6]. When instantiated to the call-by-name \(\lambda\)-calculus, weak (bi)similarity amounts to standard applicative (bi)similarity. Hence we obtain a more useful general compositionality theorem, an instance of which is the classical result that applicative bisimilarity (rather than a previously unstudied notion of strong applicative bisimilarity as in [20]) in the call-by-name \(\lambda\)-calculus is a congruence [1]. Our approach is parameterized in such a way that strong similarity is an instance of weak similarity, so our main result subsumes that of [20]. The passage from strong to weak similarity comes with a number of technical challenges; most notably, simple and well-established proof techniques such as coinduction up to congruence now fail. To prove our main theorem, we develop an abstract categorical version of Howe's method (Proposition VIII.5). The abstraction depends centrally on new notions of bifunctorial graph and relation liftings (applied to liftings of the mixed-variance behaviour functor), which may in fact turn out to be of independent interest as generalizations of relation liftings of functors [22, 27] to higher-order behaviours. For full proofs and additional details, see the Appendix. _Related Work_: Borthelle et al. [8] and Hirschowitz and Lafont [23] have recently developed a framework for congruence of applicative bisimilarity based on Howe's method. Their approach is conceptually quite different from ours: operational rules are given as endofunctors on a presheaf category of _transition systems_ over models of a signature endofunctor, and the initial algebra for the rule endofunctor represents the induced transition system for the given semantics. Dal Lago et al. [14] propose a generalization of Howe's method for call-by-value \(\lambda\)-calculi with algebraic effects, based on the theory of relators. Their notion of a _computational_ \(\lambda\)-calculus is parametrized over a signature \(\Sigma\) and a monad \(T\) on sets, representing the syntax and effects of the language. The operational semantics is given in big-step form. Bonchi et al. [6] employ lax bialgebras to establish up-to techniques for weak bisimulations in the context of (first-order) abstract GSOS.
Besides the differences in scope, the two approaches also diverge in the way they are based on relation liftings: Bonchi et al. lift endofunctors from sets to preorders and further to up-closed relations, while we lift bifunctors from an abstract category \(\mathbb{C}\) to relations over \(\mathbb{C}\), the up-closure being replaced with the abstract _good-for-simulations_ condition (Definition IV.5).

## II Preliminaries

### _Category Theory_

We assume familiarity with basic category theory. In the following we recall some relevant terminology and notation. _Products and coproducts_: Given objects \(X_{1},X_{2}\) in a category \(\mathbb{C}\), we write \(X_{1}\times X_{2}\) for the product and \(\langle f_{1},f_{2}\rangle\colon X\to X_{1}\times X_{2}\) for the pairing of morphisms \(f_{i}\colon X\to X_{i}\), \(i=1,2\). We let \(X_{1}+X_{2}\) denote the coproduct, \(\mathsf{inl}\colon X_{1}\to X_{1}+X_{2}\) and \(\mathsf{inr}\colon X_{2}\to X_{1}+X_{2}\) the injections, \([g_{1},g_{2}]\colon X_{1}+X_{2}\to X\) the copairing of morphisms \(g_{i}\colon X_{i}\to X\), \(i=1,2\), and \(\nabla=[\mathsf{id}_{X},\mathsf{id}_{X}]\colon X+X\to X\) the codiagonal. _Locally distributive categories_: A category \(\mathbb{C}\) is distributive if it has finite products and coproducts, and for each \(X\in\mathbb{C}\) the endofunctor \(X\times(-)\) on \(\mathbb{C}\) preserves finite coproducts. It is _locally distributive_ if for each \(X\in\mathbb{C}\) the slice category \(\mathbb{C}/X\) is distributive. Recall that \(\mathbb{C}/X\) has as objects all pairs \((Y,p_{Y})\) of an object \(Y\in\mathbb{C}\) and a morphism \(p_{Y}\colon Y\to X\), and a morphism from \((Y,p_{Y})\) to \((Z,p_{Z})\) is a morphism \(f\colon Y\to Z\) of \(\mathbb{C}\) such that \(p_{Y}=p_{Z}\cdot f\). The coslice category \(X/\mathbb{C}\) is defined dually. **Example II.1**.: Examples of locally distributive categories include the category \(\mathbf{Set}\) of sets and functions, the category \(\mathbf{Set}^{\mathbb{C}}\) of presheaves on a small category \(\mathbb{C}\) and natural transformations, and the categories of posets and monotone maps, nominal sets and equivariant maps, metric spaces and non-expansive maps. In fact, they are all _lextensive_ [11, Cor. 4.9]. _Algebras_: Given an endofunctor \(F\) on a category \(\mathbb{C}\), an \(F\)_-algebra_ is a pair \((A,a)\) which consists of an object \(A\) (the _carrier_ of the algebra) and a morphism \(a\colon FA\to A\) (its _structure_). A _morphism_ from \((A,a)\) to an \(F\)-algebra \((B,b)\) is a morphism \(h\colon A\to B\) of \(\mathbb{C}\) such that \(h\cdot a=b\cdot Fh\). Algebras for \(F\) and their morphisms form a category \(\mathbf{Alg}(F)\), and an _initial_ \(F\)-algebra is simply an initial object in that category. We denote the initial \(F\)-algebra by \(\mu F\) if it exists, and its structure by \(\iota\colon F(\mu F)\to\mu F\). If \(\mathbb{C}\) has binary products, initial algebras entail a useful definition principle known as _primitive recursion_: for every morphism \(a\colon F(\mu F\times A)\to A\) there exists a unique morphism \(\mathsf{pr}\,a\) making the square below commute.
\[\begin{CD}F(\mu F)@>{\iota}>{}>\mu F\\ @V{F\langle\mathsf{id},\,\mathsf{pr}\,a\rangle}V{}V@V{}V{\mathsf{pr}\,a}V\\ F(\mu F\times A)@>{a}>{}>A\end{CD}\] (II.1) More generally, a _free \(F\)-algebra_ on an object \(X\) of \(\mathbb{C}\) is an \(F\)-algebra \((F^{*}X,\iota_{X})\) together with a morphism \(\eta_{X}\colon X\to F^{*}X\) of \(\mathbb{C}\) such that for every algebra \((A,a)\) and every morphism \(h\colon X\to A\) in \(\mathbb{C}\), there exists a unique \(F\)-algebra morphism \(h^{*}\colon(F^{*}X,\iota_{X})\to(A,a)\) such that \(h=h^{*}\cdot\eta_{X}\). If free algebras exist on every object, their formation induces a monad \(F^{*}\colon\mathbb{C}\to\mathbb{C}\), the _free monad_ generated by \(F\). (Conversely, in complete and well-powered categories, existence of a free monad implies existence of free algebras [30, Thm. 4.2.15].) For every \(F\)-algebra \((A,a)\), we obtain an Eilenberg-Moore algebra \(\widehat{a}\colon F^{*}A\to A\) as the free extension of \(\mathsf{id}_{A}\colon A\to A\). The most familiar examples of functor algebras are algebras for a signature. An _algebraic signature_ consists of a set \(\Sigma\) of operation symbols together with a map \(\mathsf{ar}\colon\Sigma\to\mathbb{N}\) associating to every \(\mathsf{f}\in\Sigma\) its _arity_ \(\mathsf{ar}(\mathsf{f})\). Symbols of arity \(0\) are called _constants_. Every signature \(\Sigma\) induces the polynomial set functor \(\coprod_{\mathsf{f}\in\Sigma}(-)^{\mathsf{ar}(\mathsf{f})}\), which we denote by the same letter \(\Sigma\). An algebra for the functor \(\Sigma\) is precisely an algebra for the signature \(\Sigma\), viz. a set \(A\) equipped with an operation \(\mathsf{f}^{A}\colon A^{n}\to A\) for every \(n\)-ary operation symbol \(\mathsf{f}\in\Sigma\). Morphisms of \(\Sigma\)-algebras are maps respecting the algebraic structure. Given a set \(X\) of variables, the free algebra \(\Sigma^{*}X\) is the \(\Sigma\)-algebra of \(\Sigma\)-terms with variables from \(X\). In particular, the free algebra on the empty set is the initial algebra \(\mu\Sigma\); it is formed by all _closed terms_ of the signature. For every \(\Sigma\)-algebra \((A,a)\), the induced Eilenberg-Moore algebra \(\widehat{a}\colon\Sigma^{*}A\to A\) is given by the map evaluating terms over \(A\) in the algebra. A relation \(R\subseteq A\times A\) on a \(\Sigma\)-algebra \(A\) is called a _congruence_ if for every \(n\)-ary \(\mathsf{f}\in\Sigma\) and elements with \(R(a_{i},a_{i}^{\prime})\), \(i=1,\ldots,n\), one has \(R(\mathsf{f}^{A}(a_{1},\ldots,a_{n}),\mathsf{f}^{A}(a_{1}^{\prime},\ldots,a_{n}^{\prime}))\). Note that we do not require \(R\) to be an equivalence relation. _Coalgebras_: Dual to the notion of algebra, a _coalgebra_ for an endofunctor \(F\) on \(\mathbb{C}\) is a pair \((C,c)\) of an object \(C\) (the _carrier_) and a morphism \(c\colon C\to FC\) (its _structure_).

### _Higher-Order Abstract GSOS_

We review the core principles behind _higher-order abstract GSOS_ [20], a categorical framework modelling the operational semantics of higher-order languages. It is parametric in

1. a category \(\mathbb{C}\) with finite products and coproducts;
2. an object \(V\in\mathbb{C}\) of _variables_;
3. two functors \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) and \(B\colon\mathbb{C}^{\mathsf{op}}\times\mathbb{C}\to\mathbb{C}\), where \(\Sigma=V+\Sigma^{\prime}\) for some functor \(\Sigma^{\prime}\colon\mathbb{C}\to\mathbb{C}\), and free \(\Sigma\)-algebras exist on every object (hence \(\Sigma\) generates a free monad \(\Sigma^{\star}\)).
Informally, the functors \(\Sigma\) and \(B\) represent the _syntax_ and the _behaviour_ of a higher-order language. The initial algebra \(\mu\Sigma\) is the object of programs, and the requirement that \(\Sigma=V+\Sigma^{\prime}\) asserts that variables are programs. An object of \(V/\mathbb{C}\), the coslice category of \(V\)_-pointed objects_, is thought of as a set \(X\) of programs with an embedding \(p_{X}\colon V\to X\) of the variables. **Example II.2**.: A simple instantiation is given by \(V=\emptyset\), a polynomial functor \(\Sigma\) and the bifunctor \(B_{0}(X,Y)=Y+Y^{X}\) on \(\mathbf{Set}\). A map \(\gamma_{0}\colon\mu\Sigma\to\mu\Sigma+\mu\Sigma^{\mu\Sigma}\), that is, a \(B_{0}(\mu\Sigma,-)\)-coalgebra with carrier \(\mu\Sigma\), can be thought of as a description of the operational behaviour of deterministic higher-order programs: every program \(p\in\mu\Sigma\) either performs a silent computation step reducing \(p\) to \(\gamma_{0}(p)\in\mu\Sigma\), or it acts as a function \(\gamma_{0}(p)\in\mu\Sigma^{\mu\Sigma}\) mapping programs to programs. In order to actually construct coalgebras \(\gamma_{0}\) as in the above example, we use the following concept: **Definition II.3**.: A _(\(V\)-pointed) higher-order GSOS law_ of \(\Sigma\) over \(B\) is a family of morphisms \[\varrho_{(X,p_{X}),Y}\colon\Sigma(X\times B(X,Y))\to B(X,\Sigma^{\star}(X+Y))\] (II.2) dinatural in \((X,p_{X})\in V/\mathbb{C}\) and natural in \(Y\in\mathbb{C}\). **Notation II.4**.: (1) We usually write \(\varrho_{X,Y}\) for \(\varrho_{(X,p_{X}),Y}\), as the point \(p_{X}\colon V\to X\) will always be clear from the context. (2) For every \(\Sigma\)-algebra \((A,a)\), we regard \(A\) as \(V\)-pointed by \[p_{A}=\big{(}V\stackrel{{\text{\tiny inl}}}{{\longrightarrow}}V+\Sigma^{\prime}A=\Sigma A\stackrel{{ a}}{{\longrightarrow}}A\big{)}.\] **Definition II.5**.: The _operational model_ of a higher-order GSOS law \(\varrho\) in (II.2) is the \(B(\mu\Sigma,-)\)-coalgebra \[\gamma\colon\mu\Sigma\to B(\mu\Sigma,\mu\Sigma)\] obtained via primitive recursion as the unique morphism making the diagram (II.3) in Figure 1 commute. Here we regard the initial algebra \(\mu\Sigma\) as \(V\)-pointed as in Notation II.4, and \(\widehat{\iota}\) is the \(\Sigma^{\star}\)-algebra corresponding to \(\iota\colon\Sigma(\mu\Sigma)\to\mu\Sigma\). **Remark II.6**.: The commutative diagram (II.3) states that \((\mu\Sigma,\iota,\gamma)\) forms a _bialgebra_ for the higher-order GSOS law \(\varrho\); in fact, it is the initial such bialgebra [20, Prop. 4.20]. An important difference to first-order abstract GSOS [37] is that a final bialgebra usually does not exist even for simple deterministic behaviour functors [20, Ex. 4.21]. This in part explains why higher-order compositionality results are technically involved and first-order proof methods fail. Let us illustrate the above concepts in the setting of Example II.2. A higher-order GSOS law of a polynomial functor \(\Sigma\) over \(B_{0}(X,Y)=Y+Y^{X}\) is a family of maps \[\varrho_{X,Y}^{0}\colon\Sigma(X\times(Y+Y^{X}))\to\Sigma^{\star}(X+Y)+(\Sigma^{\star}(X+Y))^{X}\] dinatural in \(X\in\mathbf{Set}\) and natural in \(Y\in\mathbf{Set}\). Intuitively, on input \(\mathsf{f}((p_{1},b_{1}),\dots,(p_{n},b_{n}))\) for \(\mathsf{f}\in\Sigma\), the map \(\varrho_{X,Y}^{0}\) specifies the behaviour of the program \(\mathsf{f}(p_{1},\dots,p_{n})\) in terms of the behaviours \(b_{1},\dots,b_{n}\in Y+Y^{X}\) of its subprograms \(p_{1},\dots,p_{n}\).
(Di)naturality of \(\varrho^{0}\) ensures that the maps \(\varrho_{X,Y}^{0}\) are parametrically polymorphic, that is, they do not look into the structure of their arguments. This can be made formal via the following syntactic representation of higher-order GSOS laws. Fix metavariables \(x\), \(x_{i}\), \(y_{i}\) and \(y_{i}^{z}\) for \(i\in\mathbb{N}\) and \(z\in\{x,x_{1},x_{2},x_{3},\dots\}\). An \(\mathcal{HO}\)_rule_ is an expression of the form (II.4) or (II.5), where \(\mathsf{f}\in\Sigma\), \(n=\mathsf{ar}(\mathsf{f})\), \(W\subseteq\{1,\dots,n\}\), \(\overline{W}=\{1,\dots,n\}\smallsetminus W\), and \(t\) is a \(\Sigma\)-term in the variables appearing in the premise, and additionally in \(x\) for (II.5). \[\frac{(x_{j}\to y_{j})_{j\in W}\quad(x_{k}\stackrel{{ z}}{{ \longrightarrow}}y_{k}^{z})_{k\in\overline{W},\,z\in\{x_{1},\dots,x_{n}\}}}{ \mathsf{f}(x_{1},\dots,x_{n})\to t}\] (II.4) \[\frac{(x_{j}\to y_{j})_{j\in W}\quad(x_{k}\stackrel{{ z}}{{ \longrightarrow}}y_{k}^{z})_{k\in\overline{W},\,z\in\{x_{1},\dots,x_{n},x\}}}{ \mathsf{f}(x_{1},\dots,x_{n})\stackrel{{ x}}{{ \longrightarrow}}t}\] (II.5) An \(\mathcal{HO}\)_specification_ is a complete set \(\mathcal{R}\) of \(\mathcal{HO}\) rules, that is, for each \(n\)-ary operation symbol \(\mathsf{f}\in\Sigma\) and \(W\subseteq\{1,\dots,n\}\) there is exactly one rule of the form (II.4) or (II.5) in \(\mathcal{R}\). **Example II.7**.: The _extended \(\operatorname{SKI}\) calculus_, previously termed _unary \(\operatorname{SKI}\) calculus_[20], is a combinatory logic expressively equivalent to Curry's \(\operatorname{SKI}\)_calculus_[13], hence to the untyped \(\lambda\)-calculus. Its signature is given by \(\Sigma=\{S/0,K/0,I/0,S^{\prime}/1,K^{\prime}/1,S^{\prime\prime}/2,\circ/2\}\) with arities as indicated. Informally, the operator - \(\circ\) - corresponds to function application (we write \(s\,t\) for \(s\circ t\)), and the constants \(S,K,I\) represent the functions \((s,t,u)\mapsto(s\,u)\,(t\,u)\), \((s,t)\mapsto s\), and \(s\mapsto s\). The operators \(S^{\prime},S^{\prime\prime},K^{\prime}\) serve auxiliary purposes. The operational semantics is given by an \(\mathcal{HO}\) specification [20, Fig. 1]. For instance, the rules for application are \[\frac{x_{1}\to y_{1}}{x_{1}\,x_{2}\to y_{1}\,x_{2}}\qquad\frac{x_{1}\stackrel{{ x_{2}}}{{\longrightarrow}}x_{1}^{x_{2}}}{x_{1}\,x_{2}\to x_{1}^{x_{2}}}\] (II.6) **Remark II.8**.: By convention, a rule with incomplete premises represents the set of \(\mathcal{HO}\) rules obtained by adding missing premises in every feasible way. For example, in the first rule of (II.6) we can add \(x_{2}\to y_{2}\), or \(x_{2}\stackrel{{ x_{1}}}{{\longrightarrow}}y_{2}^{x_{1}}\) and \(x_{2}\stackrel{{ x_{2}}}{{\longrightarrow}}y_{2}^{x_{2}}\). **Proposition II.9**[20].: _Higher-order GSOS laws of \(\Sigma\) over \(B_{0}\) correspond bijectively to \(\mathcal{HO}\) specifications._ The bijection is based on the Yoneda lemma, and maps an \(\mathcal{HO}\) specification \(\mathcal{R}\) to the higher-order GSOS law \(\varrho^{0}\) defined as follows. Given \(X,Y\in\mathbf{Set}\) and \[w=\mathsf{f}((p_{1},b_{1}),\dots,(p_{n},b_{n}))\in\Sigma(X\times B_{0}(X,Y)),\] consider the unique rule in \(\mathcal{R}\) matching \(\mathsf{f}\) and \(W=\{j\in\{1,\ldots,n\}:b_{j}\in Y\}\). 
If the rule is of the form (II.4), then \[\varrho_{X,Y}^{0}(w)\in\Sigma^{\star}(X+Y)\subseteq B_{0}(X,\Sigma^{\star}(X+Y))\] is the term obtained by taking the term \(t\) in (II.4) and applying the following substitutions for \(i\in\{1,\ldots,n\}\), \(j\in W\), \(k\in\overline{W}\): \[x_{i}\mapsto p_{i},\qquad y_{j}\mapsto b_{j},\qquad y_{k}^{x_{i}}\mapsto b_{k}(p_{i}).\] If the rule is of the form (II.5), then \[\varrho_{X,Y}^{0}(w)\in(\Sigma^{\star}(X+Y))^{X}\subseteq B_{0}(X,\Sigma^{\star}(X+Y))\] is the map \(e\mapsto t_{e}\), where \(t_{e}\) is obtained by taking the term \(t\) in (II.5) and applying the above substitutions along with \[x\mapsto e\qquad\text{and}\qquad y_{k}^{x}\mapsto b_{k}(e)\quad(k\in\overline{W}).\] Instantiating Definition II.5, the operational model of a higher-order GSOS law \(\varrho^{0}\) is the \(B_{0}(\mu\Sigma,-)\)-coalgebra \[\gamma_{0}\colon\mu\Sigma\to\mu\Sigma+\mu\Sigma^{\mu\Sigma}\] (II.7) that runs programs in \(\mu\Sigma\) according to the rules in the corresponding \(\mathcal{HO}\) specification.

Fig. 1: Operational model of a higher-order GSOS law
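To make the operational model (II.7) concrete, here is a small executable sketch of \(\gamma_{0}\) for the extended \(\operatorname{SKI}\) calculus of Example II.7. The application rules are exactly (II.6); the rules for the constants and the auxiliary operators \(S^{\prime},S^{\prime\prime},K^{\prime}\) are reconstructed from the informal description in Example II.7 (the authoritative rule set is [20, Fig. 1]).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class T:
    """Terms over the signature {S/0, K/0, I/0, S'/1, K'/1, S''/2, app/2}."""
    op: str
    args: tuple = ()

def step(p):
    """One step of the operational model: ('red', p1) for a silent
    reduction p -> p1, or ('fun', f) when p acts as a function."""
    if p.op == 'I':
        return ('fun', lambda e: e)                       # I e -> e
    if p.op == 'K':
        return ('fun', lambda e: T("K'", (e,)))           # curried K
    if p.op == "K'":
        return ('fun', lambda e: p.args[0])               # K'(s) e -> s
    if p.op == 'S':
        return ('fun', lambda e: T("S'", (e,)))           # curried S
    if p.op == "S'":
        return ('fun', lambda e: T("S''", (p.args[0], e)))
    if p.op == "S''":
        s, t = p.args                                     # S''(s,t) e -> (s e)(t e)
        return ('fun', lambda e: T('app', (T('app', (s, e)), T('app', (t, e)))))
    # application: exactly the two rules (II.6)
    l, r = p.args
    kind, b = step(l)
    if kind == 'red':
        return ('red', T('app', (b, r)))   # x1 -> y1  =>  x1 x2 -> y1 x2
    return ('red', b(r))                   # x1 --x2--> x1^{x2}  =>  x1 x2 -> x1^{x2}
```

For instance, `step(T('app', (T('I'), T('K'))))` returns `('red', T('K'))`, i.e. \(I\,K\to K\); iterating `step` until a `('fun', ...)` result realizes the weak transitions \(p\Downarrow\overline{p}\) used in the next section.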
## III Compositionality for \(\mathcal{HO}\) Specifications

Our eventual goal is to reason about weak simulations and their congruence properties on operational models of higher-order GSOS laws. The required categorical machinery is developed from Section IV onwards. In the present section we motivate the categorical abstractions by again investigating the special case of \(\mathcal{HO}\) specifications, that is, we continue to work in the setting of Example II.2.

**Notation III.1**.: (1) In addition to the polynomial functor \(\Sigma\) and \(B_{0}(X,Y)=Y+Y^{X}\), we will also consider the bifunctor \[B(X,Y)=\mathcal{P}B_{0}(X,Y)=\mathcal{P}(Y+Y^{X})\colon\mathbf{Set}^{\text{op}}\times\mathbf{Set}\to\mathbf{Set},\] where \(\mathcal{P}\colon\mathbf{Set}\to\mathbf{Set}\) is the powerset functor. (2) Given \(X\in\mathbf{Set}\), a coalgebra \(c\colon C\to C+C^{X}\) for the functor \(B_{0}(X,-)\) and \(p\in C\), we write \[\begin{array}{r@{\qquad}l@{\qquad}l}p\to\overline{p}\qquad\text{if}\qquad&c(p)\in C\text{ and }\overline{p}=c(p),\\ p\not\to\qquad\text{if}\qquad&c(p)\not\in C\text{ (that is, }c(p)\in C^{X}),\\ p\xrightarrow{x}p_{x}\qquad\text{if}\qquad&c(p)\in C^{X}\text{, }x\in X\text{, and }p_{x}=c(p)(x).\end{array}\] In the first case, we say that \(p\) _reduces_. Moreover, we put \[\begin{array}{r@{\qquad}l@{\qquad}l}p\Rightarrow\overline{p}\quad\text{if}\quad&\exists k\geq 0.\,\exists p_{0},\ldots,p_{k}\colon p=p_{0}\to\cdots\to p_{k}=\overline{p},\\ p\Downarrow\overline{p}\quad\text{if}\quad&p\Rightarrow\overline{p}\text{ and }\overline{p}\not\to.\end{array}\] (3) The _weak transition system_ of \(c\colon C\to C+C^{X}\) is the coalgebra \(\widetilde{c}\colon C\to\mathcal{P}(C+C^{X})\) for the functor \(B(X,-)\) where \[\widetilde{c}(p)=\{\,\overline{p}\in C:p\Rightarrow\overline{p}\,\}\,\cup\,\{c(\overline{p}):p\Downarrow\overline{p}\,\}.\]

**Definition III.2**.: A _weak simulation_ on a \(B_{0}(X,-)\)-coalgebra \(c\colon C\to C+C^{X}\) is a relation \(R\subseteq C\times C\) such that for every \(R(p,q)\) and \(\overline{p}\in C\), the following conditions hold: \[\begin{array}{r@{\qquad}l@{\qquad}l}p\Rightarrow\overline{p}\qquad&\Longrightarrow\qquad&\exists\overline{q}\in C.\,q\Rightarrow\overline{q}\,\wedge\,R(\overline{p},\overline{q});\\ p\Downarrow\overline{p}\qquad&\Longrightarrow\qquad&\exists\overline{q}\in C.\,q\Downarrow\overline{q}\,\wedge\,\forall x\in X.\,R(\overline{p}_{x},\overline{q}_{x}).\end{array}\] _Weak similarity_ is the greatest weak simulation on \((C,c)\), viz. the union of all weak simulations, denoted \(\lesssim_{(C,c)}\) or just \(\lesssim\). Note that dropping the first condition leads to the same weak similarity relation. We include it to match the abstract view on weak simulations in Remark III.3(2) below.

**Remark III.3**.: We make some observations that will be key to our categorical generalization of weak simulations in Sections VI and VIII. (1) From a conceptual perspective, weak simulations can be understood in terms of _relation liftings_ of the involved functors. Let \(\mathbf{Rel}\) denote the category whose objects are pairs \((X,R)\) of a set \(X\) and a binary relation \(R\subseteq X\times X\), and whose morphisms \(h\colon(X,R)\to(Y,S)\) are maps \(h\colon X\to Y\) such that \((h\times h)[R]\subseteq S\). The functors \(\mathcal{P}\), \(B_{0}\) and \(B=\mathcal{P}\cdot B_{0}\) lift to functors \(\overline{\mathcal{P}}\), \(\overline{B}_{0}\) and \(\overline{B}\) on \(\mathbf{Rel}\) commuting with the forgetful functor \(|-|\colon(X,R)\mapsto X\). (a) The lifting \(\overline{\mathcal{P}}\) of \(\mathcal{P}\) is given by \[\overline{\mathcal{P}}(X,R)=(\mathcal{P}X,S_{R}),\qquad\overline{\mathcal{P}}h=\mathcal{P}h,\] where \(S_{R}\) is the (one-sided) _Egli-Milner relation_ on \(\mathcal{P}X\): \[S_{R}(U,V)\qquad\Longleftrightarrow\quad\forall u\in U.\,\exists v\in V.\,R(u,v).\] (b) The lifting \(\overline{B}_{0}\) of \(B_{0}\) is given by \[\overline{B}_{0}((X,R),(Y,S))=(B_{0}(X,Y),E_{R,S}^{0}),\qquad\overline{B}_{0}(h,k)=B_{0}(h,k),\] where \(E_{R,S}^{0}(u,v)\) holds for \(u,v\in B_{0}(X,Y)=Y+Y^{X}\) whenever either of the following conditions is satisfied:

* \(u,v\in Y\,\wedge\,S(u,v)\);
* \(u,v\in Y^{X}\,\wedge\,\forall x,x^{\prime}\in X.\,(R(x,x^{\prime})\implies S(u(x),v(x^{\prime})))\).

We note that \(((X,R),(Y,S))\mapsto(Y^{X},E_{R,S}^{0}\cap Y^{X}\times Y^{X})\) is the internal hom-functor of the cartesian closed category \(\mathbf{Rel}\). Finally, we put \(\overline{B}=\overline{\mathcal{P}}\cdot\overline{B}_{0}\).
More explicitly, \[\overline{B}((X,R),(Y,S))=(B(X,Y),E_{R,S}),\quad\overline{B}(h,k)=B(h,k),\] where \(E_{R,S}\) is the relation on \(\mathcal{P}(Y+Y^{X})\) defined as follows: \[E_{R,S}(U,V)\quad\iff\quad\forall u\in U.\,\exists v\in V.\,E_{R,S}^{0}(u,v).\] (2) A relation \(R\subseteq C\times C\) forms a weak simulation on the coalgebra \(c\colon C\to C+C^{X}\) iff there exists a map \(\widetilde{c}_{R}\) making the diagram (III.1) commute, where outl and outr are the left and right projections and \(\Delta_{X}\subseteq X\times X\) is the identity relation. (III.1) (3) For a relation \(R\subseteq C\times C\) to be a weak simulation, it suffices to restrict the premises of the two weak simulation conditions to strong transitions: for every \(R(p,q)\) and \(\overline{p}\in C\), \[\begin{array}{ccc}p\to\overline{p}&\implies&\exists\overline{q}\in C.\,q\Rightarrow\overline{q}\,\wedge\,R(\overline{p},\overline{q});\\ p\not\to&\implies&\exists\overline{q}\in C.\,q\Downarrow\overline{q}\,\wedge\,\forall x\in X.\,R(p_{x},\overline{q}_{x}).\end{array}\] This amounts to the existence of a map \(\widetilde{c}_{R}\) making the diagram (III.2) commute. Here we regard \(c\colon C\to C+C^{X}\) as a map \(c\colon C\to\mathcal{P}(C+C^{X})\) by postcomposing with \(b\mapsto\{b\}\). (III.2)

The compositionality theorem for \(\mathcal{HO}\) specifications [20, Prop. 3.2] asserts that strong similarity on the operational model (II.7) is a congruence with respect to the operations from the signature \(\Sigma\). (It is worth noting here that strong similarity coincides with strong bisimilarity because reductions are deterministic.) However, for weak similarity that result fails:

**Example III.4**.: Consider the signature \(\Sigma=\{c,d,u\}\) where \(c,d\) are constants and \(u\) is unary, along with the \(\mathcal{HO}\) specification given by the following four rules. [The four rules are displayed as images in the original and are not reproduced here.]

Then the rules of \(\mathcal{R}\) are sound for weak transitions iff the diagram (III.6) commutes laxly. Here \(\widetilde{\gamma}\) is the weak transition system of \(\gamma\), and the partial order \(\preceq\) on a hom-set \(\mathbf{Set}(X,\mathcal{P}Y)\) is given by \(f\preceq g\) iff \(f(x)\subseteq g(x)\) for all \(x\in X\). In the terminology introduced later (Definition VIII.4), \(\iota\) and \(\widetilde{\gamma}\) thus form a _lax bialgebra_ for the higher-order GSOS law \(\varrho\).

**Theorem III.8**.: _For every \(\mathcal{HO}\) specification whose rules are sound for weak transitions, the weak similarity relation \(\lesssim\) on the canonical model \(\gamma\colon\mu\Sigma\to\mu\Sigma+\mu\Sigma^{\mu\Sigma}\) is a congruence._

The proof uses Howe's method [25], a standard technique for establishing higher-order congruence results.
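Before turning to the proof, the notions of Notation III.1 and Definition III.2 can be made computational for finite systems. The sketch below (illustrative; encoding the coalgebra as a Python function returning `('red', p1)` or `('fun', {x: p_x})` is an assumption of this sketch) computes the greatest weak simulation by fixpoint refinement. Only the second condition of Definition III.2 is checked, which suffices by the note following that definition.

```python
def weak_reducts(c, p):
    """All states p_bar with p => p_bar; reduction is deterministic, so we
    follow 'red' steps, guarding against cycles (divergent programs)."""
    seen = []
    while p not in seen:
        seen.append(p)
        kind, b = c(p)
        if kind == 'fun':
            break
        p = b
    return seen

def weak_similarity(C, X, c):
    """Greatest weak simulation on a finite B0(X,-)-coalgebra c over C."""
    R = {(p, q) for p in C for q in C}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = True
            for pb in weak_reducts(c, p):
                kind, f = c(pb)
                if kind != 'fun':
                    continue
                # p terminates in a function: q must do so too, pointwise in R
                ok = any(c(qb)[0] == 'fun'
                         and all((f[x], c(qb)[1][x]) in R for x in X)
                         for qb in weak_reducts(c, q))
            if not ok:
                R.discard((p, q))
                changed = True
    return R  # pairs (p, q) with p weakly below q
```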
**Notation III.9**.: The _Howe closure_ of a relation \(R\subseteq\mu\Sigma\times\mu\Sigma\) is the relation \[\widehat{R}=\bigcup_{m\in\mathbb{N}}\widehat{R}_{m}\] on \(\mu\Sigma\) where \(\widehat{R}_{0}\subseteq\widehat{R}_{1}\subseteq\widehat{R}_{2}\subseteq\cdots\) are defined inductively: \(\widehat{R}_{0}=R\) and for every \(m\in\mathbb{N}\) and \(p,r\in\mu\Sigma\), one has \(\widehat{R}_{m+1}(p,r)\) whenever \(\widehat{R}_{m}(p,r)\) or \[\exists\mathsf{f}\in\Sigma,\vec{p},\vec{q}\in(\mu\Sigma)^{\mathsf{ar}(\mathsf{f})}.\ p=\mathsf{f}(\vec{p})\,\wedge\,\widehat{R}_{m}(\vec{p},\vec{q})\,\wedge\,R(\mathsf{f}(\vec{q}),r).\] Here \(\widehat{R}_{m}(\vec{p},\vec{q})\) means \(\widehat{R}_{m}(p_{i},q_{i})\) for \(i=1,\ldots,\mathsf{ar}(\mathsf{f})\).

**Remark III.10**.: (1) If \(R\) is reflexive, then the Howe closure \(\widehat{R}\) is a congruence: put \(r=\mathsf{f}(\vec{q})\) in the definition of \(\widehat{R}_{m+1}\). (2) If \(R\) is transitive, then \(\widehat{R}\) satisfies a weak transitivity property: \(\widehat{R}(p,r)\) and \(R(r,r^{\prime})\) implies \(\widehat{R}(p,r^{\prime})\) for all \(p,r,r^{\prime}\in\mu\Sigma\). This follows by induction on the least \(m\) such that \(\widehat{R}_{m}(p,r)\). (3) Thus, if \(R\) is both reflexive and transitive (in particular, if it is some weak similarity relation), then \(\widehat{R}\) is the least weakly transitive congruence containing \(R\).

Proof of Theorem III.8.: Form the Howe closure \(\widehat{\lesssim}\) of \(\lesssim\). Since \(\widehat{\lesssim}\) is a congruence, it suffices to prove \(\widehat{\lesssim}=\lesssim\). The inclusion \(\lesssim\,\subseteq\,\widehat{\lesssim}\) is clear. For the inclusion \(\widehat{\lesssim}\,\subseteq\,\lesssim\) we show that \(\widehat{\lesssim}\) is a weak simulation; then the inclusion holds because \(\lesssim\) is the greatest weak simulation. By Remark III.3(3), we need to establish the following for every \(p\mathrel{\widehat{\lesssim}}r\) and \(\overline{p}\in\mu\Sigma\): \[p\to\overline{p}\implies\exists\overline{r}\in\mu\Sigma.\,r\Rightarrow\overline{r}\,\wedge\,\overline{p}\mathrel{\widehat{\lesssim}}\overline{r};\] (III.7) \[p\not\to\implies\exists\overline{r}\in\mu\Sigma.\,r\Downarrow\overline{r}\,\wedge\,\forall e\in\mu\Sigma.\,p_{e}\mathrel{\widehat{\lesssim}}\overline{r}_{e}.\] (III.8) In lieu of (III.8) we will actually prove a stronger statement: \[p\not\to\implies\exists\overline{r}\in\mu\Sigma.\,r\Downarrow\overline{r}\,\wedge\,\forall d\mathrel{\widehat{\lesssim}}e.\,p_{d}\mathrel{\widehat{\lesssim}}\overline{r}_{e}.\] (III.9) The proof is by induction on the least \(m\) such that \(p\mathrel{\widehat{\lesssim}_{m}}r\).

**Induction base (\(m=0\)).** Suppose that \(p\mathrel{\widehat{\lesssim}_{0}}r\), that is, \(p\lesssim r\).

Proof of (III.7).: If \(p\to\overline{p}\), since \(\lesssim\) is a weak simulation, there exists \(\overline{r}\in\mu\Sigma\) such that \(r\Rightarrow\overline{r}\) and \(\overline{p}\lesssim\overline{r}\), hence also \(\overline{p}\mathrel{\widehat{\lesssim}}\overline{r}\).

Proof of (III.9).: If \(p\not\to\), since \(\lesssim\) is a weak simulation, there exists \(\overline{r}\in\mu\Sigma\) such that \(r\Downarrow\overline{r}\) and \(p_{e}\lesssim\overline{r}_{e}\) for \(e\in\mu\Sigma\). By definition of the \(\mathcal{HO}\) format, there exists a term \(t_{p}(x)\) in a single variable \(x\) such that \(p_{e}=t_{p}(e)\) for \(e\in\mu\Sigma\). Since \(\widehat{\lesssim}\) is a congruence, it follows that, for \(d\mathrel{\widehat{\lesssim}}e\), \[p_{d}=t_{p}(d)\mathrel{\widehat{\lesssim}}t_{p}(e)=p_{e}\lesssim\overline{r}_{e}.\] Thus \(p_{d}\mathrel{\widehat{\lesssim}}\overline{r}_{e}\) by weak transitivity of \(\widehat{\lesssim}\).
**Induction step (\(m\to m+1\)).** Suppose that \(p\mathrel{\widehat{\lesssim}_{m+1}}r\). We only verify condition (III.9); the argument for (III.7) is analogous. Thus suppose that \(p\not\to\). If \(p\mathrel{\widehat{\lesssim}_{m}}r\), we are done by induction. Otherwise, by definition of \(\widehat{\lesssim}_{m+1}\), there exists an \(n\)-ary operation symbol \(\mathsf{f}\in\Sigma\) and \(\vec{p},\vec{q}\in(\mu\Sigma)^{n}\) such that \[p=\mathsf{f}(\vec{p}),\qquad\vec{p}\mathrel{\widehat{\lesssim}_{m}}\vec{q},\qquad q:=\mathsf{f}(\vec{q})\lesssim r.\] To avoid bulky notation, we consider the representative case of a binary operator \(\mathsf{f}\) where \(p_{1}\) reduces (say \(p_{1}\to\overline{p}_{1}\)) and \(p_{2}\not\to\). Then we know by induction that

* \(\exists\overline{q}_{1}\in\mu\Sigma.\,q_{1}\Rightarrow\overline{q}_{1}\,\wedge\,\overline{p}_{1}\mathrel{\widehat{\lesssim}}\overline{q}_{1}\);
* \(\exists\overline{q}_{2}\in\mu\Sigma.\,q_{2}\Downarrow\overline{q}_{2}\,\wedge\,\forall d\mathrel{\widehat{\lesssim}}e.\,(p_{2})_{d}\mathrel{\widehat{\lesssim}}(\overline{q}_{2})_{e}\).

Since \(p=\mathsf{f}(p_{1},p_{2})\not\to\), the rule applying to \(p\) has the form \[\frac{x_{1}\to y_{1}\qquad x_{2}\xrightarrow{x_{1}}y_{2}^{x_{1}}\qquad x_{2}\xrightarrow{x_{2}}y_{2}^{x_{2}}\qquad x_{2}\xrightarrow{x}y_{2}^{x}}{\mathsf{f}(x_{1},x_{2})\xrightarrow{x}t(x_{1},x_{2},x,y_{1},y_{2}^{x_{1}},y_{2}^{x_{2}},y_{2}^{x})}.\] Thus, for every \(d\in\mu\Sigma\), \[p\xrightarrow{d}p_{d}=t(p_{1},p_{2},d,\overline{p}_{1},(p_{2})_{p_{1}},(p_{2})_{p_{2}},(p_{2})_{d}).\] The above rule is sound for weak transitions, and we have \(q_{1}\Rightarrow\overline{q}_{1}\) and \(q_{2}\Downarrow\overline{q}_{2}\), so there exists \(\overline{q}\in\mu\Sigma\) such that \[q\Downarrow\overline{q}\quad\text{and}\quad\overline{q}\xrightarrow{e}\overline{q}_{e}=t(q_{1},q_{2},e,\overline{q}_{1},(\overline{q}_{2})_{q_{1}},(\overline{q}_{2})_{q_{2}},(\overline{q}_{2})_{e})\] for all \(e\in\mu\Sigma\). Thus for \(d\mathrel{\widehat{\lesssim}}e\) we have \(p_{d}\mathrel{\widehat{\lesssim}}\overline{q}_{e}\) because \(\widehat{\lesssim}\) is a congruence and the terms substituted in \(t\) for the variables are related by \(\widehat{\lesssim}\). Moreover, since \(\lesssim\) is a weak simulation and \(q\lesssim r\), there exists \(\overline{r}\in\mu\Sigma\) such that \(r\Downarrow\overline{r}\) and \(\overline{q}_{e}\lesssim\overline{r}_{e}\) for all \(e\in\mu\Sigma\). It follows that \(p_{d}\mathrel{\widehat{\lesssim}}\overline{r}_{e}\) for \(d\mathrel{\widehat{\lesssim}}e\) because \(p_{d}\mathrel{\widehat{\lesssim}}\overline{q}_{e}\lesssim\overline{r}_{e}\) and the relation \(\widehat{\lesssim}\) is weakly transitive.

**Remark III.11**.: (1) The strengthening (III.9) of the induction hypothesis is required, for otherwise the proof gets stuck: the argument in the induction step showing \(p_{d}\mathrel{\widehat{\lesssim}}\overline{q}_{e}\) for \(d\mathrel{\widehat{\lesssim}}e\) (or even \(p_{e}\mathrel{\widehat{\lesssim}}\overline{q}_{e}\) for \(e\in\mu\Sigma\)) relies on relations such as \((p_{2})_{p_{1}}\mathrel{\widehat{\lesssim}}(\overline{q}_{2})_{q_{1}}\), which only hold by (III.9), not by (III.8). (2) The strengthened induction hypothesis (III.7) + (III.9) can be expressed via the relation lifting of the bifunctor \(\overline{B}\), see Remark III.3(1): It amounts to the existence of a map \(\delta\) making the diagram below commute. (3) If \(\widehat{\lesssim}\) were replaced by the plain congruence closure of \(\lesssim\), the induction base fails, as the argument requires weak transitivity of \(\widehat{\lesssim}\). If \(\widehat{\lesssim}\) is taken to be the least transitive congruence containing \(\lesssim\), it is no longer clear how to construct \(\widehat{\lesssim}\) as a union of inductively defined relations \(\widehat{\lesssim}_{m}\) in a way that makes the induction step work.
It thus appears that Howe's method is the simplest and most natural approach to the present result. We conclude this section by identifying a natural class of \(\mathcal{HO}\) specifications, the _cool \(\mathcal{HO}\) specifications_, whose rules are sound for weak transitions. It resembles first-order formats such as _cool GSOS_ [4, 38] for labelled transition systems, and _cool stateful SOS_ [19] for stateful computations. **Definition III.12**.: (1) An \(n\)-ary operator \(\mathsf{f}\in\Sigma\) is _passive_ if it is specified by a premise-free rule (cf. Remark II.8) \[\overline{\mathsf{f}(x_{1},\ldots,x_{n})\to t}\quad\text{or}\quad\overline{\mathsf{f}(x_{1},\ldots,x_{n})\xrightarrow{x}t}\] (III.10) where \(t\) is a term in the variables \(x_{1},\ldots,x_{n}\) or \(x_{1},\ldots,x_{n},x\), resp. Thus the behaviour of \(\mathsf{f}\) does not depend on the behaviour of its subterms. An _active_ operator is one which is not passive. (2) An \(\mathcal{HO}\) specification is _cool_ if for every active \(n\)-ary operator \(\mathsf{f}\) there exists \(j\in\{1,\ldots,n\}\) (called the _receiving position of \(\mathsf{f}\)_) such that all rules for \(\mathsf{f}\) are of the form \[\frac{x_{j}\to y_{j}}{\mathsf{f}(x_{1},\ldots,x_{j},\ldots,x_{n})\to\mathsf{f}(x_{1},\ldots,y_{j},\ldots,x_{n})}\qquad\frac{(x_{j}\xrightarrow{z}y_{j}^{z})_{z\in\{x_{1},\ldots,x_{n}\}}}{\mathsf{f}(x_{1},\ldots,x_{n})\to t}\quad\text{or}\quad\frac{(x_{j}\xrightarrow{z}y_{j}^{z})_{z\in\{x_{1},\ldots,x_{n},x\}}}{\mathsf{f}(x_{1},\ldots,x_{n})\xrightarrow{x}t}\] (III.11) where \(t\) is a term in the variables \(x_{i}\) and \(y_{j}^{x_{i}}\) (\(i\in\{1,\ldots,n\}\smallsetminus\{j\}\)), and moreover in \(x\) and \(y_{j}^{x}\) for the third rule in (III.11). Coolness thus asserts that for active \(\mathsf{f}\), a program \(p=\mathsf{f}(p_{1},\ldots,p_{n})\) must run its \(j\)-th subprogram \(p_{j}\) (for some fixed \(j\) depending only on \(\mathsf{f}\)) until it no longer reduces, correctly propagate all reduction steps of \(p_{j}\) to \(p\), and continue the computation as a program \(t\) that no longer refers to \(p_{j}\). **Proposition III.13**.: _For cool \(\mathcal{HO}\) specifications, all rules are sound for weak transitions._ Thus, we obtain as an instance of Theorem III.8: **Corollary III.14**.: _For cool \(\mathcal{HO}\) specifications, the weak similarity relation on the operational model is a congruence._ This generalizes corresponding congruence results for cool first-order specifications [4, 38, 19]. **Example III.15**.: The extended \(\operatorname{SKI}\) calculus (Example II.7) has application \(-\circ-\) as its only active operator, whose rules (II.6) are cool. Therefore weak similarity on the operational model is a congruence. This means that, for instance, \(p\lesssim q\) implies \(p\,r\lesssim q\,r\) and \(r\,p\lesssim r\,q\) for all \(r\in\mu\Sigma\). The aim of the following sections is to generalize the congruence result of Theorem III.8 to the level of abstract higher-order GSOS laws. The technical key lies in the construction of relation liftings of bifunctors (Section V), along with a suitable categorification of Howe's method (Section VII). ## IV Graphs, Relations, and Preorders For our categorical account of weak similarity we will need to restrict to base categories where operations on relations, such as union or composition, are well-behaved and interact with each other in a way familiar from the category of sets.
Therefore, we work under the following global assumptions: **Assumptions IV.1**.: From now on, fix a category \(\mathbb{C}\) such that 1. \(\mathbb{C}\) is complete, cocomplete, and well-powered; 2. \(\mathbb{C}\) is locally distributive; 3. for every commutative diagram (IV.1), if the outside and the inner square are pullbacks and \(e_{0},e_{1}\) are strong epimorphisms, then \(e\) is a strong epimorphism. (IV.1) All categories of Example II.1 satisfy these assumptions. Since \(\mathbb{C}\) is complete and well-powered, the subobjects of a fixed object form a complete lattice, and every morphism has a (strong epi, mono)-factorization [7, Prop. 4.4.3]. All our results easily generalize to arbitrary proper factorization systems. ### _Graphs and Relations_ We review some terminology for the categorical version of graphs (more precisely, directed multigraphs) and relations. #### IV-A1 Graphs in a category A _graph_ in \(\mathbb{C}\) is a quadruple \((X,R,\mathsf{outl}_{R},\mathsf{outr}_{R})\) given by two objects \(X,R\in\mathbb{C}\) and a parallel pair of morphisms \(\mathsf{outl}_{R},\mathsf{outr}_{R}\colon R\to X\). A graph is usually denoted by its pair \((X,R)\) of objects. A _morphism_ from \((X,R)\) to a graph \((Y,S)\) is a pair \(h=(h_{0},h_{1})\) of \(\mathbb{C}\)-morphisms making the diagram below commute: (IV.2) We let \(\mathbf{Gra}(\mathbb{C})\) denote the category of graphs in \(\mathbb{C}\) and their morphisms. For every \(X\in\mathbb{C}\) we write \(\mathbf{Gra}_{X}(\mathbb{C})\hookrightarrow\mathbf{Gra}(\mathbb{C})\) for the non-full subcategory consisting of all graphs of the form \((X,R)\) and graph morphisms \(h\) such that \(h_{0}=\mathsf{id}_{X}\). #### IV-A2 Relations in a category A graph \((X,R)\in\mathbf{Gra}(\mathbb{C})\) is a _relation_ if \(\mathsf{outl}_{R}\) and \(\mathsf{outr}_{R}\) are jointly monic, or equivalently if the morphism \(\langle\mathsf{outl}_{R},\mathsf{outr}_{R}\rangle\colon R\to X\times X\) is monic. We let \(\mathbf{Rel}(\mathbb{C})\hookrightarrow\mathbf{Gra}(\mathbb{C})\) and \(\mathbf{Rel}_{X}(\mathbb{C})\hookrightarrow\mathbf{Gra}_{X}(\mathbb{C})\) denote the full subcategories given by relations; note that \(\mathbf{Rel}_{X}(\mathbb{C})\) is thin, i.e. an ordered set, and a complete lattice when isomorphic relations are identified. Both subcategories are reflective: The reflection of a graph \((X,R)\) is given by \((\mathsf{id}_{X},e_{R})\colon(X,R)\twoheadrightarrow(X,R^{\dagger})\) where \(e_{R}\) and \(R^{\dagger}\) are obtained via the (strong epi, mono)-factorization of \(\langle\mathsf{outl}_{R},\,\mathsf{outr}_{R}\rangle\): (IV.3) The various categories are connected by the functors where \((-)^{\dagger}\) denotes the reflector and \(|-|\) is the projection functor given by \((X,R)\mapsto X\) and \(h\mapsto h_{0}\). We regard \(\mathbb{C}\) as a full subcategory of \(\mathbf{Rel}(\mathbb{C})\) by identifying \(X\in\mathbb{C}\) with the _identity relation_\((X,X,\mathsf{id}_{X},\mathsf{id}_{X})\in\mathbf{Rel}(\mathbb{C})\), which we simply denote by \((X,X)\). #### IV-A3 Limits and colimits The categories \(\mathbf{Gra}(\mathbb{C})\), \(\mathbf{Gra}_{X}(\mathbb{C})\), \(\mathbf{Rel}(\mathbb{C})\), \(\mathbf{Rel}_{X}(\mathbb{C})\) are complete and cocomplete. Coproducts in \(\mathbf{Gra}(\mathbb{C})\) and \(\mathbf{Gra}_{X}(\mathbb{C})\), denoted by \((X,R)+(Y,S)\) and \((X,R)+_{X}(X,S)\), are formed using \(\mathbb{C}\)-coproducts.
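In \(\mathbb{C}=\mathbf{Set}\), the reflection (IV.3) is simply the image factorization of \(\langle\mathsf{outl}_{R},\mathsf{outr}_{R}\rangle\). A small illustrative sketch follows; the encoding of Set-graphs as an edge set with two endpoint dictionaries is our own, not notation from the paper.

```python
def reflect(edges, outl, outr):
    """Reflection (X, R) ->> (X, R†) of a Set-graph into a relation (IV.3):
    the image of <outl, outr>, i.e. the set of realized endpoint pairs."""
    return {(outl[e], outr[e]) for e in edges}

# Two parallel edges between 0 and 1 collapse to a single related pair:
edges = {'e1', 'e2'}
outl = {'e1': 0, 'e2': 0}
outr = {'e1': 1, 'e2': 1}
assert reflect(edges, outl, outr) == {(0, 1)}
```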
Coproducts in \(\mathbf{Rel}(\mathbb{C})\) are given by \((X,R)\vee(Y,S)=((X,R)+(Y,S))^{\dagger}\) and in \(\mathbf{Rel}_{X}(\mathbb{C})\) by \((X,R)\vee_{X}(X,S)=((X,R)+_{X}(X,S))^{\dagger}\). Products \((X,R)\times(Y,S)\) in both \(\mathbf{Gra}(\mathbb{C})\) and \(\mathbf{Rel}(\mathbb{C})\) are formed in \(\mathbb{C}\). The product \((X,R)\times_{X}(X,S)\) in \(\mathbf{Gra}_{X}(\mathbb{C})\) and \(\mathbf{Rel}_{X}(\mathbb{C})\) is the pullback of \(\langle\mathsf{outl}_{R},\mathsf{outr}_{R}\rangle\) and \(\langle\mathsf{outl}_{S},\mathsf{outr}_{S}\rangle\). #### IV-A4 Composition of graphs and relations The _composite_\((X,R)\);\((X,R^{\prime})\) of two graphs \((X,R)\) and \((X,R^{\prime})\) is the graph \((X,R\,;R^{\prime})\) defined via the following pullback: (IV.4) The _composite_ of two relations \((X,R),(X,R^{\prime})\), given by \[(X,R)\bullet(X,R^{\prime})\ =\ ((X,R)\,;(X,R^{\prime}))^{\dagger},\] defines a bifunctor \((-)\bullet(-)\) on \(\mathbf{Rel}_{X}(\mathbb{C})\) (that is, composition is a monotone map on the ordered set of relations). Using Assumptions IV.1(2),(3), relation composition can be shown to distribute over coproducts. This is the key property of relations needed for our account of Howe's method in Section VII. #### IV-A5 Reflexive and transitive relations Given graphs \((X,R)\) and \((X,R^{\prime})\) in \(\mathbf{Gra}_{X}(\mathbb{C})\), we put \((X,R)\leq(X,R^{\prime})\) if there exists a \(\mathbf{Gra}_{X}(\mathbb{C})\)-morphism from \((X,R)\) to \((X,R^{\prime})\). For relations, \((X,R)\leq(X,R^{\prime})\leq(X,R)\) implies \((X,R)\cong(X,R^{\prime})\). A relation \((X,R)\) is _reflexive_ if \((X,X)\leq(X,R)\), and _transitive_ if \((X,R)\bullet(X,R)\leq(X,R)\). #### IV-A6 Reindexing Every morphism \(f\colon X\to Y\) in \(\mathbb{C}\) induces a functor \(f_{\star}\colon\mathbf{Gra}_{X}(\mathbb{C})\to\mathbf{Gra}_{Y}(\mathbb{C})\) given by \[(X,R,\mathsf{outl}_{R},\mathsf{outr}_{R})\ \mapsto\ (Y,R,f\cdot\mathsf{outl}_{R},f\cdot\mathsf{outr}_{R}).\] Readers familiar with the language of fibrations may note that \(|-|\colon\mathbf{Gra}(\mathbb{C})\to\mathbb{C}\) is a bifibration with fibres \(\mathbf{Gra}_{X}(\mathbb{C})\), and \(f_{\star}\) is the reindexing functor induced by opcartesian lifts. ### _Preorders_ We extend some of the above terminology to graphs over preordered objects. Recall that a _preorder_ on a set \(X\) is a reflexive and transitive relation \(\preceq\subseteq X\times X\). Replacing elements \(1\to X\) with "generalized elements" \(Y\to X\), one obtains a categorical notion of preorder. #### IV-B1 Preorders in a category A _preordered object_ in \(\mathbb{C}\) is a pair \((X,\preceq)\) of an object \(X\in\mathbb{C}\) and a family \(\preceq=(\preceq_{Y})_{Y\in\mathbb{C}}\) where \(\preceq_{Y}\) is a preorder on the hom-set \(\mathbb{C}(Y,X)\) satisfying \[f\preceq_{Y}g\quad\implies\quad f\cdot h\preceq_{Z}g\cdot h\quad\text{for all }h\colon Z\to Y.\] We usually drop subscripts and write \(\preceq\) for \(\preceq_{Y}\), \(\preceq_{Z}\), etc. **Example IV.2**.: (1) Every preordered set \((X,\preceq)\) in the usual order-theoretic sense can be regarded as a preordered object in \(\mathbf{Set}\) by taking the pointwise preorder on \(\mathbf{Set}(Y,X)\): \[f\preceq g\quad\iff\quad\forall y\in Y.\,f(y)\preceq g(y).\] (2) On every \(X\in\mathbb{C}\), one has the _discrete_ preordered object \((X,=)\), where \(=\) is the equality preorder.
#### IV-B2 Preordered functors A _preordered functor_ is a functor \(F\colon\mathbb{D}\to\mathbb{C}\) equipped with a preorder \((FD,\preceq)\) for all \(D\in\mathbb{D}\). **Example IV.3**.: The powerset functor \(\mathcal{P}\colon\mathbf{Set}\to\mathbf{Set}\) is preordered by taking the inclusion preorder \(\subseteq\) on \(\mathcal{P}X\). #### IV-B3 Right-lax morphisms Given a preordered object \((Y,\preceq)\), a _right-lax morphism_ from a graph \((X,R)\) to a graph \((Y,S)\) is a pair \(h=(h_{0},h_{1})\) of \(\mathbb{C}\)-morphisms such that \[\mathsf{outl}_{S}\cdot h_{1}=h_{0}\cdot\mathsf{outl}_{R}\qquad\text{and}\qquad\mathsf{outr}_{S}\cdot h_{1}\preceq h_{0}\cdot\mathsf{outr}_{R}.\] For preordered \((X,\preceq)\) we put \((X,R)\preceq(X,S)\) if there exists a right-lax morphism \(h\colon(X,R)\to(X,S)\) where \(h_{0}=\mathsf{id}_{X}\). **Example IV.4**.: Given relations \((X,R)\), \((X,S)\) on a preordered set \((X,\preceq)\), regarded as a preordered object as in Example IV.2(1), we have \((X,R)\preceq(X,S)\) iff for \(x,y\in X\), \[R(x,y)\quad\implies\quad\exists z\in X.\,S(x,z)\wedge z\preceq y.\] Graphs over a fixed preordered object \((X,\preceq)\) and right-lax morphisms \(h\) satisfying \(h_{0}=\mathsf{id}_{X}\) form a category, but in contrast to the unordered case, the full subcategory of relations is usually not reflective. This turns out to be the main technical challenge for our preorder-based approach to simulations. The key concept to overcome this issue is as follows: **Definition IV.5**.: Let \((X,\preceq)\) be a preordered object. A relation \((X,S)\) is _good for simulations_ if, for all \((X,R)\in\mathbf{Gra}_{X}(\mathbb{C})\), \[(X,R)\preceq(X,S)\quad\implies\quad(X,R)\leq(X,S).\] Note that \((X,R)\) ranges over graphs, not just relations, and that the implication "\(\Longleftarrow\)" also holds trivially. The good-for-simulations condition thus ensures that right-lax graph morphisms into \((X,S)\) can be turned into strict ones. **Example IV.6**.: For every relation \((X,R)\), the relation \((\mathcal{P}X,S_{R})\) is good for simulations, where \(\mathcal{P}X\) is equipped with the inclusion preorder and \(S_{R}\) is the Egli-Milner relation (Remark III.3(1)). This follows from the observation that \(S_{R}\) is up-closed: \(S_{R}(A,B)\) and \(B\subseteq B^{\prime}\) implies \(S_{R}(A,B^{\prime})\). ## V Lifting (Bi-)Functors and Higher-Order GSOS Laws As pointed out in Remark III.11, the compositionality proof for \(\mathcal{HO}\) specifications implicitly relies on the fact that the behaviour bifunctor \(B(X,Y)=\mathcal{P}(Y+Y^{X})\) admits a lifting to the category of relations. We next study liftings of endofunctors, mixed-variance bifunctors, and higher-order GSOS laws on \(\mathbb{C}\) to the categories \(\mathbf{Gra}(\mathbb{C})\) and \(\mathbf{Rel}(\mathbb{C})\) of graphs and relations. We start with the case of endofunctors, which is straightforward and well-known: **Definition V.1**.: Let \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) be an endofunctor. 1. A _graph lifting_ of \(\Sigma\) is a functor \(\overline{\Sigma}\colon\mathbf{Gra}(\mathbb{C})\to\mathbf{Gra}(\mathbb{C})\) making the diagram on the left below commute. 2. A _relation lifting_ of \(\Sigma\) is a functor \(\overline{\Sigma}\colon\mathbf{Rel}(\mathbb{C})\to\mathbf{Rel}(\mathbb{C})\) making the diagram on the right below commute.
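Returning to Examples IV.4 and IV.6 above: both can be made concrete in \(\mathbf{Set}\). The sketch below is our own code; function names are illustrative assumptions.

```python
def right_lax_leq(R, S, preceq):
    """(X,R) ⪯ (X,S) as in Example IV.4: every R-pair (x, y) is matched
    by some S-pair (x, z) with z ⪯ y."""
    return all(any(a == x and (z, y) in preceq for (a, z) in S)
               for (x, y) in R)

def egli_milner(R, A, B):
    """One-sided Egli-Milner relation S_R(A, B): every a in A has some
    b in B with R(a, b).  Enlarging B preserves this, i.e. S_R is
    up-closed, which is the observation behind Example IV.6."""
    return all(any((a, b) in R for b in B) for a in A)

# Example IV.4 on the chain 1 <= 2 <= 3: R(1,3) is matched by S(1,2), 2 <= 3.
leq = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a <= b}
assert right_lax_leq({(1, 3)}, {(1, 2)}, leq)
# Up-closedness of the Egli-Milner relation:
assert egli_milner({(1, 2)}, {1}, {2}) and egli_milner({(1, 2)}, {1}, {2, 3})
```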
**Construction V.2**.: Every functor \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) admits a _canonical graph lifting_\(\overline{\Sigma}_{\mathbf{Gra}}\colon\mathbf{Gra}(\mathbb{C})\to\mathbf{Gra}(\mathbb{C})\) and a _canonical relation lifting_\(\overline{\Sigma}_{\mathbf{Rel}}\colon\mathbf{Rel}(\mathbb{C})\to\mathbf{Rel}(\mathbb{C})\) defined as follows: 1. The functor \(\overline{\Sigma}_{\mathbf{Gra}}\) is given on objects and morphisms by \[(X,R)\mapsto(\Sigma X,\Sigma R,\Sigma\,\mathsf{outl}_{R},\Sigma\,\mathsf{outr}_{R}),\quad h\mapsto(\Sigma h_{0},\Sigma h_{1}).\] 2. The functor \(\overline{\Sigma}_{\mathbf{Rel}}\) is the composite \[\mathbf{Rel}(\mathbb{C})\hookrightarrow\mathbf{Gra}(\mathbb{C})\xrightarrow{\overline{\Sigma}_{\mathbf{Gra}}}\mathbf{Gra}(\mathbb{C})\xrightarrow{(-)^{\dagger}}\mathbf{Rel}(\mathbb{C}).\] (This is similar to the usual Barr extension [2], except that relations are treated as objects rather than as morphisms.) **Example V.3**.: For a polynomial functor \(\Sigma\) on \(\mathbf{Set}\), the canonical relation lifting is the restriction of the canonical graph lifting to \(\mathbf{Rel}\). Thus \(\overline{\Sigma}_{\mathbf{Rel}}(X,R)=(\Sigma X,\Sigma R)\) where \(\Sigma R(\mathsf{f}(x_{1},\dots,x_{n}),\mathsf{f}(x_{1}^{\prime},\dots,x_{n}^{\prime}))\) iff \(R(x_{i},x_{i}^{\prime})\) for all \(i\). **Proposition V.4**.: _Suppose that \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) preserves strong epimorphisms and generates a free monad \(\Sigma^{\star}\). Then \(\overline{\Sigma}_{\mathbf{Gra}}\) and \(\overline{\Sigma}_{\mathbf{Rel}}\) generate free monads satisfying_ \[(\overline{\Sigma}_{\mathbf{Gra}})^{\star}=(\overline{\Sigma^{\star}})_{\mathbf{Gra}}\qquad\text{and}\qquad(\overline{\Sigma}_{\mathbf{Rel}})^{\star}=(\overline{\Sigma^{\star}})_{\mathbf{Rel}}.\] Next we turn to liftings of mixed-variance bifunctors. **Definition V.5**.: A _relation lifting_ of a functor \(B\colon\mathbb{C}^{\mathsf{op}}\times\mathbb{C}\to\mathbb{C}\) is a functor \(\overline{B}\) such that the diagram below commutes. Every bifunctor admits a canonical relation lifting, generalizing the lifting \(\overline{B}_{0}\) of Remark III.3(1). Since the construction is more involved than for endofunctors, and our compositionality result works with any lifting, we refer to the Appendix (Section C). Finally, we lift higher-order GSOS laws: **Definition V.6**.: Let \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) and \(B\colon\mathbb{C}^{\mathsf{op}}\times\mathbb{C}\to\mathbb{C}\) be functors with relation liftings \(\overline{\Sigma}\) and \(\overline{B}\), respectively, where \(\Sigma\) preserves strong epimorphisms and \(\overline{\Sigma}=\overline{\Sigma}_{\mathbf{Rel}}\) is the canonical lifting. Given a \(V\)-pointed higher-order GSOS law \[\varrho_{X,Y}\colon\Sigma(X\times B(X,Y))\to B(X,\Sigma^{\star}(X+Y))\] of \(\Sigma\) over \(B\), a _relation lifting_ of \(\varrho\) is a \((V,V)\)-pointed higher-order GSOS law \[\overline{\varrho}_{(X,R),(Y,S)}\colon\overline{\Sigma}\big{(}(X,R)\times\overline{B}((X,R),(Y,S))\big{)}\to\overline{B}\big{(}(X,R),\overline{\Sigma}^{\star}((X,R)\vee(Y,S))\big{)}\] of \(\overline{\Sigma}\) over \(\overline{B}\) such that \[(\overline{\varrho}_{(X,R),(Y,S)})_{0}=\varrho_{X,Y}\] for \(((X,R),p_{(X,R)})\in(V,V)/\mathbf{Rel}(\mathbb{C})\) and \((Y,S)\in\mathbf{Rel}(\mathbb{C})\). Here we regard \(X\) as \(V\)-pointed by \(p_{X}=(p_{(X,R)})_{0}\colon V\to X\).
**Remark V.7**.: (1) Recall that for a \(V\)-pointed higher-order GSOS law \(\varrho\) we assume the functor \(\Sigma\) to be of the form \(V+\Sigma^{\prime}\). This implies \(\overline{\Sigma}=(V,V)\vee\overline{\Sigma^{\prime}}\), as required. (2) The product \(\times\) and coproduct \(\vee\) in \(\mathbf{Rel}(\mathbb{C})\) are formed as explained in Section IV-A3, and we have \(\overline{\Sigma}^{\star}=\overline{\Sigma^{\star}}\) by Proposition V.4. It follows that \((\overline{\varrho}_{(X,R),(Y,S)})_{0}\) is a \(\mathbb{C}\)-morphism of type \(\Sigma(X\times B(X,Y))\to B(X,\Sigma^{\star}(X+Y))\). (3) Since \(\mathbf{Rel}(\mathbb{C})\)-morphisms are uniquely determined by their \((-)_{0}\)-component, a higher-order GSOS law \(\varrho\) admits at most one lifting \(\overline{\varrho}\). The requirement that the morphisms \(\overline{\varrho}_{(X,R),(Y,S)}\) form a higher-order GSOS law of \(\overline{\Sigma}\) over \(\overline{B}\) is thus vacuous: the (di-)naturality of \(\overline{\varrho}\) is implied by that of \(\varrho\). For the canonical relation lifting \(\overline{B}\) of \(B\), every higher-order GSOS law admits a relation lifting (see Appendix, Section D). ## VI Weak Simulations We next introduce the notion of weak simulation featuring in our abstract congruence result. **Notation VI.1**.: Fix a functor \(F\colon\mathbb{C}\to\mathbb{C}\) and a relation lifting \(\widetilde{F}\). We denote the relation \(\widetilde{F}(X,R)\) by \((FX,E_{R})\). We recall the notion of _(lifting) bisimulation_[27] for coalgebras. We use the term _simulation_ instead, as this is what the concept amounts to in our applications, due to the use of asymmetric liftings such as the one-sided Egli-Milner lifting. An alternative approach to simulations uses lax liftings [26]. **Definition VI.2**.: Let \((C,c)\) be an \(F\)-coalgebra. A relation \((C,R)\) is a _simulation_ on \((C,c)\) if \(c_{\star}(C,R)\leq\widetilde{F}(C,c)\), that is, there exists a morphism \(c_{R}\) making (VI.1) commute. (VI.1) If it exists, the greatest simulation with respect to the partial order \(\leq\) on \(\mathbf{Rel}_{C}(\mathbb{C})\) is called the _similarity relation_ on \((C,c)\). **Lemma VI.3**.: _Suppose that the functor \(\widetilde{F}\) satisfies the following conditions for all \(X\in\mathbb{C}\) and \((X,R),(X,S)\in\mathbf{Rel}_{X}(\mathbb{C})\):_ 1. _the relation_ \(\widetilde{F}(X,X)\) _is reflexive;_ 2. \(\widetilde{F}(X,R)\bullet\widetilde{F}(X,S)\leq\widetilde{F}((X,R)\bullet(X,S))\)_._ _Then for every \(F\)-coalgebra \((C,c)\) the similarity relation exists, and it is reflexive and transitive._ The conditions in the above lemma are similar to ones occurring in work on _lax extensions_, e.g. by Marti and Venema [31]. In the setting of \(\mathcal{HO}\) specifications, where \(F=\mathcal{P}B_{0}(X,-)\), a weak simulation on a \(B_{0}(X,-)\)-coalgebra \((C,c)\) as per Definition III.2 is precisely a simulation on the weak transition system \((C,\widetilde{c})\). As observed in Remark III.3(3), in order to check the weak simulation conditions for \(R(p,q)\), it suffices to show that strong transitions from \(p\) are simulated by weak transitions from \(q\). This turns out to be the only property of weak simulations needed for our categorical congruence proof, and so we take it as our abstract definition: **Definition VI.4**.: A _weakening_ of a coalgebra \(c\colon C\to FC\) is a coalgebra \(\widetilde{c}\colon C\to FC\) such that for every relation \((C,R)\) the following two statements are equivalent: 1.
\((C,R)\) is a simulation on \((C,\widetilde{c})\); 2. there exists a morphism \(\widetilde{c}_{R}\) making (VI.2) commute. (VI.2) A _weak simulation_ on \((C,c)\), with respect to a given weakening \((C,\widetilde{c})\), is a relation \((C,R)\) satisfying the two equivalent properties above. If it exists, the greatest weak simulation is called _weak similarity_, denoted \(\lesssim_{(C,c)}\) or just \(\lesssim\). **Remark VI.5**.: (1) For the trivial weakening \(\widetilde{c}=c\), weak simulations are just (strong) simulations. (2) The above definition is agnostic about how the weakening \(\widetilde{c}\) is actually constructed from \(c\). The construction of weak coalgebras has been studied in specific order-enriched settings [9, 10, 21]. Our present abstract approach is flexible in the choice of \(\widetilde{c}\). For example, the weak transition system \(\widetilde{c}\) of Notation III.1 is an instance of the framework of [9], but the choice \(\widetilde{c}=c\) as in part (1) above is not. ## VII Howe's Method, Categorically Next, we set up our version of Howe's method, which regards Howe closures abstractly as initial algebras. In a restricted setting of presheaf categories, this idea already appears in the work of Borthelle et al. [8] and Hirschowitz and Lafont [23]. **Notation VII.1**.: Let \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) be an endofunctor with its canonical relation lifting \(\overline{\Sigma}=\overline{\Sigma}_{\mathbf{Rel}}\) (Construction V.2). For every \((X,R)\in\mathbf{Rel}_{X}(\mathbb{C})\) and every \(\Sigma\)-algebra \((X,\xi)\) with monic structure \(\xi\colon\Sigma X\to X\), let \[\overline{\Sigma}_{R,\xi}\colon\mathbf{Rel}_{X}(\mathbb{C})\to\mathbf{Rel}_{ X}(\mathbb{C})\] be the endofunctor (= monotone map) given by \[(X,S)\quad\mapsto\quad(X,R)\lor_{X}\big{(}(\xi_{\star}\overline{\Sigma}(X,S)) \bullet(X,R)\big{)},\] see Section IV-A for the notation. (The assumption that \(\xi\) is monic ensures that \(\xi_{\star}\) maps relations to relations.) Since \(\mathbf{Rel}_{X}(\mathbb{C})\) is equivalent to a complete lattice, the initial algebra of \(\overline{\Sigma}_{R,\xi}\) exists, and we denote it by \[(X,R)\lor_{X}\big{(}(\xi_{\star}\overline{\Sigma}(X,\widehat{R}))\bullet(X,R) \big{)}\overset{\alpha_{R,\xi}}{\longrightarrow}(X,\widehat{R}).\] (VII.1) The relation \((X,\widehat{R})\) is called the _Howe closure_ of \((X,R)\) with respect to the algebra \((X,\xi)\). **Remark VII.2**.: We will instantiate the above to the initial algebra \((X,\xi)=(\mu\Sigma,\iota)\); note that the structure \(\iota\) is an isomorphism. For \(\mathbb{C}=\mathbf{Set}\) and \(\Sigma\) a polynomial functor, the above definition of \(\widehat{R}\) is equivalent to the one of Notation III.9. Lemma VII.4 below establishes some basic properties of Howe closures, generalizing Remark III.10. For that purpose let us recall the notion of _congruence_ for functor algebras: **Definition VII.3**.: A _congruence_ on a \(\Sigma\)-algebra \((A,a)\) is a relation \((A,R)\) such that \(a_{\star}\overline{\Sigma}_{\mathbf{Rel}}(A,R)\leq(A,R)\). For a polynomial functor \(\Sigma\) on \(\mathbf{Set}\), this matches the definition of congruence from universal algebra (cf. Section II-A). **Lemma VII.4**.: _Let \(\Sigma\colon\mathbb{C}\to\mathbb{C}\) be an endofunctor. 
Then for each \((X,R)\in\mathbf{Rel}(\mathbb{C})\) and each monic algebra \(\xi\colon\Sigma X\to X\),_ (1) _if \((X,R)\) is reflexive, then \((X,\widehat{R})\) is reflexive and a congruence on \((X,\xi)\);_ (2) _if \((X,R)\) is transitive, then \((X,\widehat{R})\) is weakly transitive, that is, \((X,\widehat{R})\bullet(X,R)\leq(X,\widehat{R})\)._ ## VIII Compositionality We proceed to establish our main theorem, which asserts that under natural conditions, weak similarity is a congruence on the operational model of a higher-order GSOS law. **Assumptions VIII.1**.: In this section we fix the following data: (1) a functor \(\Sigma=V+\Sigma^{\prime}\colon\mathbb{C}\to\mathbb{C}\) that preserves strong epimorphisms and generates a free monad \(\Sigma^{\star}\); (2) a preordered bifunctor \(B\colon\mathbb{C}^{\mathsf{op}}\times\mathbb{C}\to\mathbb{C}\) with a relation lifting \(\overline{B}\) that is _good for simulations_ (Definition VIII.2); (3) a \(V\)-pointed higher-order GSOS law \(\varrho\) of \(\Sigma\) over \(B\) that admits a (necessarily unique) relation lifting \(\overline{\varrho}\). It remains to explain Assumption VIII.1(2): **Definition VIII.2**.: A relation lifting \(\overline{B}\) of \(B\) is _good for simulations_ if, for \(X,Y\in\mathbb{C}\) and \((X,R),(Y,S),(Y,S^{\prime})\in\mathbf{Rel}(\mathbb{C})\), (G1) the relation \(\overline{B}((X,R),(Y,S))\) is good for simulations; (G2) the relation \(\overline{B}((X,X),(Y,Y))\) is reflexive; (G3) \(\overline{B}((X,R),(Y,S))\bullet\overline{B}((X,X),(Y,S^{\prime}))\leq\overline{B}((X,R),(Y,S)\bullet(Y,S^{\prime}))\). **Remark VIII.3**.: To motivate Assumption VIII.1(3), let us revisit the setting of \(\mathcal{HO}\) specifications, where \(B(X,Y)=\mathcal{P}(Y+Y^{X})\) and \(\varrho\) is given by (III.5). Existence of a relation lifting of \(\varrho\) means that for \((X,R)\), \((Y,S)\in\mathbf{Rel}\) the map \(\varrho_{X,Y}\) is a \(\mathbf{Rel}\)-morphism w.r.t. the lifting \(\overline{B}\) of Remark III.3(1). In the proof of Theorem III.8 (induction base for (III.9)) a syntactic argument shows that \(p_{d}\mathrel{\widehat{\lesssim}}p_{e}\) for \(d\mathrel{\widehat{\lesssim}}e\), which amounts to the above property for \((X,R)=(Y,S)=(\mu\Sigma,\widehat{\lesssim})\). Hence, the purpose of Assumption VIII.1(3) is to replace the syntactic part of that proof by an abstract condition on the law \(\varrho\). In the following we study weak simulations on the operational model \((\mu\Sigma,\gamma)\) of the higher-order GSOS law \(\varrho\), understood w.r.t. the relation lifting \(\overline{B}((\mu\Sigma,\mu\Sigma),-)\colon\mathbf{Rel}(\mathbb{C})\to\mathbf{Rel}(\mathbb{C})\) of the endofunctor \(B(\mu\Sigma,-)\colon\mathbb{C}\to\mathbb{C}\) and a given weakening \((\mu\Sigma,\widetilde{\gamma})\) of \((\mu\Sigma,\gamma)\). By (G2) and (G3) the lifted endofunctor satisfies the conditions of Lemma VI.3, hence the weak similarity relation on \((\mu\Sigma,\gamma)\) exists. The core ingredient for our congruence theorem is a higher-order variation of _lax models_ for monotone GSOS laws [6]: **Definition VIII.4**.: A _lax \(\varrho\)-bialgebra_\((X,a,c)\) is given by an object \(X\in\mathbb{C}\) and morphisms \(a\colon\Sigma X\to X\) and \(c\colon X\to B(X,X)\) such that the diagram below commutes laxly. Note that \(X\) is \(V\)-pointed; the point \(p_{X}\colon V\to X\) is induced by the algebra \(a\colon\Sigma X\to X\) (Notation II.4). This generalizes the notion of \(\varrho\)_-bialgebra_[20] which requires strict commutativity of the above diagram.
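Returning briefly to Definitions VI.2 and VI.4: for finitely branching transition systems in \(\mathbf{Set}\), i.e. \(\mathcal{P}\)-coalgebras with the one-sided Egli-Milner lifting, weak similarity can be computed by a simple fixpoint iteration. The sketch below is ours, not the paper's algorithm; the encoding of systems as successor-set dictionaries is an illustrative assumption.

```python
def greatest_weak_simulation(states, step, weak_step):
    """Weak similarity (Definition VI.4) for finitely branching transition
    systems: start from the full relation and delete pairs (p, q) until
    every strong step of p is matched by a weak step of q inside the
    relation (cf. Remark III.3(3))."""
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            if not all(any((p1, q1) in R for q1 in weak_step[q])
                       for p1 in step[p]):
                R.discard((p, q))
                changed = True
    return R

# With the trivial weakening weak_step = step (Remark VI.5(1)) this computes
# strong similarity: 'a' loops while 'b' is stuck, so 'b' cannot simulate 'a',
# whereas 'a' simulates 'b' vacuously.
states = {'a', 'b'}
step = {'a': {'a'}, 'b': set()}
sim = greatest_weak_simulation(states, step, weak_step=step)
assert ('a', 'b') not in sim and ('b', 'a') in sim
```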
Our congruence theorem rests on the assumption that \((\mu\Sigma,\iota,\widetilde{\gamma})\) is a lax \(\varrho\)-bialgebra. As indicated in Remark III.7, this expresses in abstract terms that the operational rules encoded by the higher-order GSOS law \(\varrho\) are sound for weak transitions in the operational model. In the setting of \(\mathcal{HO}\) specifications, we proved that this entails the congruence property for weak similarity (Theorem III.8). The next proposition is key to our categorical generalization of that result. **Proposition VIII.5**.: _Suppose that \(\widetilde{\gamma}\) is a weakening of the operational model \((\mu\Sigma,\gamma)\) such that \((\mu\Sigma,\iota,\widetilde{\gamma})\) is a lax \(\varrho\)-bialgebra. Then for every reflexive and transitive weak simulation \((\mu\Sigma,R)\) on \((\mu\Sigma,\gamma)\), the Howe closure \((\mu\Sigma,\widehat{R})\) w.r.t. \((\mu\Sigma,\iota)\) is a weak simulation._ In the proof below we denote the relation \(\overline{B}((X,S),(Y,T))\) by \((B(X,Y),E_{S,T})\) and its projections by \(\mathsf{outl}_{S,T},\mathsf{outr}_{S,T}\). Proof sketch.: Form the relation \((\mu\Sigma,P)\) via a pullback. The crucial step is to show existence of \(\mathbf{Rel}_{\mu\Sigma}(\mathbb{C})\)-morphisms \[\beta^{0}\colon(\mu\Sigma,R)\to(\mu\Sigma,P),\] \[\beta^{1}\colon\iota_{\star}\overline{\Sigma}((\mu\Sigma,\widehat{R})\times_{\mu\Sigma}(\mu\Sigma,P))\bullet(\mu\Sigma,R)\to(\mu\Sigma,P),\] where \(\times_{\mu\Sigma}\) is the product in \(\mathbf{Rel}_{\mu\Sigma}(\mathbb{C})\) (Section IV-A3). Their construction imitates the arguments for the induction base and induction step, respectively, in the proof of Theorem III.8. Once this is achieved, we can conclude the proof as follows. By copairing \(\beta^{0}\) and \(\beta^{1}\) we obtain the \(\mathbf{Rel}_{\mu\Sigma}(\mathbb{C})\)-morphism \[\beta=[\beta^{0},\beta^{1}]\colon\overline{\Sigma}_{R,\iota}\big{(}(\mu\Sigma,\widehat{R})\times_{\mu\Sigma}(\mu\Sigma,P)\big{)}\to(\mu\Sigma,P)\] (cf. Notation VII.1) and thus primitive recursion (II.1) yields the \(\mathbf{Rel}_{\mu\Sigma}(\mathbb{C})\)-morphism \(\mathsf{pr}\,\beta\colon\mu\overline{\Sigma}_{R,\iota}=(\mu\Sigma,\widehat{R})\to(\mu\Sigma,P)\). Choose a morphism \(r\colon(\mu\Sigma,\mu\Sigma)\to(\mu\Sigma,\widehat{R})\) witnessing that \((\mu\Sigma,\widehat{R})\) is reflexive (Lemma VII.4). Then the commutative diagram below proves \((\mu\Sigma,\widehat{R})\) to be a weak simulation. We are ready to state our main result. Recall that we work under the Assumptions IV.1 and VIII.1. **Theorem VIII.6** (Compositionality).: _Suppose that \(\widetilde{\gamma}\) is a weakening of the operational model \((\mu\Sigma,\gamma)\) such that \((\mu\Sigma,\iota,\widetilde{\gamma})\) is a lax \(\varrho\)-bialgebra. Then the weak similarity relation on \((\mu\Sigma,\gamma)\) is a congruence._ Proof.: Let \((\mu\Sigma,\lesssim)\) be the weak similarity relation on \((\mu\Sigma,\gamma)\). Its Howe closure \((\mu\Sigma,\widehat{\lesssim})\) satisfies \[(\mu\Sigma,\lesssim)\leq(\mu\Sigma,\widehat{\lesssim})\leq(\mu\Sigma,\lesssim).\] The first inequality is witnessed by the morphism \(\alpha_{\lesssim,\iota}\cdot\text{inl}\), for \(\alpha_{\lesssim,\iota}\) from (VII.1). For the second one we use that the relation \((\mu\Sigma,\widehat{\lesssim})\) is a weak simulation by Proposition VIII.5 (note that \(\lesssim\) is reflexive and transitive by Lemma VI.3, so the proposition applies) and that \(\lesssim\) is the greatest weak simulation.
Thus \((\mu\Sigma,\lesssim)\cong(\mu\Sigma,\widehat{\lesssim})\), and since \((\mu\Sigma,\widehat{\lesssim})\) is a congruence by Lemma VII.4, we conclude that so is \((\mu\Sigma,\lesssim)\). By choosing the trivial weakening \(\widetilde{\gamma}=\gamma\) and equipping \(B\) with the equality preorder, we obtain similarity as an instance of weak similarity (Remark VI.5(1)), and the laxness condition on the bialgebra \((\mu\Sigma,\iota,\gamma)\) holds trivially by (II.3). We obtain **Corollary VIII.7**.: _Similarity on \((\mu\Sigma,\gamma)\) is a congruence._ This is a variant of the main result of [20]. In fact, the present version is more general since its notion of similarity is parametric in a lifting of \(B\), while the result in _op. cit._ is about coalgebraic behavioural equivalence, which corresponds to the _canonical_ lifting of \(B\) (see Appendix, Section C). ## IX Applications We conclude with two applications of Theorem VIII.6. ### \(\mathcal{HO}\)_Specifications_ To recover the results of Section III, fix an \(\mathcal{HO}\) specification \(\mathcal{R}\) over the signature \(\Sigma\), corresponding to a higher-order GSOS law \(\varrho^{0}\) of \(\Sigma\) over \(B_{0}(X,Y)=Y+Y^{X}\). We take \(\mathbb{C}=\mathbf{Set}\) and instantiate the data of Assumptions VIII.1 to 1. the given polynomial functor \(\Sigma\); 2. the behaviour functor \(B(X,Y)=\mathcal{P}(Y+Y^{X})\), preordered by inclusion, with its relation lifting \(\vec{B}\) as in Remark III.3(1); 3. the higher-order GSOS law \(\varrho\) of \(\Sigma\) over \(B\) given by (III.5). It is not difficult to verify that the above data satisfies Assumptions VIII.1. Then by choosing the weakening \(\widetilde{\gamma}\) to be the weak transition system associated to \(\gamma_{0}\), see Notation III.1, we recover Theorem III.8 as a special case of Theorem VIII.6. ### _The \(\lambda\)-Calculus_ We briefly sketch how our framework applies to the \(\lambda\)-calculus, building on ideas from the work of Fiore et al. [16] and our previous work [20]. The (untyped call-by-name) \(\lambda\)-calculus is given by the small-step operational rules shown below, where \(s,s^{\prime},t\) range over possibly open \(\lambda\)-terms and \([t/x]\) denotes capture-avoiding substitution. \[\texttt{app1}\ \frac{s\to s^{\prime}}{s\,t\to s^{\prime}\,t}\qquad\qquad\texttt{app2}\ \overline{(\lambda x.s)\,t\to s[t/x]}\] Modelling the syntax with variable binding along the lines of [16, 20], these rules can be cast as a higher-order GSOS law, and an application of Theorem VIII.6 then yields: **Theorem IX.2**.: _The open extension of applicative similarity is a congruence: for all \(\lambda\)-terms \(s,t,t^{\prime}\), one has_ \[t\lesssim^{\mathrm{ap}}t^{\prime}\implies s\,t\lesssim^{\mathrm{ap}}s\,t^{\prime}\,\wedge\,t\,s\lesssim^{\mathrm{ap}}t^{\prime}\,s\,\wedge\,\lambda x.t\lesssim^{\mathrm{ap}}\lambda x.t^{\prime}.\] It follows that the open extension of applicative _bi_similarity, viz.
the relation \(\approx^{\mathrm{ap}}=\lesssim^{\mathrm{ap}}\cap\gtrsim^{\mathrm{ap}}\), is also a congruence. ## X Conclusions and Future Work We have developed relation liftings of bifunctors and an abstract analogue of Howe's method to prove congruence of coalgebraic weak similarity for higher-order GSOS laws. We have thus taken the first steps towards operational reasoning in the higher-order abstract GSOS framework. Logical relations [36, 35, 33, 15] are another important operational reasoning technique that we would like to cover in the future. Logical relations are typically type-indexed, while higher-order abstract GSOS has so far been applied to untyped languages. We aim to investigate typed languages in the context of higher-order abstract GSOS and develop abstract analogues of logical relations. It is worth noting that, even in the untyped setting, relation liftings of bifunctors already share a key characteristic with logical relations, namely that functions send related inputs to related outputs (Remark III.3(1)). Another goal is to apply our methods to call-by-value languages. As already noted in our previous work [20, Sec. 5.4], this appears to be more subtle than the call-by-name case. We envision a multi-sorted setting as a possible approach. Finally, we aim to explore effectful languages. For instance, by taking the behaviours \(\mathcal{P}(Y+Y^{X})\) or \(\mathcal{S}(Y+Y^{X})\), where \(\mathcal{S}\) is the subdistribution functor, our results already yield a form of compositionality for nondeterministic and probabilistic combinatory logic. For the latter, exploring behavioural distances instead of (bi)similarity is also a natural direction; we expect that existing work on probabilistic \(\lambda\)-calculi [12, 18] can provide some guidance.
2302.11989
Metric-oriented Speech Enhancement using Diffusion Probabilistic Model
Deep neural network based speech enhancement technique focuses on learning a noisy-to-clean transformation supervised by paired training data. However, the task-specific evaluation metric (e.g., PESQ) is usually non-differentiable and can not be directly constructed in the training criteria. This mismatch between the training objective and evaluation metric likely results in sub-optimal performance. To alleviate it, we propose a metric-oriented speech enhancement method (MOSE), which leverages the recent advances in the diffusion probabilistic model and integrates a metric-oriented training strategy into its reverse process. Specifically, we design an actor-critic based framework that considers the evaluation metric as a posterior reward, thus guiding the reverse process to the metric-increasing direction. The experimental results demonstrate that MOSE obviously benefits from metric-oriented training and surpasses the generative baselines in terms of all evaluation metrics.
Chen Chen, Yuchen Hu, Weiwei Weng, Eng Siong Chng
2023-02-23T13:12:35Z
http://arxiv.org/abs/2302.11989v1
# Metric-Oriented Speech Enhancement Using Diffusion Probabilistic Model ###### Abstract Deep neural network based speech enhancement technique focuses on learning a noisy-to-clean transformation supervised by paired training data. However, the task-specific evaluation metric (e.g., PESQ) is usually non-differentiable and can not be directly constructed in the training criteria. This mismatch between the training objective and evaluation metric likely results in sub-optimal performance. To alleviate it, we propose a metric-oriented speech enhancement method (MOSE), which leverages the recent advances in the diffusion probabilistic model and integrates a metric-oriented training strategy into its reverse process. Specifically, we design an actor-critic based framework that considers the evaluation metric as a posterior reward, thus guiding the reverse process to the metric-increasing direction. The experimental results demonstrate that MOSE obviously benefits from metric-oriented training and surpasses the generative baselines in terms of all evaluation metrics. Chen Chen, Yuchen Hu, Weiwei Weng, Eng Siong Chng School of Computer Science and Engineering, Nanyang Technological University, Singapore [email protected] Diffusion probabilistic model, speech enhancement, reinforcement learning ## 1 Introduction Recent advances in deep learning have brought remarkable success to speech enhancement, where a noisy-to-clean transformation is learned to remove additive noises in a supervised manner [1, 2, 3, 4]. However, this paradigm suffers from a mismatch between training and evaluation: the training criterion (e.g., Mean Square Error) must be differentiable for gradient calculation [5], while the evaluation metric (e.g., PESQ) is usually non-differentiable and thus cannot be directly modeled in the loss function as a minimized objective. Consequently, the optimized model cannot achieve the best performance in terms of the evaluation metric. This mismatch is also reported in other supervised learning tasks, such as machine translation [6, 7] and automatic speech recognition [8, 9, 10]. Prior works have utilized reinforcement learning (RL) based algorithms to harmonize the mismatch using a metric-based training approach [11], as these tasks contain a sequential decoding process that can be naturally viewed as a Markov Decision Process (MDP) [12]. Nevertheless, treating SE as a regression task, mainstream approaches train a one-shot discriminative model without the time-step concept required for an MDP, which makes RL-based optimization infeasible. The diffusion probabilistic model [13], showing outstanding results in generative tasks [14, 15], opens the possibility of metric-based optimization for the SE task, as it inherently consists of MDP-based diffusion and reverse processes [16]. More specifically, isotropic Gaussian noise is added to the clean speech during the step-by-step diffusion process, and the reverse process gradually estimates and subtracts the additive noise to restore the clean input [17]. In this work, we present a metric-oriented speech enhancement method called MOSE, which effectively incorporates the non-differentiable metric into the training objective. Inspired by actor-critic algorithms [18], we design a value-based neural network that is updated by the Bellman error [19] to evaluate the current policy in terms of a metric-related reward function, which then guides the prediction of the subtracted noise in the reverse process in a differentiable manner.
In this way, the original policy is optimized in the metric-increasing direction, while the value-based network is trained to provide reasonable feedback. Experimental results demonstrate that MOSE clearly benefits from metric-oriented training and beats other generative methods in terms of all metrics. Furthermore, it shows better generalization in the face of unseen noises with a large domain mismatch. ## 2 Preliminaries We first define the noisy speech as \(y\) and its corresponding ground-truth clean speech as \(x_{0}\). The speech enhancement task aims to learn a transformation \(f\) that converts the noisy input to the clean signal: \(x_{0}=f(y),\ x_{0},y\in\mathbb{R}^{L}\). ### Diffusion Probabilistic Model In this part, we briefly introduce the diffusion process and the reverse process of the typical diffusion probabilistic model. **Diffusion process** is formulated as a \(T\)-step Markov chain that gradually adds Gaussian noise to the clean signal \(x_{0}\) in each step \(t\). The Gaussian model is denoted as \(\mathcal{N}(x_{t};\,\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\), where \(\beta_{t}\) is a small positive constant that serves as a pre-defined schedule. With a sufficiently large number of diffusion steps \(T\), the latent variable \(x_{T}\) can be finally converted to an isotropic Gaussian distribution \(p_{latent}(x_{T})=\mathcal{N}(0,I)\). Therefore, based on \(x_{0}\), the sampling distribution of each step in the Markov chain can be derived as follows: \[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I), \tag{1}\] where \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\). **Reverse process** aims to restore \(x_{0}\) from the latent variable \(x_{T}\) along another Markov chain, which is denoted as \(p_{\theta}(x_{t-1}|x_{t})\), where \(\theta\) are learnable parameters. As the marginal likelihood \(p_{\theta}(x_{0})=\int p_{\theta}(x_{0},\cdots,x_{T-1}|x_{T})\cdot p_{\text{latent}}(x_{T})dx_{1:T}\) is intractable, the ELBO [13] is utilized to approximate a learning objective for neural model training. Therefore, the reverse process can be written as: \[\begin{split}p_{\theta}(x_{t-1}|x_{t})&=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\tilde{\beta}_{t}I),\\ \text{where}\quad\mu_{\theta}(x_{t},t)&=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t))\end{split} \tag{2}\] Here \(\mu_{\theta}(x_{t},t)\) denotes the mean of \(x_{t-1}\), which is obtained by subtracting the estimated Gaussian noise \(\epsilon_{\theta}(x_{t},t)\) from \(x_{t}\). Furthermore, the variance reduces to the constant \(\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}\). ### Reinforcement Learning Reinforcement learning (RL) is typically formulated as a Markov Decision Process (MDP) that includes a tuple of trajectories \(\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T}\rangle\). For each time step \(t\), the agent considers the state \(s_{t}\in\mathcal{S}\) to generate an action \(a_{t}\in\mathcal{A}\) which interacts with the environment. The transition dynamics \(\mathcal{T}(s_{t+1}|s_{t},a_{t})\) is defined as the transition probability from the current state \(s_{t}\) to the next state \(s_{t+1}\), which gains an instant reward \(r_{t}(s_{t},a_{t})\). The objective of RL is to learn an optimal policy that maximizes the cumulative reward \(\mathcal{R}\) along all time steps.
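As a concrete reference for the two processes of Section 2.1, Eqs. (1) and (2) amount to the following minimal NumPy sketch. This is our own illustration, not the authors' code; the linear spacing of the \(\beta_t\) schedule is an assumption, although its range matches the configuration quoted in Section 4.

```python
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.035, T)   # schedule range as quoted in Section 4
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Eq. (1): draw x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def reverse_mean(xt, t, eps_theta):
    """Eq. (2): mu_theta(x_t, t) = (x_t - beta_t / sqrt(1 - abar_t) *
    eps_theta) / sqrt(alpha_t), where eps_theta is the network's noise
    estimate for step t."""
    return (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_theta) \
        / np.sqrt(alphas[t])
```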
Since the diffusion probabilistic model formulates the speech enhancement task as an MDP in Section 2.1, an RL algorithm can be integrated into the reverse process to explore the optimal policy. More specifically, given the current state \(x_{t}\), the policy network is supposed to predict a Gaussian noise \(\epsilon_{t}\) as the current action. After subtracting \(\epsilon_{t}\) from \(x_{t}\), \(x_{t-1}\) is obtained as the next state, since the step number \(t\) decreases during the reverse process. Furthermore, the instant reward \(r_{t}\) is calculated by comparing \(x_{t}\) and \(x_{t-1}\), which guides the update of the parameters \(\theta\) during model training. ## 3 Methodology In this section, we introduce our proposed MOSE, which integrates the metric-oriented training into the reverse process of a conditional diffusion probabilistic model. The overview of MOSE is shown in Fig. 1. ### Conditional Diffusion Probabilistic Model As real-world noises usually do not obey a Gaussian distribution, we incorporate the noisy speech \(y\) into both procedures as a conditioner in this part. Specifically, a dynamic weight \(w_{t}\in[0,1]\) is employed for linear interpolation from \(x_{0}\) to \(x_{T}\). Therefore, as shown in Fig. 1, each latent variable \(x_{t}\) consists of three parts: the clean component \((1-w_{t})\times x_{0}\), the noisy component \(w_{t}\times y\), and the Gaussian noise \(\epsilon\). Furthermore, the diffusion process in Eq. (1) can be rewritten as: \[q(x_{t}|x_{0},y)=\mathcal{N}(x_{t};(1-w_{t})\sqrt{\bar{\alpha}_{t}}x_{0}+w_{t}\sqrt{\bar{\alpha}_{t}}y,\delta_{t}I), \tag{3}\] \[\text{where}\quad\delta_{t}=(1-\bar{\alpha}_{t})-w_{t}^{2}\bar{\alpha}_{t} \tag{4}\] The conditional reverse process starts from \(x_{T}\) with \(w_{T}=1\), which is denoted as \(\mathcal{N}(x_{T},\sqrt{\bar{\alpha}_{T}}y,\delta_{T}I)\). Referring to Eq. (2), we denote the conditional reverse process as: \[p(x_{t-1}|x_{t},y)=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},y,t),\tilde{\delta}_{t}I), \tag{5}\] where \(\mu_{\theta}(x_{t},y,t)\) is the predicted mean of the variable \(x_{t-1}\). This means that the neural model \(\theta\) considers both the variable \(x_{t}\) and the noisy conditioner \(y\) during its prediction. Therefore, similar to Eq. (2), we define \(\mu_{\theta}\) as a linear combination of \(x_{t}\), \(y\), and \(\epsilon_{\theta}\): \[\mu_{\theta}(x_{t},y,t)=c_{xt}x_{t}+c_{yt}y-c_{\epsilon t}\epsilon_{\theta}(x_{t},y,t), \tag{6}\] Figure 1: The conditional diffusion probabilistic model (A) and metric-oriented training (B). The red and blue arrows respectively denote the diffusion and reverse process. \(w_{t}\) is the weight of linear interpolation, and \(m_{t}\) is the task-specific metric. where the coefficients \(c_{xt},c_{yt},\) and \(c_{\epsilon t}\) can be derived from the ELBO optimization criterion in [20]. Finally, we combine the Gaussian noise \(\epsilon\) and the non-Gaussian noise \(y-x_{0}\) into the ground truth \(C_{t}^{noise}\): \[C_{t}^{noise}(x_{0},y,\epsilon)=\frac{w_{t}\sqrt{\bar{\alpha}_{t}}}{\sqrt{1-\bar{\alpha}_{t}}}(y-x_{0})+\frac{\sqrt{\delta_{t}}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon \tag{7}\] \[\mathcal{L}_{1}=\parallel C_{t}^{noise}(x_{0},y,\epsilon)-\epsilon_{\theta}(x_{t},y,t)\parallel_{1} \tag{8}\] where \(C_{t}^{noise}\) provides supervision information, and \(\mathcal{L}_{1}\) is calculated for the back propagation of the neural network.
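For concreteness, Eqs. (3), (4), (7) and (8) translate into the following NumPy sketch (our own code, not the released implementation; `eps_theta` stands for the Diffusion network's output, and `w` and `alpha_bars` are precomputed schedule arrays):

```python
import numpy as np

def delta(t, w, alpha_bars):
    """Eq. (4): delta_t = (1 - abar_t) - w_t^2 * abar_t."""
    return (1.0 - alpha_bars[t]) - w[t] ** 2 * alpha_bars[t]

def q_sample_cond(x0, y, t, eps, w, alpha_bars):
    """Eq. (3): x_t = (1 - w_t) sqrt(abar_t) x0 + w_t sqrt(abar_t) y
                      + sqrt(delta_t) eps."""
    a = np.sqrt(alpha_bars[t])
    return ((1.0 - w[t]) * a * x0 + w[t] * a * y
            + np.sqrt(delta(t, w, alpha_bars)) * eps)

def c_noise(x0, y, t, eps, w, alpha_bars):
    """Eq. (7): ground-truth mix of non-Gaussian noise y - x0 and eps."""
    s = np.sqrt(1.0 - alpha_bars[t])
    return (w[t] * np.sqrt(alpha_bars[t]) / s * (y - x0)
            + np.sqrt(delta(t, w, alpha_bars)) / s * eps)

def l1_loss(x0, y, t, eps, eps_theta, w, alpha_bars):
    """Eq. (8): L1 distance between C_t^noise and the network output."""
    return np.abs(c_noise(x0, y, t, eps, w, alpha_bars) - eps_theta).sum()
```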
### Metric-oriented Training Given the task-specific evaluation metric \(m\), the score \(m_{t}\) can be calculated at each step \(t\) from \(x_{t}\) and \(x_{0}\), as they have the same shape. In order to directly optimize \(m_{t}\), an actor-critic RL algorithm is integrated into the conditional reverse process, as shown in Fig. 1 (B). Since we hope that the latent variable is iterated in the metric-increasing direction during the reverse process, the reward function is customized as \(r_{t}=m_{t-1}-m_{t}\), where \(t\) runs from \(T\) down to 0. However, the posterior reward \(r_{t}\) is obviously non-differentiable with respect to \(\theta\), and thus fails to propagate gradients. To this end, we further employ a **Value network** \(V\) with parameters \(\theta_{v}\) (the blue box in Fig. 2), and the original network is denoted as the **Diffusion network** \(D\) with parameters \(\theta_{d}\) for distinction. In general, the Diffusion network consumes \(x_{t}\) to predict the subtracted noise \(\epsilon_{t}\) as the action, while the Value network generates a score \(v_{t}\) to evaluate this \(\epsilon_{t}\) based on \(x_{t}\). The training strategy of MOSE is explained in Algorithm 1. ``` 1:Randomly initialize the Diffusion network \(D(x|\theta_{d})\) and Value network \(V(x,\epsilon|\theta_{v})\). 2:Initialize \(N_{total}\), \(N_{th}\), \(\gamma\), and \(\alpha\) 3:for\(i=1,2,\cdots,N_{total}\)do 4: Sample \((x_{0},y)\) from Dataset 5: Sample \(\epsilon\)\(\sim\)\(\mathcal{N}(0,1)\) and \(t\)\(\sim\)Uniform\((\{1,\cdots,T\})\) 6: Set \(x_{t}=((1-w_{t})\sqrt{\bar{\alpha}_{t}}x_{0}+w_{t}\sqrt{\bar{\alpha}_{t}}y)+\sqrt{\delta_{t}}\epsilon\) 7: Calculate \(C_{t}^{noise}\) according to Eq. (7) 8:if\(i<N_{th}\)then 9: Update network \(D\) by minimizing \(\mathcal{L}_{1}\) in Eq. (8) 10:else 11: Calculate \(\epsilon_{t}=D(x_{t},y,t|\theta_{d})\) as action 12: Calculate \(\nabla_{\theta_{d}}\mathcal{L}_{2}=-V(x_{t},\nabla_{\theta_{d}}\epsilon_{t},x_{0}|\theta_{v})\) 13: Update \(D\) by minimizing \(\mathcal{L}=\mathcal{L}_{1}+\alpha\cdot\mathcal{L}_{2}\) 14: Calculate \(x_{t-1}\) according to Eq. (6) as next state 15: Calculate \(r_{t}=m_{t-1}(x_{t-1},x_{0})-m_{t}(x_{t},x_{0})\) 16: Set \(\mathcal{V}_{t}=r_{t}+\gamma V(x_{t-1},D(x_{t-1},y,t-1),x_{0}|\theta_{v})\) 17: Calculate \(\nabla_{\theta_{v}}\mathcal{L}_{3}=(\mathcal{V}_{t}-\nabla_{\theta_{v}}V(x_{t},\epsilon_{t},x_{0}|\theta_{v}))^{2}\) 18: Update network \(V\) by minimizing \(\mathcal{L}_{3}\) 19:endif 20:endfor ``` **Algorithm 1** MOSE Training MOSE starts training with conventional ELBO optimization: as explained in lines 3\(\sim\)9 of Algorithm 1, only the Diffusion network \(D\) is trained for the first \(N_{th}\) iterations. We then present the joint training of the Diffusion network \(D\) and the Value network \(V\) in lines 10\(\sim\)18. Minimizing \(\mathcal{L}_{2}=-V(x_{t},\epsilon_{t},x_{0}|\theta_{v})\) encourages \(D\) to gain a higher score from \(V\), and \(\mathcal{L}_{2}\) is incorporated with a weight \(\alpha\) to stabilize training. In order to encourage the Value network \(V\) to provide reasonable evaluations, we employ the widely used Bellman error [19] (line 17) to update \(V\), where \(\gamma\) is a decay factor for future rewards. Consequently, the output score \(v_{t}\) considers both current and future rewards based on the task-specific metric. For inference, we adopt the same fast sampling scheme as in [15]. ## 4 Experiment ### Experimental Setup **Database**. We choose the publicly available VoiceBank-DEMAND dataset [21] for SE training and evaluation.
Specifically, the training set contains 11,572 noisy utterances from 28 speakers, mixed with 10 different noise types at four SNR levels (0, 5, 10, and 15 dB) at a sampling rate of 16 kHz, together with their corresponding clean utterances. The test set contains 5 types of unseen noise at SNR levels of (2.5, 7.5, 12.5, and 17.5 dB). To evaluate the performance of a model on unseen noises, we further mix the test set of TIMIT [22] with "helicopter" and "babycry" noises at different SNR levels (-6, -3, 0, 3, 6 dB), where a large domain mismatch exists between training and testing. **Configuration**. The internal structure of MOSE is shown in Fig. 2. We employ 30 residual blocks with 64 channels in the Diffusion network. The MLP block contains 4 linear layers with ReLU activations. For training, MOSE takes 50 diffusion steps with the training noise schedule \(\beta_{t}\in[1\times 10^{-4},0.035]\) and the interpolation weight \(w_{t}=\sqrt{(1-\bar{\alpha}_{t})/\sqrt{\bar{\alpha}_{t}}}\). The \(N_{total}\), \(N_{th}\), and \(\gamma\) in Algorithm 1 are respectively set to 40k, 30k, and 0.95. The initial learning rate of the Diffusion network is set to \(2\times 10^{-4}\) for the first \(N_{th}\) iterations and decreases to \(1\times 10^{-4}\) Figure 2: The main structure of MOSE. Dashed lines stand for the back propagation of the neural networks.
In addition, Table 2 summarizes the comparison between MOSE and other competitive SE methods, which contains 3 generative models and 2 discriminative methods. We observe that MOSE surpasses generative baselines in terms of all metrics, however, the best performance is still achieved by discriminative method. #### 4.2.3 Generalization on unseen noise We evaluate our trained model in unseen noisy condition with a wide range of SNR levels, where Conv-TasNet method is reproduced for comparison. The PESQ results are shown in Table 3. Despite gaining outstanding performance on the matched test set, we observed that the PESQ of Conv-TasNet dramatically degrades due to noise domain mismatch. However, the MOSE performs better than Conv-TasNet in terms of PESQ, especially in low-SNR conditions. ## 5 Conclusion In this paper, we propose a speech enhancement method, called MOSE, which addresses the mismatch problem between training objective and evaluation metric. The probabilistic diffusion model is leveraged as MDP based framework, where metric-oriented training is presented in the reverse process. The experimental results demonstrate that MOSE beats other generative baselines in terms of all metrics, and show better generalization on unseen noises. \begin{table} \begin{tabular}{c|c|c|c c c c} \hline \hline ID & System & \(\alpha\) & PESQ & CSIG & CBAK & COVL \\ \hline 1 & Unprocessed & - & 1.97 & 3.35 & 2.44 & 2.63 \\ \hline 2 & MOSE & 0 & 2.44 & 3.65 & 2.87 & 3.01 \\ \hline 3 & & 0.1 & 2.48 & 3.66 & 2.90 & 3.06 \\ 4 & MOSE & 1 & **2.54** & **3.73** & **2.93** & **3.12** \\ 5 & & 5 & 2.51 & 3.69 & 2.91 & 3.08 \\ \hline \hline \end{tabular} \end{table} Table 1: Result of metric-oriented training. \begin{table} \begin{tabular}{c|c|c|c c c c} \hline \hline System & Type & PESQ & CSIG & CBAK & COVL \\ \hline Unprocessed & - & 1.97 & 3.35 & 2.44 & 2.63 \\ \hline DSEGAN [23] & Gen. & 2.39 & 3.46 & 3.11 & 2.90 \\ SE-Flow [24] & Gen. & 2.28 & 3.70 & 3.03 & 2.97 \\ CDiffuSE [20] & Gen. & 2.52 & 3.72 & 2.91 & 3.10 \\ \hline WaveCRN [25] & Dis. & 2.64 & 3.94 & **3.37** & 3.29 \\ Conv-TasNet [26] & Dis. & **2.67** & **3.94** & 3.31 & **3.30** \\ \hline MOSE (ours) & Gen. & 2.54 & 3.72 & 2.93 & 3.06 \\ \hline \hline \end{tabular} \end{table} Table 2: MOSE _vs._ other methods. “Gen.” and “Dis.” respectively denote generative and discriminative models. Figure 3: The relationship between \(\Delta\)PESQ and training loss \(-\mathcal{L}_{1}\), as well as gained reward \(\mathcal{R}\).
2309.02339
LDP polygons and the number 12 revisited
We give a combinatorial proof of a lattice point identity involving a lattice polygon and its dual, generalizing the formula $area(\Delta) + area(\Delta^*) = 6$ for reflexive $\Delta$. The identity is equivalent to the stringy Libgober-Wood identity for toric log del Pezzo surfaces.
Ulrike Bücking, Christian Haase, Karin Schaller, Jan-Hendrik de Wiljes
2023-09-05T15:57:07Z
http://arxiv.org/abs/2309.02339v1
# LDP polygons and the number 12 revisited ###### Abstract. We give a combinatorial proof of a lattice point identity involving a lattice polygon and its dual, generalizing the formula \(\operatorname{area}\left(\Delta\right)+\operatorname{area}\left(\Delta^{*}\right)=6\) for reflexive \(\Delta\). The identity is equivalent to the stringy Libgober-Wood identity for toric log del Pezzo surfaces. ## 1. Introduction The goal of this article is to give a combinatorial proof of the following combinatorial identity: Consider the convex hull \(\Delta\subseteq\mathbb{R}^{2}\) of \(n_{1},\ldots,n_{k}\in\mathbb{Z}^{2}\). Assume that each \(n_{i}\) has coprime coordinates and that the origin is an interior point of \(\Delta\). This data defines a piecewise linear function \(\kappa_{\Delta}\colon\mathbb{R}^{2}\to\mathbb{R}\) via \[\kappa_{\Delta}(x)=-\min\left\{\lambda\in\mathbb{R}_{\geq 0}\,|\,x\in\lambda\Delta\right\}\,.\] The dual polygon is denoted by \(\Delta^{*}\). Then \[6\sum_{n\in\Delta\cap\mathbb{Z}^{2}}\left(\kappa_{\Delta}(n)+1\right)^{2}=\operatorname{area}\left(\Delta\right)+\operatorname{area}\left(\Delta^{*}\right)\,.\] This identity has been proven by Batyrev and the third author [1, Corollary 4.5] using a string-theoretic variant (allowing mild singularities) of the Libgober-Wood identity for compact complex manifolds [13, Proposition 2.3]. Conversely, we can obtain the stringy Libgober-Wood identity for toric surfaces as a corollary of our combinatorial proof. In our proof, we reduce this (global) statement to a local and cone-wise statement, whose algebraic geometry analogue could be of independent interest. The formula as well as its variants in higher dimensions have a rich and colorful history. In the reflexive case, where both \(\Delta\) and \(\Delta^{*}\) have only integral vertices, the formula reduces to \(\operatorname{area}\left(\Delta\right)+\operatorname{area}\left(\Delta^{*}\right)=6\). Rodriguez-Villegas & Poonen [20] as well as Hille & Skarke [14] prove non-convex and group-theoretic generalizations of that latter formula, while Kasprzyk & Nill [11] relax the reflexivity hypothesis. Still in dimension two, Haase & Schicho [14] and also Kolodziejczyk & Olszewska [15] prove refined inequalities taking additional invariants into account. In the open problem collection [1], an equation for 3-dimensional reflexive polytopes is stated, and a combinatorial proof was sought. Godinho, von Heymann & Sabatini prove generalizations of these formulas in higher dimensions. ## 2. Notation and Preliminaries Throughout, let \(N\cong\mathbb{Z}^{2}\) be a lattice of rank two with dual lattice \(M\) and associated real vector spaces \(N_{\mathbb{R}}\) and \(M_{\mathbb{R}}\). An _LDP polygon_ \(\Delta\subseteq N_{\mathbb{R}}\) is a lattice polygon whose vertices are primitive lattice points of \(N\) and which contains the origin in its interior; its dual polygon is \(\Delta^{*}:=\{m\in M_{\mathbb{R}}\,|\,\langle m,n\rangle\geq-1\text{ for all }n\in\Delta\}\). In general, the vertices of the dual polygon \(\Delta^{*}\subseteq M_{\mathbb{R}}\) to an LDP polygon \(\Delta\) are not lattice points in \(M\), _i.e._, \(\Delta^{*}\) is in general a rational polygon. If \(\Delta\subseteq N_{\mathbb{R}}\) is a reflexive polygon, then the origin \(0\in N\) is the only interior lattice point of \(\Delta\). Hence, all vertices of \(\Delta\) are primitive lattice points in \(N\), _i.e._, LDP polygons form a superclass of reflexive polygons. Let \(\Delta\subseteq N_{\mathbb{R}}\) be a lattice polygon. Then we define \(\operatorname{v}(\Delta)\) to be the _normalized volume of \(\Delta\)_, _i.e._, the positive integer \[\operatorname{v}(\Delta):=2!\cdot\operatorname{vol}_{2}(\Delta)\,,\] where \(\operatorname{vol}_{2}(\Delta)\) denotes the \(2\)-dimensional volume of \(\Delta\) with respect to the lattice \(N\). Note that \(\operatorname{vol}_{2}(\Delta)=\operatorname{area}(\Delta)\) if \(N=\mathbb{Z}^{2}\).
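As a quick sanity check of the introductory identity, the following sketch (ours, not part of the paper) verifies \(6\sum_{n}(\kappa_{\Delta}(n)+1)^{2}=\operatorname{area}(\Delta)+\operatorname{area}(\Delta^{*})\) in exact arithmetic on the triangle of Example 3.2 below, \(\Delta=\operatorname{conv}((0,-1),(3,2),(-1,2))\); both sides evaluate to \(9\), matching \(\operatorname{v}(\Delta)+\operatorname{v}(\Delta^{*})=12+6=18\) after multiplying by \(2\).

```python
from fractions import Fraction as F

verts = [(0, -1), (3, 2), (-1, 2)]          # an LDP triangle (Example 3.2)

def area(poly):                             # shoelace formula, exact
    s = sum(F(x1) * y2 - F(x2) * y1
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2

# Facet inequalities a.x <= b with b > 0 (possible since 0 is interior), so
# kappa(x) = -min{lam >= 0 : x in lam*Delta} = -max(0, max_i a_i.x / b_i).
facets = []
for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]):
    a, b = (y2 - y1, x1 - x2), F((y2 - y1) * x1 + (x1 - x2) * y1)
    if b < 0:
        a, b = (-a[0], -a[1]), -b
    facets.append((a, b))

def kappa(p):
    return -max([F(0)] + [(a[0]*p[0] + a[1]*p[1]) / b for a, b in facets])

dual = [(-F(a[0]) / b, -F(a[1]) / b) for a, b in facets]   # vertices of Delta^*

# Lattice points of Delta are exactly those with kappa >= -1.
lhs = 6 * sum((kappa((x, y)) + 1) ** 2
              for x in range(-1, 4) for y in range(-1, 3)
              if kappa((x, y)) >= -1)
print(lhs, area(verts) + area(dual))        # prints: 9 9
```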
Similarly, we define the positive integer \(\operatorname{v}(\theta):=k!\cdot\operatorname{vol}_{k}(\theta)\) for a \(k\)-dimensional face \(\theta\preceq\Delta\) of \(\Delta\), where \(\operatorname{vol}_{k}(\theta)\) denotes the \(k\)-dimensional volume of \(\theta\) with respect to the sublattice \(\langle\theta\rangle_{\mathbb{R}}\cap N\). If \(\Delta\) has vertices in \(N_{\mathbb{Q}}:=N\otimes_{\mathbb{Z}}\mathbb{Q}\), _i.e._, \(\Delta\) is a _rational polygon_, then we can similarly define the positive rational number \(\operatorname{v}(\theta)\) for any face \(\theta\preceq\Delta\). For this purpose, we consider an integer \(l\) such that \(l\Delta\) is a lattice polygon and define for a \(k\)-dimensional face \(\theta\preceq\Delta\) its normalized volume as \(\operatorname{v}(\theta):=\frac{1}{l^{k}}\operatorname{v}(l\theta)\). Let \(U\subseteq N_{\mathbb{R}}\) be a finite set. Then a (_convex polyhedral_) _cone_ generated by \(U\) is defined as the set \[\sigma:=\operatorname{cone}(U)=\{\sum_{u\in U}\lambda_{u}u\,|\, \lambda_{u}\geq 0\}\,.\] If \(U\) consists of \(\mathbb{R}\)-linear independent lattice vectors, then the corresponding (half-open) _fundamental parallelogram_ of \(U\) is \[\Pi:=\Pi(U)=\{\sum_{u\in U}\lambda_{u}u\,|\,0\leq\lambda_{u}<1\}\,.\] Note that the normalized volume of the fundamental parallelogram equals the number of lattice points contained in it. Moreover, a \(2\)-dimensional cone \(\sigma\) is called _unimodular_ if its ray generators \(u_{1},u_{2}\) form a part of a \(\mathbb{Z}\)-basis of \(N\). Note that in this case the fundamental parallelogram \(\Pi(u_{1},u_{2})\) contains only one lattice point. **Definition 2.2**.: Let \(\sigma\subseteq N_{\mathbb{R}}\) be a \(2\)-dimensional cone. We define \(\nabla_{\sigma}\) to be the convex hull of the origin and the primitive ray generators \(u_{1}\) and \(u_{2}\) of the given cone \(\sigma\), _i.e._, \[\nabla_{\sigma}:=\operatorname{conv}(0,u_{1},u_{2})\,.\] We denote the _relative interior of \(\nabla_{\sigma}\)_ by \(\nabla_{\sigma}^{\circ}\). The _sail of \(\sigma\)_ is the non-convex half-open lattice polygon defined as \[\operatorname{sail}_{\sigma}:=\nabla_{\sigma}\setminus\operatorname{conv}( \nabla_{\sigma}\cap N\setminus\{0\})\] and its closure denoted by \(\overline{\operatorname{sail}}_{\sigma}\). The _normalized volume \(\mathrm{v}(\sigma)\) of_ a \(2\)-dimensional cone \(\sigma\) is defined to be the normalized volume of the lattice polygon \(\theta_{\sigma}\) obtained as the convex hull of the origin and all primitive ray generators of the given cone \(\sigma\), _i.e._, \[\mathrm{v}(\sigma):=\mathrm{v}(\theta_{\sigma})\,.\] ### Toric surfaces A _toric surface_\(X\) is a normal variety of dimension \(2\) over the field of complex numbers \(\mathbb{C}\) containing a torus \(\mathbb{T}\cong(\mathbb{C}^{*})^{2}\) as a Zariski open set such that the action of \((\mathbb{C}^{*})^{2}\) on itself extends to an action on \(X\). **Definition 2.3**.: Let \(\Delta\subseteq N_{\mathbb{R}}\) be a lattice polygon with \(0\in\Delta^{\circ}\cap N\). We define \(\Sigma_{\Delta}\) to be the _spanning fan_ of \(\Delta\) in \(N_{\mathbb{R}}\), _i.e._, \(\Sigma_{\Delta}:=\{\sigma_{\theta}\,|\,\theta\preceq\Delta\}\), where \(\sigma_{\theta}\) is the cone \(\mathbb{R}_{\geq 0}\theta\) spanned by the face \(\theta\preceq\Delta\) of \(\Delta\) with \(\dim(\sigma_{\theta})=\dim(\theta)+1\). 
In particular, the spanning fan is a fan associated with a (in general non-smooth) normal projective toric surface \(X_{\Sigma_{\Delta}}\). Moreover, one obtains a resolution of singularities of \(X_{\Sigma_{\Delta}}\) through the toric morphism \(X_{\Sigma^{\prime}_{\Delta}}\to X_{\Sigma_{\Delta}}\), where \(\Sigma^{\prime}_{\Delta}\) is a suitable refinement of \(\Sigma_{\Delta}\). In our \(2\)-dimensional case, the rays of \(\Sigma^{\prime}_{\Delta}\) are spanned by all lattice points lying on the boundary of \(\cup_{\sigma\in\Sigma_{\Delta}[2]}\operatorname{sail}_{\sigma}\), where \(\Sigma_{\Delta}[i]\) denotes the set of \(i\)-dimensional cones in the fan \(\Sigma_{\Delta}\). We briefly recap that a normal projective surface is a _log del Pezzo surface_ if it has at worst log-terminal singularities and if its anticanonical divisor is an ample \(\mathbb{Q}\)-Cartier divisor. Moreover, toric log del Pezzo surfaces correspond one-to-one to LDP polygons. The fan \(\Sigma\) defining a toric log del Pezzo surface \(X\) is the spanning fan \(\Sigma_{\Delta}\) of the corresponding LDP polygon \(\Delta\). In particular, any LDP polygon \(\Delta\) is the convex hull of all primitive ray generators of elements in \(\Sigma_{\Delta}[1]\). Let \(\Delta\subseteq N_{\mathbb{R}}\) be a lattice polygon with \(0\in\Delta^{\circ}\cap N\). Then there exists a \(\Sigma_{\Delta}\)-piecewise linear function \(\kappa_{\Delta}:N_{\mathbb{R}}\to\mathbb{R}\) corresponding to the anticanonical divisor on \(X_{\Sigma_{\Delta}}\) that is linear on each cone \(\sigma\) of \(\Sigma_{\Delta}\) and has value \(-1\) on every primitive ray generator of the \(1\)-dimensional cones of \(\Sigma_{\Delta}\). In the rest of this subsection, we aim to introduce the stringy version of the Libgober-Wood identity from a geometric point of view, restricted to log del Pezzo surfaces: If \(V\) is an arbitrary smooth projective surface, the _\(E\)-polynomial_ of \(V\) is defined as \[E\left(V;u,v\right):=\sum_{0\leq p,q\leq 2}(-1)^{p+q}h^{p,q}\left(V\right)u^{p}v^{q}\,,\] where \(h^{p,q}\left(V\right)\) denote the Hodge numbers of \(V\). The _stringy \(E\)-function_ of a normal projective \(\mathbb{Q}\)-Gorenstein variety \(X\) with at worst log-terminal singularities is a rational algebraic function in two variables \(u,v\) defined by the formula \[E_{\mathrm{str}}(X;u,v):=\sum_{\emptyset\subset J\subset I}E(D_{J};u,v)\prod_{j\in J}\left(\frac{uv-1}{(uv)^{a_{j}+1}-1}-1\right),\] where \(\rho:Y\to X\) is some desingularization of \(X\), whose exceptional locus is a union of smooth irreducible divisors \(D_{1},\dots,D_{s}\) with only simple normal crossings and \(K_{Y}=\rho^{*}K_{X}+\sum_{i=1}^{s}a_{i}D_{i}\) for some rational numbers \(a_{i}>-1\). For any non-empty subset \(J\subseteq I:=\{1,\dots,s\}\), we define \(D_{J}\) to be the smooth subvariety \(\cap_{j\in J}D_{j}\). As a special case, this formula implies \(E_{\mathrm{str}}(X;u,v)=E(X;u,v)\) if \(X\) is smooth. Let \(X\) be a toric log del Pezzo surface associated with a fan \(\Sigma\). Then the _stringy version of the Libgober-Wood identity_ is given as \[\frac{d^{2}}{du^{2}}E_{\mathrm{str}}\left(X;u,1\right)\Big{|}_{u=1}=\frac{1}{6}c_{2}^{\mathrm{str}}(X)+\frac{1}{6}c_{1}(X)\cdot c_{1}^{\mathrm{str}}(X)=\frac{1}{6}c_{2}^{\mathrm{str}}(X)+\frac{1}{6}c_{1}(X)^{2}\,,\] where \(c_{k}^{\mathrm{str}}(X)\) denotes the _\(k\)-th stringy Chern class_ introduced in [1, 1].
In particular, the \(k\)-th stringy Chern class of \(X\) can be computed purely combinatorially via \[c_{k}^{\mathrm{str}}(X)=\sum_{\sigma\in\Sigma(k)}\mathrm{v}(\sigma)\cdot[X_{\sigma}]\] [2], where \([X_{\sigma}]\) denotes the class of the closed torus orbit \(X_{\sigma}\) corresponding to a given cone \(\sigma\in\Sigma\). The general stringy version of the Libgober-Wood identity, holding for any projective variety with at worst log-terminal singularities, can be found in [1]. ## 3. Main theorem and its reduction to a local version We present a purely combinatorial proof of the following identity, which is equivalent to the stringy Libgober-Wood identity for log del Pezzo surfaces and relates LDP polygons to the number \(12\): **Theorem 3.1** ([2, Corollary 4.5]).: _Let \(\Delta\subseteq N_{\mathbb{R}}\) be an LDP polygon. Then_ \[12\sum_{n\in\Delta\cap N}\left(\kappa_{\Delta}(n)+1\right)^{2}=\mathrm{v}\left(\Delta\right)+\mathrm{v}\left(\Delta^{*}\right)\,,\] _where \(\kappa_{\Delta}:N_{\mathbb{R}}\to\mathbb{R}\), \(x\mapsto-\min\left\{\lambda\in\mathbb{R}_{\geq 0}\,|\,x\in\lambda\Delta\right\}\). In particular, one always has \(\mathrm{v}\left(\Delta\right)+\mathrm{v}\left(\Delta^{*}\right)\geq 12\) and equality holds if and only if \(\Delta\) is reflexive._ **Example 3.2**.: Let \(\Delta\subseteq N_{\mathbb{R}}\) be the LDP polygon given as the convex hull of \((0,-1)\), \((3,2)\), and \((-1,2)\) (cf. Figure 1A). Then Theorem 3.1 yields \[12\sum_{n\in\Delta\cap N}\left(\kappa_{\Delta}(n)+1\right)^{2}=12\cdot(1^{2}+0.5^{2}+0.5^{2})=18=12+6=\mathrm{v}\left(\Delta\right)+\mathrm{v}\left(\Delta^{*}\right)\,,\] where the dual rational polygon \(\Delta^{*}\subseteq M_{\mathbb{R}}\) is the convex hull of \((0,-0.5)\), \((3,1)\), and \((-1,1)\) (cf. Figure 1B). Our strategy relies on a decomposition of the identity in Theorem 3.1 using the spanning fan \(\Sigma_{\Delta}\) of the given LDP polygon \(\Delta\) and considering its \(2\)-dimensional cones separately. To do so, we need the following **Definition 3.3**.: Let \(\sigma\in\Sigma_{\Delta}[2]\) be a \(2\)-dimensional cone with primitive ray generators \(u_{1}\) and \(u_{2}\in N\). Moreover, let \(m_{\sigma}\in M_{\mathbb{R}}\) be the vector dual to the edge \(u_{1}-u_{2}\), _i.e._, \(\langle m_{\sigma},u_{1}\rangle=-1=\langle m_{\sigma},u_{2}\rangle\). We consider all \(2\)-dimensional cones \(\sigma_{1},\ldots,\sigma_{k_{\sigma}}\) in the refined fan \(\Sigma_{\Delta}^{\prime}\) of the given spanning fan \(\Sigma_{\Delta}\) that are contained in \(\sigma\). To this end, we enumerate the corresponding \(k_{\sigma}\) edges of \(\operatorname{sail}_{\sigma}\) from \(u_{2}\) to \(u_{1}\) consecutively, denote the corresponding dual vectors by \(m_{\sigma,1},\ldots,m_{\sigma,k_{\sigma}}\), and define \[\operatorname{conv}[m_{\sigma},i]:=\operatorname{conv}(m_{\sigma},m_{\sigma,i},m_{\sigma,i+1})\,.\] Note that the rays of all cones in \(\Sigma_{\Delta}^{\prime}\) lying in \(\sigma\) are spanned by the non-zero lattice points of \(\operatorname{sail}_{\sigma}\). Moreover, \(m_{\sigma,1},\ldots,m_{\sigma,k_{\sigma}}\) have integer coordinates as the resolved cones are unimodular, while \(m_{\sigma}\) may have rational coordinates. **Theorem 3.4**.: _Let \(\Delta\subseteq N_{\mathbb{R}}\) be an LDP polygon and \(\sigma\in\Sigma_{\Delta}[2]\) a \(2\)-dimensional cone of the spanning fan \(\Sigma_{\Delta}\).
Then_ \[12\sum_{n\in\nabla_{\sigma}^{2}\cap N}(\kappa(n)+1)^{2}=\operatorname{v}\left( \nabla_{\sigma}\setminus\operatorname{sail}_{\sigma}\right)+\sum_{i=1}^{k_{ \sigma}-1}\operatorname{v}(\operatorname{conv}[m_{\sigma},i])\,, \tag{1}\] _where \(\kappa:=\kappa_{\Delta}|_{\sigma}\) is a linear function given as the restriction of the piecewise linear function \(\kappa_{\Delta}\) to the cone \(\sigma\)._ **Example 3.5**.: Let \(\Delta\subseteq N_{\mathbb{R}}\) be the LDP polygon given in Example 3.2 and \(\sigma\in\Sigma_{\Delta}[2]\) the \(2\)-dimensional cone of the spanning fan \(\Sigma_{\Delta}\) with primitive ray generators Figure 1. **LDP polygon \(\boldsymbol{\Delta}\).** The origin is highlighted with a gray background. (A) Lattice polygon \(\Delta\) with \(\operatorname{v}(\Delta)=12\). (B) Dual rational polygon \(\Delta^{*}\) with rational vertex (blue) and \(\operatorname{v}(\Delta^{*})=6\). \(u_{1}=(3,2)\) and \(u_{2}=(-1,2)\in N\) (cf. Figure 2A). Then Theorem 3.4 yields \[12\sum_{n\in\nabla_{\sigma}^{\circ}\cap N}(\kappa(n)+1)^{2} =12\sum_{n\in\{(0,1),(1,1)\}}(\kappa(n)+1)^{2}=6=5+0.5+0.5\] \[=\mathrm{v}\left(\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma} \right)+\mathrm{v}(\mathrm{conv}[m_{\sigma},1])+\mathrm{v}(\mathrm{conv}[m_{ \sigma},2])\] \[=\mathrm{v}\left(\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma} \right)+\sum_{i=1}^{k_{\sigma}-1}\mathrm{v}(\mathrm{conv}[m_{\sigma},i])\,,\] where the \(\nabla_{\sigma}=\mathrm{conv}((0,0),u_{1},u_{2})\), \(\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma}=\mathrm{conv}(u_{1},u_{2},(0,1 ),(1,1))\), \(k_{\sigma}=3\) (cf. Figure 2A, dotted edges), and \(m_{\sigma}=(0,-0.5)\), \(m_{\sigma,1}=(-1,-1)\), \(m_{\sigma,2}=(0,-1)\), \(m_{\sigma,3}=(1,-2)\) (cf. Figure 2B). Theorem 3.4, which we prove combinatorially in Section 4, is our main ingredient for our combinatorial proof of the identity in Theorem 3.1. Summing up Equation (1) over all 2-dimensional cones of the spanning fan \(\Sigma_{\Delta}\) of our given LDP polygon \(\Delta\), we obtain \[12\sum_{n\in(\Delta\cap N)\setminus\{0\}}(\kappa(n)+1)^{2}=\mathrm{v}(\Delta) -\mathrm{v}(\bigcup_{\sigma\in\Sigma_{\Delta}[2]}\overline{\mathrm{sail}}_{ \sigma})+\sum_{\sigma\in\Sigma_{\Delta}[2]}\sum_{i=1}^{k_{\sigma}-1}\mathrm{v} (\mathrm{conv}[m_{\sigma},i])\,.\] Comparing this identity with the one in Theorem 3.1, it suffices to show \[12=\mathrm{v}\left(\Delta^{*}\right)+\mathrm{v}(\bigcup_{\sigma\in\Sigma_{ \Delta}[2]}\overline{\mathrm{sail}}_{\sigma})-\sum_{\sigma\in\Sigma_{\Delta}[ 2]}\sum_{i=1}^{k_{\sigma}-1}\mathrm{v}(\mathrm{conv}[m_{\sigma},i])\,. \tag{2}\] We will consider the union \(\cup_{\sigma\in\Sigma_{\Delta[2]}}\overline{\operatorname{sail}}_{\sigma}\) of all closed sails as a non-convex polygon and denote it by \(\Delta_{\operatorname{sails}}\). Furthermore, we associate with it a fan \(\Sigma_{\Delta_{\operatorname{sails}}}\) having rays that are spanned by the boundary lattice points of \(\Delta_{\operatorname{sails}}\). Note that this fan is unimodular, as all cones are unimodular cones by construction. In addition, this fan is _complete_, meaning the union of its cones is the whole space \(\mathbb{R}^{2}\). For such a fan, every \(1\)-dimensional cone \(\tau\) with primitive ray generator \(v\) is contained in precisely two \(2\)-dimensional cones \(\sigma_{l}=\operatorname{conv}(v,v_{l})\) and \(\sigma_{r}=\operatorname{conv}(v,v_{r})\) of this fan, where \(v_{l}\) and \(v_{r}\) are also primitive ray generators. 
Moreover, there exists a unique integer \(a_{\tau}\) such that \(v_{l}+v_{r}=a_{\tau}v\). Now we apply the following **Theorem 3.6** (Pvv00, Subsection 8.1).: _Let \(\Sigma\) be a complete unimodular fan in \(\mathbb{R}^{2}\). Then_ \[12=\sum_{\tau\in\Sigma[1]}(3-a_{\tau})\,.\] Combining this theorem with the \(2\)-dimensional property \[\operatorname{v}(\bigcup_{\sigma\in\Sigma_{\Delta[2]}}\overline{\operatorname {sail}}_{\sigma})=\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}1\] and Equation (2), we arrive at \[\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}(2-a_{\tau})= \operatorname{v}\left(\Delta^{*}\right)-\sum_{\sigma\in\Sigma_{\Delta}[2]} \sum_{i=1}^{k_{\sigma}-1}\operatorname{v}(\operatorname{conv}[m_{\sigma},i])\,.\] In order to verify this identity, we again use the fact that \(\Sigma_{\Delta_{\operatorname{sails}}}\) is a complete unimodular fan in \(\mathbb{R}^{2}\). The reasoning at the end of Section 8.1 in [4] can be applied to our case and states in particular that the sum \[\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}(2-a_{\tau})\] equals the sum of signed lengths of dual edges \(\tau^{*}\) corresponding to \(\tau\). Furthermore, the proof also shows that the sum of signed lengths of dual edges can be expressed as a sum of signed volumes. In particular, \[\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}(2-a_{\tau})=\sum_{ \tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}\det(\tau^{*})\,,\] where \(\det(\tau^{*})\) is the determinant of the \(2\times 2\) matrix with the two vertices of \(\tau^{*}\) as columns (respecting the direction of the edges in the chain) so that \(|\det(\tau^{*})|=\operatorname{v}(\operatorname{conv}(0,\tau^{*}))\). It remains to deduce \[\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}\det(\tau^{*})= \operatorname{v}\left(\Delta^{*}\right)-\sum_{\sigma\in\Sigma_{\Delta}[2]} \sum_{i=1}^{k_{\sigma}-1}\operatorname{v}(\operatorname{conv}[m_{\sigma},i])\,. \tag{3}\] Let \(\gamma\) be the closed curve corresponding to the chain of dual edges \(\tau^{*}\) whose orientation is induced by the signs. Observe that \(\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}\det(\tau^{*})=\int_{ \gamma}\alpha\), where \(\alpha\) is a \(1\)-form such that \(\frac{1}{2}d\alpha\) is the standard volume form on \(\mathbb{R}^{2}\), see literature on differential forms, _e.g._, [11, Section 37.3]. We split the curve \(\gamma\) into simple closed curves \(\gamma_{0}\) and \(\gamma_{\sigma}\) for \(\sigma\in\Sigma_{\Delta}[2]\): \(\gamma_{0}\) runs through the boundary of \(\Delta^{*}\) and \(\gamma_{\sigma}\) through \(m_{\sigma},m_{\sigma,1},\ldots,m_{\sigma,k_{\sigma}},m_{\sigma}\). The integral splits into a sum of integrals over these simple closed curves, where \(\int_{\gamma_{0}}\alpha=\mathrm{v}(\Delta^{*})\) and \(\int_{\gamma_{\sigma}}\alpha\) is the negative normalized volume of the area bounded by \(\gamma_{\sigma}\) (the winding number is \(-1\)). This area is subdivided into the triangles \(\mathrm{conv}[m_{\sigma},i]\) (cf. Definition 3.3). This shows Equation (3) and thus finishes our combinatorial proof of the identity in Theorem 3.1. **Example 3.7**.: We continue with the LDP polygon \(\Delta\) studied in Example 3.2 and 3.5 and consider the dual polygon to \(\cup_{\sigma^{\prime}\in\Sigma_{\Delta}[2]}\overline{\mathrm{sail}}_{\sigma^ {\prime}}\) (cf. Figure 3). 
Equation (3) holds because \[\sum_{\tau\in\Sigma_{\Delta_{\operatorname{sails}}}[1]}\det(\tau^{*})=4+2-1-1+1=6-0.5-0.5=\mathrm{v}\left(\Delta^{*}\right)-\sum_{\sigma\in\Sigma_{\Delta}[2]}\sum_{i=1}^{k_{\sigma}-1}\mathrm{v}(\mathrm{conv}[m_{\sigma},i])\,.\] ## 4. Proving the cone-wise identity We distinguish two cases in our proof of Theorem 3.4. For unimodular cones, we easily see that the left hand side and the right hand side of Equation (1) both vanish. For non-unimodular cones, a combinatorial proof by induction will be given in the rest of this section. Our reasoning is based on the fact that for any non-unimodular cone \(\sigma\) with primitive ray generators \(u_{1}\) and \(u_{2}\) all interior lattice points of the fundamental parallelogram \(\Pi(u_{1},u_{2})\) can be generated by some vector \(w=\frac{1}{V}(au_{1}+u_{2})\), where \(a\in\mathbb{N}\) and \(V:=\mathrm{v}(\sigma)=\mathrm{v}(\Pi(u_{1},u_{2}))/2\). More precisely, these lattice points are represented in the \((u_{1},u_{2})\)-basis as \[\lfloor iw\rfloor=\frac{1}{V}((ia\,\mathrm{mod}\,V)u_{1}+iu_{2}) \tag{4}\] for \(i=0,1,\ldots,V\), see Figure 4. Our main idea is to argue by induction over the volume \(V=\mathrm{v}(\sigma)\) of a cone \(\sigma\) with primitive ray generators \(u_{1}\) and \(u_{2}\), _i.e._, \(\sigma=\mathrm{cone}(u_{1},u_{2})\). For \(V=1\) the cone is unimodular and thus Theorem 3.4 holds. We proceed by assuming that we are given a non-unimodular cone \(\sigma\) with volume \(V>1\). Every such cone \(\sigma\) can be constructed from some other cone \(\widehat{\sigma}\) with primitive ray generators \(\widehat{u}_{1}\) and \(\widehat{u}_{2}\) and strictly smaller volume, _i.e._, \(\widehat{V}:=\mathrm{v}(\widehat{\sigma})<\mathrm{v}(\sigma)=V\). In order to determine this cone \(\widehat{\sigma}\), we consider the three consecutive lattice points \(u_{1}\), \(w\), and \(v\) on the boundary of \(\mathrm{sail}_{\sigma}\). As the two cones \(\mathrm{cone}(u_{1},w)\) and \(\mathrm{cone}(w,v)\) are unimodular, we deduce that \(u_{1}+v=\lambda w\) for some \(\lambda\in\mathbb{N}\) with \(\lambda\geq 2\). If \(\lambda>2\), we define \(\widehat{\sigma}\) to be the cone generated by \(\widehat{u}_{1}:=u_{1}-w\) and \(\widehat{u}_{2}:=u_{2}\). Note that \(\widehat{\sigma}\) has the same lattice points in its sail as \(\sigma\) except \(u_{1}\) (which is replaced by \(u_{1}-w\)) and has strictly smaller volume, see Figure 4(A) for an illustration. If \(\lambda=2\), the three lattice points \(u_{1}\), \(w\), and \(v\) are collinear and we define \(\widehat{\sigma}\) to be the cone generated by \(\widehat{u}_{1}:=w\) and \(\widehat{u}_{2}:=u_{2}\). Note that \(\overline{\operatorname{sail}}_{\widehat{\sigma}}\cap N=\overline{\operatorname{sail}}_{\sigma}\cap N\) and \(\widehat{\sigma}\) has strictly smaller volume than \(\sigma\), see Figure 4(B). Figure 4: **Induction illustration of both cone generation operations.** Both cases show the fundamental parallelograms \(\Pi(u_{1},u_{2})\) of the given cone \(\sigma\) and \(\Pi(\widehat{u}_{1},\widehat{u}_{2})\) of the cone \(\widehat{\sigma}\) from which \(\sigma\) is constructed, together with the generating vectors \(w\) and \(\widehat{w}\) for the respective base changes. (A) Case I: \(V=8,a=3\) and \(\widehat{V}=5,\widehat{a}=3\). (B) Case II: \(V=5,a=3\) and \(\widehat{V}=3,\widehat{a}=1\).
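The pair \((V,a)\) of Equation (4) is easy to compute in practice. The following sketch (ours, not part of the paper) does so for the cone of Example 3.5, \(\sigma=\operatorname{cone}((3,2),(-1,2))\), recovering exactly the values \(V=8\), \(a=3\) of Figure 4(A); the brute-force search for \(a\) is our own illustrative choice.

```python
from math import gcd

u1, u2 = (3, 2), (-1, 2)                   # primitive ray generators of sigma
V = abs(u1[0] * u2[1] - u1[1] * u2[0])     # normalized volume v(sigma), here 8

# Find a in [0, V) such that w = (a*u1 + u2)/V is a lattice point.
a = next(i for i in range(V)
         if (i * u1[0] + u2[0]) % V == 0 and (i * u1[1] + u2[1]) % V == 0)
assert gcd(a, V) == 1                      # here a = 3, cf. Figure 4(A)

# Lattice points floor(i*w) of Equation (4): their (u1,u2)-coordinates,
# multiplied by V, are ((i*a mod V), i).
pts = [((i * a) % V, i) for i in range(V)]
print(V, a, pts)
```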
Therefore, every non-unimodular cone \(\sigma\) can be obtained from some other cone \(\widehat{\sigma}=\operatorname{cone}(\widehat{u}_{1},\widehat{u}_{2})\) with strictly smaller volume by one of the following two operations: I. \[\widehat{\sigma}\to\sigma=\operatorname{cone}(\widehat{u}_{1}+ \widehat{w},\widehat{u}_{2}), \widehat{u}_{1}\mapsto u_{1}=\widehat{u}_{1}+\widehat{w}, \widehat{u}_{2}\mapsto u_{2}=\widehat{u}_{2}\,,\] II. \[\widehat{\sigma}\to\sigma=\operatorname{cone}(2\widehat{u}_{1}- \widehat{w},\widehat{u}_{2}), \widehat{u}_{1}\mapsto u_{1}=2\widehat{u}_{1}-\widehat{w}, \widehat{u}_{2}\mapsto u_{2}=\widehat{u}_{2}\,,\] where \(\widehat{w}\) is defined analogously as \(w\) above, see Figure 4. By construction, we can easily deduce the properties I. \[\widehat{V}\mapsto V=\widehat{V}+\widehat{a}, a=\widehat{a}, w=\widehat{w}\,,\] (5) II. \[\widehat{V}\mapsto V=2\widehat{V}-\widehat{a}, a=\widehat{V}, w=\widehat{u}_{1}\,.\] (6) As induction hypothesis we assume that Equation (1) holds for all cones \(\widehat{\sigma}\) with volume \(\widehat{V}\) strictly smaller than \(V\). Therefore, it suffices to show that Equation (1) still holds when the cone \(\widehat{\sigma}\) is changed to \(\sigma\). We consider the left hand side (LHS) and the right hand side (RHS) of Equation (1) separately and determine the differences of the new and old values associated to \(\sigma\) and \(\widehat{\sigma}\), that is \(\operatorname{LHS}-\widehat{\operatorname{LHS}}\) and \(\operatorname{RHS}-\widehat{\operatorname{RHS}}\), respectively. Comparing these differences, we will see that they coincide. This finishes the induction step. ### Left hand side of Equation (1) First, we will express the left hand side of Equation (1) in terms of the volume and the sawtooth function. **Definition 4.1** (Rf72, Chapter 1, Introduction).: Let \(x\) be a rational number. Then \[(\!(x)\!):=\left\{\begin{array}{ll}x-[x]-1/2&\text{if }x\in\mathbb{Q} \setminus\mathbb{Z},\\ 0&\text{if }x\in\mathbb{Z}\end{array}\right.\] defines the _sawtooth function_ of period \(1\), where \([x]\) denotes the greatest integer not exceeding \(x\). Given integers \(h,k\) with \(\gcd(h,k)=1\) and \(k\geq 1\), the _Dedekind sum_ is defined as \[\operatorname{s}(h,k):=\sum_{i=1}^{k}\left(\!\!\left(\frac{hi}{k}\right)\! \right)\left(\!\left(\frac{i}{k}\right)\!\right)\,.\] **Remark 4.2**.: Let \(h,k,m\), and \(i\) be integers. By the periodicity of the sawtooth function we immediately see that \(\left(\!\!\left(\frac{(h-mk)i}{k}\right)\!\right)=\left(\!\!\left(\frac{hi}{k} \right)\!\right)\). Thus \[\operatorname{s}(h,k)=\operatorname{s}(h-mk,k)\ \text{ and }\ \operatorname{s}(-h,k)=- \operatorname{s}(h,k)\,,\] where the last equation holds since the sawtooth function is odd, that is \((\!(-x)\!)=-(\!(x)\!)\)[12, Chapter 3, Elementary Properties]. **Lemma 4.3** [12, Chapter 2, Lemma 2, Theorem 1].: _Let \(h\) and \(k\) be two integers with \(\gcd(h,k)=1\). 
Then_ \[\operatorname{s}(1,k)=-\frac{1}{4}+\frac{1}{6k}+\frac{k}{12}=\frac{1}{12k}(k-1)(k-2)\] _and_ \[\operatorname{s}(h,k)+\operatorname{s}(k,h)=-\frac{1}{4}+\frac{1}{12}\left(\frac{h}{k}+\frac{1}{hk}+\frac{k}{h}\right)\,.\] **Lemma 4.4**.: _For every \(2\)-dimensional cone \(\sigma\) with primitive ray generators \(u_{1}\) and \(u_{2}\), we have_ \[12\sum_{n\in\nabla_{\sigma}^{\circ}\cap N}(\kappa(n)+1)^{2}=\frac{(V-1)(V-2)}{V}+12\cdot\operatorname{s}(a,V)\,, \tag{7}\] _where \(V=\operatorname{v}(\sigma)\) and \(a\in\mathbb{N}\) is such that all interior lattice points of \(\Pi(u_{1},u_{2})\) are generated by \(w=\frac{1}{V}(au_{1}+u_{2})\)._ Proof.: Without loss of generality, we restrict ourselves to non-unimodular cones \(\sigma=\operatorname{cone}(u_{1},u_{2})\). Since \(u_{1}\) and \(u_{2}\) are primitive and \(Vw=au_{1}+u_{2}\), we have \(\gcd(V,a)=1\). Furthermore, we denote by \((u_{1}^{*},u_{2}^{*})\) the dual basis to \((u_{1},u_{2})\) with respect to the standard scalar product. Observe that \(\kappa=-u_{1}^{*}-u_{2}^{*}\) by construction. Therefore, we get \[2\sum_{n\in\nabla_{\sigma}^{\circ}\cap N}(\kappa(n)+1)^{2}=\sum_{i=1}^{V-1}(\kappa(\lfloor iw\rfloor)+1)^{2}\] which follows from the symmetry of \(\kappa\). Furthermore, as \(\kappa=-u_{1}^{*}-u_{2}^{*}\) and \(\gcd(V,a)=1\), we deduce from Equation (4) and Definition 4.1 that \[2V\left(1+\kappa(\lfloor iw\rfloor)+\left(\!\left(\frac{ai}{V}\right)\!\right)\!\right)=2V-2(ia\operatorname{mod}V)-2i+2ai-2V\left[\frac{ai}{V}\right]-V=V-2i+2\left(ai-(ia\operatorname{mod}V)-V\left[\frac{ai}{V}\right]\right)=V-2i\] holds for \(i=1,\ldots,V-1\). This yields \[\kappa(\lfloor iw\rfloor)+1=-\left(\!\left(\frac{ai}{V}\right)\!\right)+\frac{1}{2}-\frac{i}{V}\,.\] As \(\gcd(V,a)=1\), the set of values of \(\left(\!\left(\frac{ai}{V}\right)\!\right)^{2}\) for \(i=1,\ldots,V-1\) agrees with the set of values of \(\left(\!\left(\frac{j}{V}\right)\!\right)^{2}\) for \(j=1,\ldots,V-1\). Therefore, \[\sum_{i=1}^{V-1}\left(\!\left(\frac{ai}{V}\right)\!\right)^{2}=\sum_{j=1}^{V-1}\left(\!\left(\frac{j}{V}\right)\!\right)^{2}=\operatorname{s}(1,V)=\frac{1}{12V}(V-1)(V-2)\,,\] where we used Definition 4.1, Lemma 4.3, and the fact \(\left(\!\left(\frac{V}{V}\right)\!\right)=0\) for the second equality. It is well known that \(\sum_{i=1}^{V-1}\left(\!\left(\frac{ai}{V}\right)\!\right)=0\) (as \(\left(\!\left(\frac{ai}{V}\right)\!\right)+\left(\!\left(\frac{a(V-i)}{V}\right)\!\right)=0\) for \(i=1,\ldots,V-1\) if \(\gcd(V,a)=1\), analogously to Remark 4.2). Thus, \[\sum_{i=1}^{V-1}\frac{i}{V}\left(\!\left(\frac{ai}{V}\right)\!\right)=\sum_{i=1}^{V-1}\left(\frac{i}{V}-\left[\frac{i}{V}\right]-\frac{1}{2}\right)\left(\!\left(\frac{ai}{V}\right)\!\right)=\sum_{i=1}^{V-1}\left(\!\left(\frac{i}{V}\right)\!\right)\left(\!\left(\frac{ai}{V}\right)\!\right)=\operatorname{s}(a,V)\,.\] Combining everything, we obtain \[\sum_{i=1}^{V-1}(\kappa(\lfloor iw\rfloor)+1)^{2}=\sum_{i=1}^{V-1}\left(-\left(\!\left(\frac{ai}{V}\right)\!\right)+\frac{1}{2}-\frac{i}{V}\right)^{2}=\sum_{i=1}^{V-1}\left(\!\left(\frac{ai}{V}\right)\!\right)^{2}-\sum_{i=1}^{V-1}\left(\!\left(\frac{ai}{V}\right)\!\right)+2\sum_{i=1}^{V-1}\frac{i}{V}\left(\!\left(\frac{ai}{V}\right)\!\right)-\frac{1}{V}\sum_{i=1}^{V-1}i+\frac{1}{4}(V-1)+\frac{1}{V^{2}}\sum_{i=1}^{V-1}i^{2}=\frac{1}{12V}(V-1)(V-2)+2\operatorname{s}(a,V)+\frac{1}{12V}(V-1)(V-2)=\frac{1}{6V}(V-1)(V-2)+2\operatorname{s}(a,V)\,.\] In the following, we will consider the difference \(\operatorname{LHS}-\widehat{\operatorname{LHS}}\) for Case I and II separately: **Case I.** Using Lemma 4.4 and Equation (5), we get \[\operatorname{LHS}-\widehat{\operatorname{LHS}}=\frac{1}{\widehat{V}+\widehat{a}}(\widehat{V}+\widehat{a}-1)(\widehat{V}+\widehat{a}-2)+12\operatorname{s}(\widehat{a},\widehat{V}+\widehat{a})-\frac{1}{\widehat{V}}(\widehat{V}-1)(\widehat{V}-2)-12\operatorname{s}(\widehat{a},\widehat{V})\,.\] The reciprocity law (Lemma 4.3) and the periodicity for Dedekind sums (Remark 4.2) imply \[12\,\mathrm{s}(\widehat{a},\widehat{V}+\widehat{a})=12\left(-\,\mathrm{s}(\widehat{V}+\widehat{a},\widehat{a})-\frac{1}{4}+\frac{1}{12}\left(\frac{\widehat{a}}{\widehat{V}+\widehat{a}}+\frac{1}{\widehat{a}(\widehat{V}+\widehat{a})}+\frac{\widehat{V}+\widehat{a}}{\widehat{a}}\right)\right)=-12\,\mathrm{s}(\widehat{V},\widehat{a})-3+\frac{\widehat{a}}{\widehat{V}+\widehat{a}}+\frac{1}{\widehat{a}(\widehat{V}+\widehat{a})}+\frac{\widehat{V}+\widehat{a}}{\widehat{a}}=12\left(\mathrm{s}(\widehat{a},\widehat{V})+\frac{1}{4}-\frac{1}{12}\left(\frac{\widehat{a}}{\widehat{V}}+\frac{1}{\widehat{a}\widehat{V}}+\frac{\widehat{V}}{\widehat{a}}\right)\right)-3+\frac{\widehat{a}}{\widehat{V}+\widehat{a}}+\frac{1}{\widehat{a}(\widehat{V}+\widehat{a})}+\frac{\widehat{V}+\widehat{a}}{\widehat{a}}=12\,\mathrm{s}(\widehat{a},\widehat{V})+\frac{-\widehat{a}^{2}+\widehat{a}\widehat{V}+\widehat{V}^{2}-1}{\widehat{V}(\widehat{V}+\widehat{a})}\,.\] Simplifying \[\frac{1}{\widehat{V}+\widehat{a}}(\widehat{V}+\widehat{a}-1)(\widehat{V}+\widehat{a}-2)=\frac{1}{\widehat{V}}(\widehat{V}-1)(\widehat{V}-2)+\frac{\widehat{a}(\widehat{a}\widehat{V}+\widehat{V}^{2}-2)}{\widehat{V}(\widehat{V}+\widehat{a})}\,,\] we arrive at \[\mathrm{LHS}-\widehat{\mathrm{LHS}}=\frac{(\widehat{a}+1)(\widehat{V}-1)(\widehat{V}+\widehat{a}+1)}{\widehat{V}(\widehat{V}+\widehat{a})}=(\widehat{a}+1)\left(1-\frac{\widehat{a}+1}{\widehat{V}(\widehat{V}+\widehat{a})}\right)\,. \tag{8}\] **Case II.** Using Lemma 4.4 and Equation (6), we similarly obtain \[\mathrm{LHS}-\widehat{\mathrm{LHS}}=\frac{1}{2\widehat{V}-\widehat{a}}(2\widehat{V}-\widehat{a}-1)(2\widehat{V}-\widehat{a}-2)+12\,\mathrm{s}(\widehat{V},2\widehat{V}-\widehat{a})-\frac{1}{\widehat{V}}(\widehat{V}-1)(\widehat{V}-2)-12\,\mathrm{s}(\widehat{a},\widehat{V})\,.\] Again, the reciprocity law (Lemma 4.3) and elementary properties of Dedekind sums (Remark 4.2) imply \[12\,\mathrm{s}(\widehat{V},2\widehat{V}-\widehat{a})=-12\,\mathrm{s}(2\widehat{V}-\widehat{a},\widehat{V})-3+\frac{\widehat{V}}{2\widehat{V}-\widehat{a}}+\frac{1}{\widehat{V}(2\widehat{V}-\widehat{a})}+\frac{2\widehat{V}-\widehat{a}}{\widehat{V}}=12\,\mathrm{s}(\widehat{a},\widehat{V})-3+\frac{\widehat{V}^{2}+(2\widehat{V}-\widehat{a})^{2}+1}{\widehat{V}(2\widehat{V}-\widehat{a})}\,.\] As \[\frac{1}{2\widehat{V}-\widehat{a}}(2\widehat{V}-\widehat{a}-1)(2\widehat{V}-\widehat{a}-2)=\frac{1}{\widehat{V}}(\widehat{V}-1)(\widehat{V}-2)+\frac{(\widehat{V}-\widehat{a})(-\widehat{a}\widehat{V}+2\widehat{V}^{2}-2)}{\widehat{V}(2\widehat{V}-\widehat{a})}\,,\] we finally obtain \[\mathrm{LHS}-\widehat{\mathrm{LHS}}=\frac{(\widehat{V}+1)(\widehat{V}-\widehat{a}-1)(2\widehat{V}-\widehat{a}-1)}{\widehat{V}(2\widehat{V}-\widehat{a})}=(\widehat{V}-\widehat{a}-1)\left(1+\frac{\widehat{V}-\widehat{a}-1}{\widehat{V}(2\widehat{V}-\widehat{a})}\right)\,. \tag{9}\] ### Right hand side of Equation (1) The two summands \[\mathrm{v}\left(\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma}\right)\ \ \text{and}\ \ \sum_{i=1}^{k_{\sigma}-1}\mathrm{v}(\mathrm{conv}[m_{\sigma},i])\] on the right hand side of Equation (1) will be considered separately for each case. **Case I.** By our construction, we have \[\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma}=\overline{((\nabla_{\widehat{\sigma}}\setminus\mathrm{sail}_{\widehat{\sigma}})\cup\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{u}_{2}))\setminus\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{w})}\] as illustrated in Figure 5. Since \[\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{w})\subseteq(\nabla_{\widehat{\sigma}}\setminus\mathrm{sail}_{\widehat{\sigma}})\cup\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{u}_{2})\] and \[\mathrm{v}\left((\nabla_{\widehat{\sigma}}\setminus\mathrm{sail}_{\widehat{\sigma}})\cap\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{u}_{2})\right)=0\,,\] we obtain \[\mathrm{v}\left(\nabla_{\sigma}\setminus\mathrm{sail}_{\sigma}\right)-\mathrm{v}\left(\nabla_{\widehat{\sigma}}\setminus\mathrm{sail}_{\widehat{\sigma}}\right)=\mathrm{v}\left(\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{u}_{2})\right)-\mathrm{v}\left(\mathrm{conv}(\widehat{u}_{1},\widehat{u}_{1}+\widehat{w},\widehat{w})\right)=\det(\widehat{w},\widehat{u}_{2}-\widehat{u}_{1})-\det(\widehat{w},\widehat{w}-\widehat{u}_{1})=\det(\widehat{w},\widehat{u}_{2}-\widehat{w})=\det(\widehat{w},\widehat{u}_{2})=\frac{\widehat{a}}{\widehat{V}}\cdot\widehat{V}=\widehat{a}\] by utilizing \(\det(\widehat{u}_{1},\widehat{u}_{2})=\widehat{V}\). For the second summand of the right hand side of Equation (1), we need to determine how the involved functionals behave when we change from \(\widehat{\sigma}\) to \(\sigma\). Recall \(m_{\widehat{\sigma}}=-\widehat{u}_{1}^{*}-\widehat{u}_{2}^{*}\).
Since \(u_{1}=\widehat{u}_{1}+\ \widehat{w}\) and \(u_{2}=\widehat{u}_{2}\), we have \[m_{\sigma}=-\frac{\widehat{V}-1}{\widehat{V}+\widehat{a}}\widehat{u}_{1}^{*}- \widehat{u}_{2}^{*}=m_{\widehat{\sigma}}+\frac{\widehat{a}+1}{\widehat{V}+ \widehat{a}}\widehat{u}_{1}^{*}\,.\] As explained in Section 3, the functionals \(m_{\widehat{\sigma},1},m_{\widehat{\sigma},2},\ldots,m_{\widehat{\sigma},k_{ \sigma}}\) may be associated with edges of \(\overline{\operatorname{sail}}_{\widehat{\sigma}}\) not incident to \(0\) (and similarly for \(\overline{\operatorname{sail}}_{\sigma}\)). We enumerate these edges of the sail starting from the edge incident to \(\widehat{u}_{2}\) (and finishing with the edge incident to \(\widehat{u}_{1}\)). By construction, \(\overline{\operatorname{sail}}_{\sigma}\) has the same edges as \(\overline{\operatorname{sail}}_{\widehat{\sigma}}\), except for the last edge connecting \(\widehat{w}\) to \(\widehat{u}_{1}\) and \(\widehat{w}\) to \(\widehat{u}_{1}+\ \widehat{w}\), respectively (cf. Figure 5A). Accordingly, this is also true for the functionals, except that \(m_{\widehat{\sigma},k_{\widehat{\sigma}}}\) is replaced by \(m_{\sigma,k_{\sigma}}=-\widehat{V}\widehat{u}_{2}^{*}\) (cf. Figure 6). As \[\langle m_{\widehat{\sigma},1},\widehat{u}_{2}\rangle=-1\ \ \text{and}\ \ \langle m_{\widehat{\sigma},1},\frac{1}{\widehat{V}}\widehat{u}_{1}+\frac{ \widehat{b}}{\widehat{V}}\widehat{u}_{2}\rangle=-1\,,\] we have \[m_{\widehat{\sigma},1}=(\widehat{b}-\widehat{V})\widehat{u}_{1}^{*}-\widehat{ u}_{2}^{*}\,, \tag{10}\] where \(\widehat{b}\in[1,V]\) is the multiplicative inverse of \(\widehat{a}\) modulo \(\widehat{V}\), _i.e._, \(\widehat{a}\cdot\widehat{b}=1\ \text{mod}\ \widehat{V}\). Furthermore, \[m_{\widehat{\sigma},k_{\widehat{\sigma}}}=-\widehat{u}_{1}^{*}+(\widehat{a}- \widehat{V})\widehat{u}_{2}^{*} \tag{11}\] because \(\langle m_{\widehat{\sigma},k_{\widehat{\sigma}}},\widehat{u}_{1}\rangle=-1\) and \(\langle m_{\widehat{\sigma},k_{\widehat{\sigma}}},\widehat{w}\rangle=\langle m _{\widehat{\sigma},k_{\widehat{\sigma}}},\widehat{\frac{a}{\widehat{V}}} \widehat{u}_{1}+\frac{1}{\widehat{V}}\widehat{u}_{2}\rangle=-1\). Additionally, \(m_{\widehat{\sigma}}\), \(m_{\sigma}\) and \(m_{\widehat{\sigma},1}\) are collinear (as they all take value \(-1\) on \(\widehat{u}_{2}\)), see Figure 6. Therefore, with \(k_{\widehat{\sigma}}=k_{\sigma}\), we have \[\bigcup_{i=1}^{k_{\widehat{\sigma}}-1}\operatorname{conv}[m_{\widehat{\sigma }},i]\subseteq\bigcup_{i=1}^{k_{\sigma}-1}\operatorname{conv}[m_{\sigma},i]\,.\] Hence, we get (cf. 
Figure 6) \[\sum_{i=1}^{k_{\sigma}-1}\operatorname{v}(\operatorname{conv}[m_ {\sigma},i])-\sum_{i=1}^{k_{\widehat{\sigma}}-1}\operatorname{v}( \operatorname{conv}[m_{\widehat{\sigma}},i])\] \[=\operatorname{v}(\operatorname{conv}(m_{\sigma},m_{\widehat{ \sigma}},m_{\widehat{\sigma},k_{\widehat{\sigma}}}))+\operatorname{v}( \operatorname{conv}(m_{\sigma},m_{\widehat{\sigma},k_{\widehat{\sigma}}},m_{ \sigma,k_{\sigma}}))\] \[=\det(m_{\widehat{\sigma},k_{\widehat{\sigma}}}-m_{\widehat{ \sigma}},m_{\sigma}-m_{\widehat{\sigma}})+\det(m_{\sigma,k_{\sigma}}-m_{ \widehat{\sigma},k_{\widehat{\sigma}}},m_{\sigma}-m_{\widehat{\sigma},k_{ \widehat{\sigma}}})\] \[=\det\left((\widehat{a}+1-\widehat{V})\widehat{u}_{2}^{*},\frac{ \widehat{a}+1}{\widehat{V}+\widehat{a}}\widehat{u}_{1}^{*}\right)+\det\left( \widehat{u}_{1}^{*}-\widehat{a}\widehat{u}_{2}^{*},(\widehat{V}-\widehat{a}-1 )\widehat{u}_{2}^{*}+\frac{\widehat{a}+1}{\widehat{V}+\widehat{a}}\widehat{u}_ {1}^{*}\right)\] \[=-\frac{(\widehat{V}-\widehat{a}-1)(\widehat{a}+1)}{\widehat{V}+ \widehat{a}}\det(\widehat{u}_{2}^{*},\widehat{u}_{1}^{*})-\frac{\widehat{a}( \widehat{a}+1)}{\widehat{V}+\widehat{a}}\det(\widehat{u}_{2}^{*},\widehat{u}_ {1}^{*})+(\widehat{V}-\widehat{a}-1)\det(\widehat{u}_{1}^{*},\widehat{u}_{2}^ {*})\] \[=\frac{(\widehat{V}-\widehat{a}-1)(\widehat{a}+1)}{\widehat{V}( \widehat{V}+\widehat{a})}+\frac{\widehat{a}(\widehat{a}+1)}{\widehat{V}( \widehat{V}+\widehat{a})}+\frac{\widehat{V}-\widehat{a}-1}{\widehat{V}}=1- \frac{(\widehat{a}+1)^{2}}{\widehat{V}(\widehat{V}+\widehat{a})}\,.\] Combining everything yields \[\operatorname{RHS}-\widehat{\operatorname{RHS}}=\widehat{a}+1-\frac{( \widehat{a}+1)^{2}}{\widehat{V}(\widehat{V}+\widehat{a})},\] which coincides with the corresponding difference \(\operatorname{LHS}-\widehat{\operatorname{LHS}}\) in Equation (8). **Case II.** By our assumption (as illustrated in Figure 7), we have \[\nabla_{\sigma}\setminus\operatorname{sail}_{\sigma}=(\nabla_{\widehat{\sigma}} \setminus\operatorname{sail}_{\widehat{\sigma}})\cup\operatorname{conv}( \widehat{u}_{1},\widehat{u}_{2},2\widehat{u}_{1}-\widehat{w})\,.\] Using again \(\det(\widehat{u}_{1},\widehat{u}_{2})=\widehat{V}\), we obtain \[v\left(\nabla_{\sigma}\setminus\operatorname{sail}_{\sigma} \right)-v\left(\nabla_{\widehat{\sigma}}\setminus\operatorname{sail}_{ \widehat{\sigma}}\right)=v\left(\operatorname{conv}(\widehat{u}_{1},\widehat{u }_{2},2\widehat{u}_{1}-\widehat{w})\right)=\det(\widehat{u}_{1}-\widehat{w}, \widehat{u}_{2}-\widehat{u}_{1})\\ =\det\left(\left(1-\frac{\widehat{a}}{\widehat{V}}\right) \widehat{u}_{1}-\frac{1}{\widehat{V}}\widehat{u}_{2},\widehat{u}_{2}-\widehat{ u}_{1}\right)=\left(1-\frac{\widehat{a}}{\widehat{V}}\right)\cdot\widehat{V}- \frac{1}{\widehat{V}}\cdot\widehat{V}=\widehat{V}-\widehat{a}-1\,.\] As in Case I, we need to determine how the functionals involved in the second summand of the RHS of Equation (1) behave when we change from \(\widehat{\sigma}\) to \(\sigma\). First recall that \(m_{\widehat{\sigma}}=-\widehat{u}_{1}^{*}-\widehat{u}_{2}^{*}\). 
As \(u_{1}=2\widehat{u}_{1}-\widehat{w}\) and \(u_{2}=\widehat{u}_{2}\), we have \[m_{\sigma}=-\frac{\widehat{V}+1}{2\widehat{V}-\widehat{a}}\widehat{u}_{1}^{*}- \widehat{u}_{2}^{*}=m_{\widehat{\sigma}}+\frac{\widehat{V}-\widehat{a}-1}{2 \widehat{V}-\widehat{a}}\widehat{u}_{1}^{*}\,.\] As above, we enumerate the edges of the sail that are not incident to \(0\) starting from the edge incident to \(\widehat{u}_{2}\) (finishing with the edge incident to \(\widehat{u}_{1}\)). By construction, \(\overline{\operatorname{sail}}_{\sigma}\) has the same edges as \(\overline{\operatorname{sail}}_{\widehat{\sigma}}\) and an additional edge with functional \(m_{\sigma,k_{\widehat{\sigma}}+1}=m_{\widehat{\sigma},k_{\widehat{\sigma}}}\). The other functionals are identical, see Figure 8 for an illustration. As in Case I, the functionals \(m_{\widehat{\sigma},1}\) and \(m_{\widehat{\sigma},k_{\widehat{\sigma}}}\) can be expressed by Equation (10) and (11). Hence, \[\sum_{i=1}^{k_{\sigma}-1}\mathrm{v}(\mathrm{conv}[m_{\sigma},i])- \sum_{i=1}^{k_{\tilde{\sigma}}-1}\mathrm{v}(\mathrm{conv}[m_{\widehat{\sigma}},i])\] \[=\sum_{i=1}^{k_{\tilde{\sigma}}-1}\mathrm{v}(\mathrm{conv}(m_{ \sigma},m_{\widehat{\sigma},i},m_{\widehat{\sigma},i+1}))-\sum_{i=1}^{k_{ \tilde{\sigma}}-1}\mathrm{v}(\mathrm{conv}(m_{\widehat{\sigma}},m_{\widehat{ \sigma},i},m_{\widehat{\sigma},i+1}))\] \[=\mathrm{v}(\mathrm{conv}(m_{\sigma},m_{\widehat{\sigma}},m_{ \widehat{\sigma},k_{\tilde{\sigma}}}))=\det(m_{\sigma}-m_{\widehat{\sigma}},m_ {\widehat{\sigma},k_{\tilde{\sigma}}}-m_{\widehat{\sigma}})\] \[=\det\left(\frac{\widehat{V}-\widehat{a}-1}{2\widehat{V}- \widehat{a}}\widehat{u}_{1}^{*},(\widehat{V}-\widehat{a}-1)\widehat{u}_{2}^{* }\right)=\frac{(\widehat{V}-\widehat{a}-1)^{2}}{2\widehat{V}-\widehat{a}}\det (\widehat{u}_{1}^{*},\widehat{u}_{2}^{*})=\frac{(\widehat{V}-\widehat{a}-1)^{ 2}}{\widehat{V}(2\widehat{V}-\widehat{a})}\,.\] Combining everything yields \[\mathrm{RHS}-\widehat{\mathrm{RHS}}=\widehat{V}-\widehat{a}-1+\frac{( \widehat{V}-\widehat{a}-1)^{2}}{\widehat{V}(2\widehat{V}-\widehat{a})}\,,\] which coincides with the corresponding difference \(\mathrm{LHS}-\widehat{\mathrm{LHS}}\) in Equation (9). ## Acknowledgements The first author is partially supported by the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) - Project-ID 195170736 - TRR109 "Discretization in Geometry and Dynamics". The second and third authors have been partially supported by the Polish-German grant "ATAG - Algebraic Torus Actions: Geometry and Combinatorics" [project number 380241778] of the Deutsche Forschungsgemeinschaft (DFG).
2303.14500
Formalization of Quantum Intermediate Representations for Code Safety
Quantum Intermediate Representation (QIR) is a Microsoft-developed, LLVM-based intermediate representation for quantum program compilers. QIR aims to provide a general solution for quantum program compilers independent of front-end languages and back-end hardware, thus avoiding duplicate development of intermediate representations and compilers. Since it is still under development, QIR is described in natural language and lacks a formal definition, leading to ambiguity in its interpretation and a lack of rigor in implementing quantum functions. In this paper, we provide formal definitions for the data types and instruction sets of QIR, aiming to provide correctness and security guarantees for operations and intermediate code conversions in QIR. To validate our design, we show some samples of unsafe QIR code where errors can be detected by our formal approach.
Junjie Luo, Jianjun Zhao
2023-03-25T15:40:18Z
http://arxiv.org/abs/2303.14500v1
# Formalization of Quantum Intermediate Representations for Code Safety ###### Abstract Quantum Intermediate Representation (QIR) is a Microsoft-developed, LLVM-based intermediate representation for quantum program compilers. QIR aims to provide a general solution for quantum program compilers independent of front-end languages and back-end hardware, thus avoiding duplicate development of intermediate representations and compilers. Since it is still under development, QIR is described in natural language and lacks a formal definition, leading to ambiguity in its interpretation and a lack of rigor in implementing quantum functions. In this paper, we provide formal definitions for the data types and instruction sets of QIR, aiming to provide correctness and safety guarantees for operations and intermediate code conversions in QIR. To validate our design, we show some samples of unsafe QIR code where errors can be detected by our formal approach. ## 1 Introduction The development of Noisy Intermediate-Scale Quantum (NISQ) computers [21] offers new opportunities for the research and development of quantum software. With the increasing development of quantum hardware, quantum processing units (QPUs) are expected to further complement and accelerate existing classical scientific computation workflows, a model called heterogeneous quantum-classical computation. Various programming languages such as Qiskit [1], Cirq [27], Q# [26], Quipper [8], and ProjectQ [28] have been developed to implement such a computational model. Like classical programs, compiled quantum programming languages, such as Q#, require their quantum programs to be compiled by a compiler into an intermediate representation (IR), which is subsequently optimized and transformed for efficient execution on a designated platform. Since an IR has platform-independent features, its optimizer and executable generator can be reused across multiple source languages and generate the corresponding executable code according to the target execution platform. It is often necessary to develop new IRs or extend existing ones to adapt the intermediate representation to quantum properties. Some of the intermediate representations used in quantum programming languages include MLIR [11, 14], SQIR [9], and OpenQASM [5]. Quantum Intermediate Representation (QIR) [7] is a new intermediate representation of quantum programs developed by Microsoft, based on the popular open-source LLVM intermediate language [10]. QIR enables the representation of quantum computing workflows and tasks by specifying a set of rules for representing quantum structures in LLVM without any extensions or modifications. Its goal is to provide a unified and common interface to multiple quantum programming languages and quantum computing platforms, thus facilitating the development of general quantum compilation tools that can be reused in the compiler mechanism. The QIR-based compiler processes the high-level source quantum language by converting it to QIR and handing it over to the target backend for execution (see Figure 1). Since QIR is designed and implemented based on LLVM, the classical LLVM optimization methods can be applied to transform and optimize it. However, QIR is described in natural language and lacks formal semantics. Thus, its execution and the correctness of the optimization and conversion are difficult to rigorously prove, making it difficult to guarantee the correct operation of quantum programs.
In our work, we hope to design a formal method for QIR and develop a verification framework based on this method, so that QIR code can be checked for safety by this framework before it is actually executed. We focus on the core functions of QIR, which play a crucial role in the correct operation of QIR programs. Therefore, in this paper, we prioritize formalizing the syntax and semantics of these core functions so that we can detect possible errors in the execution of QIR's core functions and guarantee that QIR's essential functions can be executed properly. Our contributions can be summarized as follows: * We formalize the syntax of QIR based on the work of Zhao _et al._ [29]. Specifically, we have adapted its abstract syntax to remove LLVM directives and types not used in QIR and to augment it with QIR-specific data types. * We design the semantics of important instructions in QIR, such as the allocation and release of qubits, gate operations, and measurement. These operations constitute the implementation of the most basic quantum program functions, and a formalization based on them allows us to capture unsafe parts of the code. * We design a management model for qubits through which, together with our formal methods, the unsafe parts of QIR code can be captured (e.g., qubit cloning and the use of released qubits). * We validate the effectiveness of our formal approach by applying it to real cases of unsafe QIR code. The rest of the paper is organized as follows. We introduce the background information on LLVM Intermediate Representation (LLVM IR) and QIR in Section 2. We present our formalization of the syntax and semantics of QIR in Section 3. The validation of our formal method with real-world examples is presented in Section 4. We discuss related work in Section 5, and the conclusion is given in Section 6. ## 2 Background This section briefly introduces LLVM IR, the basis of QIR, and QIR itself. Figure 1: The QIR compiler architecture. ### Basic Concepts Unlike classical programs, which use classical bits to store information, quantum programs use quantum bits ("qubits") as the medium for storing data. Compared to a classical bit, which can only be in either the 0 or the 1 state, a qubit can be in both the 0 and 1 states simultaneously, which is called a quantum superposition state. The superposition state of a qubit can be represented by \(\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle\), where \(\left|\cdot\right\rangle\) is the Dirac notation and \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). We cannot directly observe the superposition state of a qubit; instead, we must measure it and obtain the 0-state with probability \(\left|\alpha\right|^{2}\) or the 1-state with probability \(\left|\beta\right|^{2}\). In quantum computing, we usually use quantum logic gates to control the state of the qubits. Some basic gate operations include: * **X gate (NOT gate)**: When a qubit undergoes the X-gate operation, its state changes from \(\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle\) to \(\left|\psi\right\rangle=\beta\left|0\right\rangle+\alpha\left|1\right\rangle\).
We can use the following matrix to represent this operation: \[X=\left[\begin{array}{cc}0&1\\ 1&0\end{array}\right]\] (1) * **Y gate and Z gate**: Similarly, the Y and Z gates have the following expressions: \[Y=\left[\begin{array}{cc}0&-i\\ i&0\end{array}\right]\quad\&\quad Z=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\] (2) * **Hadamard gate (H gate)**: The Hadamard gate can turn a \(\left|0\right\rangle\) state or a \(\left|1\right\rangle\) state into a superposition of \(\left|0\right\rangle\) and \(\left|1\right\rangle\) with equal probability: \[H=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}1&1\\ 1&-1\end{array}\right]\] (3) * **Controlled gate**: Unlike the single-qubit gates described above, which accept only one qubit as input, a controlled gate accepts multiple qubits as input. Specifically, the qubits input to a controlled gate are divided into control qubits and target qubits, and the state of the target qubit is changed only when all control qubits are in the \(\left|1\right\rangle\) state. For example, for the CNOT gate (Controlled NOT gate), the logical expression is: \[\left|A,B\right\rangle\rightarrow\left|A,B\oplus A\right\rangle\] (4) Similarly, the Toffoli gate, also known as the CCNOT gate, possesses two control qubits that together control the flip of the target qubit, with the expression: \[\left|A,B,C\right\rangle\rightarrow\left|A,B,C\oplus AB\right\rangle\] (5) All the above quantum logic gates are reversible, so no information loss occurs during quantum computing. By applying gate operations to qubits, quantum circuits can be composed, and thus quantum algorithms can be implemented. This circuit model is the one most widely used by quantum programs today. One other important operation is measurement. A classical bit in a 0 or 1 state can be obtained by performing a measurement operation on a single qubit. More content related to quantum computing can be found in the book by Nielsen and Chuang [20]. ### LLVM IR The LLVM IR [10] is a high-level static single assignment (SSA) form language used by the LLVM compiler infrastructure as the intermediate representation for source code. It is a representation of the source code that is independent of the source languages and target platforms. It is currently widely used, and there are many conversion and optimization methods available around it. Compilers can perform various optimizations on the code of a program based on this, including common subexpression elimination, dead code elimination, and loop unrolling. These optimizations can improve the performance of the generated machine code, making it run faster and more efficiently. LLVM IR is a strongly typed language, meaning each value has a specific type, such as an integer or floating-point value. Such a design helps prevent type errors and allows the compiler to generate more efficient code. Similar to RISC instructions, LLVM IR is designed in a three-address form. Combined with its strong typing, this allows for both compilation and optimization based on individual compilation units and precise global optimization performed at link time using type information. LLVM IR also includes support for control flow, loops, and other common language constructs, which makes it possible to represent a wide range of programs.
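Before moving on to QIR itself, the gate algebra of the Basic Concepts subsection can be made concrete with a small linear-algebra sketch. This is our own illustration, not code from the paper; all names are illustrative.

```python
import numpy as np

# State vectors and single-qubit gates from Equations (1) and (3).
ket0 = np.array([1.0, 0.0])                    # |0>
X = np.array([[0, 1], [1, 0]], dtype=float)    # NOT gate
H = np.array([[1, 1], [1, -1]], dtype=float) / np.sqrt(2)

psi = H @ ket0                 # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2       # Born-rule measurement probabilities, [0.5, 0.5]

# CNOT on two qubits: flips the target iff the control is |1>, cf. Equation (4).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(psi, ket0)   # (|00> + |11>)/sqrt(2)
print(probs, bell)
```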
To promote QIR and provide solutions for the efficient use of different quantum processors, Microsoft has established the QIR Alliance [2]. QIR avoids the need for creating new compilers for different quantum programming languages by leveraging the existing classical infrastructure of LLVM IR, enabling support for the necessary functions of quantum programs without requiring additional extensions or modifications. Specifically, QIR preserves LLVM primitives such as **call**, **bitcast**, and **getelementptr**, as well as classical data types such as the integer types **i\(sz\)** and the double-precision type **Double**. In order to implement quantum operations, QIR adds two data types to LLVM IR, **%Qubit** and **%Result**, which represent quantum registers and the results of measurements, respectively. There are also two data structures, **%Array** and **%Tuple**, which represent arrays of the same type of data and user-defined structures composed of arrays of multiple types, respectively. All the above four data types are opaque to QIR. To make it easier to express operations on qubits, QIR adds further data types, such as **%Pauli**, which refers to the four Pauli matrices in quantum mechanics. In addition to the data types, QIR also declares quantum-related runtime functions and functions that operate on qubits (e.g., quantum gate operations and measurement operations). These functions are not implemented in QIR and need to be defined and implemented externally. Such a design facilitates the adaptation of QIR to different hardware backends, thus making QIR independent of the hardware backend. QIR's official documentation [15] contains definitions of its instructions, and we will provide a brief explanation of the QIR instructions used in our work. Since the instructions for qubits declared in QIR share the prefixes _@__quantum__rt__ and _@__quantum__qis__, in our work we adopt an abbreviated notation for them to save space (e.g., _@__quantum__rt__qubit__allocate_ is abbreviated as _qubit__allocate_, and _@__quantum__qis__gateop__body_ is abbreviated as _gateop__body_). Figure 2 shows a sample of QIR code, which briefly illustrates the design of QIR. Lines 1 to 3 of the code mark the opaque data types. In lines 5 through 12, the code implements the function of allocating a qubit register and measuring it after applying the H gate. In line 7, the _qubit__allocate_ function is called, which allocates a qubit register. The functions that perform operations on the _Qubit_ are all declared as _gateop__body_, where _gateop_ is the specific operation to be performed; e.g., \(h\) in line 8 refers to the H gate. After the H gate operation in line 8, the measurement operation is called in line 9; it is explicitly defined in lines 14 through 32. After completing the measurement, the qubit is released with _qubit__release_ in line 10, and the measurement result is returned in line 11. ## 3 Methodology In this section, we formalize the syntax and semantics of QIR to support the detection of unsafe code. Since QIR is designed based on LLVM IR, this paper does not consider the formalization of the classical LLVM IR part, which can be done using existing methods [29, 12]. Our formalization of QIR is based on version 0.1 of QIR. ### Syntax of QIR We first present our formalization of the syntax of QIR in Figure 3. Our abstract syntax extends the work of Zhao _et al._ [29], which provides a relatively complete formalization of LLVM IR.
We mainly add to it the data types that are specific to QIR, such as **%Qubit**, **%Result**, **%Array**, etc. Since QIR instructions are invoked through LLVM's **call** instruction, their abstract syntax reduces to that of ordinary function calls bound to an _id_; hence quantum gate operations, measurement operations, etc., are not reflected as separate constructs in the abstract syntax.

Figure 2: Example of QIR.

### Semantics of QIR

We next present our formalization of the semantics of QIR. Since QIR is designed to treat instructions as declarations of quantum runtime functions, the type of each instruction's input and output is explicitly specified. Such a design avoids the nondeterministic semantics found in classical LLVM IR, thus liberating our efforts from cumbersome type checking and allowing us to focus the formalization on checking the correctness of quantum operations. Since the back-end hardware is responsible for implementing quantum operations in QIR, formalizing this implementation process in QIR is not necessary; the guarantee that quantum operations are carried out correctly lies with the back-end hardware. We can therefore assume that the quantum operations themselves are correct and concentrate on further improving the safety of the QIR code.

#### 3.2.1 Allocating and Releasing Qubits

QIR manipulates the corresponding qubits in the back-end registers by feeding **%Qubit\({}^{*}\)** pointers, or **%Array\({}^{*}\)** values composed of them, to the corresponding functions, without having direct access to the **%Qubit** or **%Array** contents (since they are opaque to QIR as well as to LLVM). This makes it difficult for QIR to manage the state of qubits: the compiler cannot know whether they have already been released, which raises a potential safety risk. In this section, we formalize the semantics of the creation of single qubits as well as of qubit arrays, and propose a set of management schemes for qubits and qubit arrays, so that the possible problems of qubit cloning and of using released qubits in QIR can be effectively captured.

Static creation. In QIR, the allocation of qubits can be done both statically and dynamically. For qubits whose identifiers are already determinable at compile time, static qubit values can be generated using LLVM's **inttoptr** instruction. The following example is given in the specification document of QIR:
\[\%\text{qubit3}=\textbf{inttoptr}\ \textbf{i32}\ 3\ \textbf{to}\ \textbf{\%Qubit}^{*}\]
This code refers to qubit number 3 of the real device (or simulator) by converting the value 3 of the **i32** data type to the **%Qubit\({}^{*}\)** type. Since the static creation of qubits uses only classical LLVM IR instructions, its formal semantics is not elaborated in our work.

Dynamic management. Dynamic qubits are allocated and released by quantum runtime methods. To keep track of the qubits and qubit arrays allocated in the QIR code, we introduce two data structures in the framework, a one-dimensional array \(\mathsf{Q}\) and a two-dimensional array \(\mathsf{QA}\), respectively. Our management model is presented in Figure 4, where \(\mathsf{Q}\) stores the \(\boldsymbol{\%\mathbf{Qubit}}^{*}\) pointer of each allocated qubit. The first column of \(\mathsf{QA}\) stores the \(\boldsymbol{\%\mathbf{Array}}^{*}\) pointer of each qubit array, while the corresponding row stores the pointer \(\boldsymbol{\%\mathbf{Qubit}}^{*}\) of each qubit in the array.

Figure 3: The abstract syntax of QIR. The red font shows what is added to the abstract syntax of QIR compared to LLVM IR.
Some of the methods used to manage these two data structures are listed in Table 1. Our formal method is concerned with two possible safety issues in QIR code: the **cloning of qubits** and the **use of released qubits**. Both kinds of unsafe code can be effectively captured using our proposed management model. Rules 6 to 9 show the operational semantics of the allocation and release of qubits and qubit arrays, respectively, where \(qubit\_allocate\), \(qubit\_allocate\_array\), \(qubit\_release\), and \(qubit\_release\_array\) are the instructions provided in QIR. Specifically, after performing the allocation of qubits and qubit arrays, their return values (\(\boldsymbol{\%\mathbf{Qubit}}^{*}\) and \(\boldsymbol{\%\mathbf{Array}}^{*}\)) are stored in \(\mathsf{Q}\) and \(\mathsf{QA}\) (see Figure 4). After releasing qubits and qubit arrays, the corresponding values are removed from \(\mathsf{Q}\) and \(\mathsf{QA}\). **To avoid using released qubits and qubit arrays**, we add a check of whether the qubits and qubit arrays are in \(\mathsf{Q}\) and \(\mathsf{QA}\) (using **checkq** and **checkqarrlist**) to all the rules other than Q_ALLOC and QAR_ALLOC. In particular, when releasing a single qubit (rule Q_DEALLOC), we additionally check whether the qubit belongs to a qubit array (with **findqarr**), because directly releasing a single qubit contained in a qubit array can lead to errors when the qubit array is released later.

\[\frac{q=qubit\_allocate()\qquad\mathbf{appqlist}(\mathsf{Q},q)=\mathsf{Q}^{\prime}}{\mathsf{Q}\vdash\mathsf{Q},\,id=\textbf{call}\ \textbf{\%Qubit}^{*}\ qubit\_allocate()\Rightarrow\mathsf{Q}^{\prime},\,id\gets q}\ \text{Q\_ALLOC} \tag{6}\]

\[\frac{qarray=qubit\_allocate\_array(n)\qquad\mathbf{appqarrlist}(\mathsf{QA},qarray)=\mathsf{QA}^{\prime}}{\mathsf{QA},n\vdash\mathsf{QA},n,\,id=\textbf{call}\ \textbf{\%Array}^{*}\ qubit\_allocate\_array(n)\Rightarrow\mathsf{QA}^{\prime},n,\,id\gets qarray}\ \text{QAR\_ALLOC} \tag{7}\]

\begin{table}
\begin{tabular}{|l|l|}
\hline Method & Description \\
\hline \(\mathbf{appqlist}(\mathsf{Q},q)\) & Append \(q\) to \(\mathsf{Q}\). \\
\hline \(\mathbf{appqarrlist}(\mathsf{QA},qarray)\) & Append \(qarray\) to \(\mathsf{QA}\). \\
\hline \(\mathbf{checkq}(\mathsf{Q},q)\) & Check if \(q\) is in \(\mathsf{Q}\). If true, return 1; otherwise, return 0. \\
\hline \(\mathbf{checkqarrlist}(\mathsf{QA},qarray)\) & Check if \(qarray\) is in \(\mathsf{QA}\). If true, return 1; otherwise, return 0. \\
\hline \(\mathbf{delq}(\mathsf{Q},q)\) & Remove \(q\) from \(\mathsf{Q}\). \\
\hline \(\mathbf{delqarr}(\mathsf{QA},qarray)\) & Remove the row of \(qarray\) from \(\mathsf{QA}\). \\
\hline \(\mathbf{appqarr}(\mathsf{QA},qarray,q)\) & Append \(q\) to \(qarray\) in \(\mathsf{QA}\). Skip if \(q\) already exists in \(qarray\). \\
\hline \(\mathbf{checkqarr}(\mathsf{QA},qarray,q)\) & Check if \(q\) is in \(qarray\). If true, return 1; otherwise, return 0. \\
\hline \(\mathbf{findqarr}(\mathsf{QA},q)\) & Return the array in which \(q\) is located, or 0 if no such array exists. \\
\hline
\end{tabular}
\end{table}
Table 1: Some of the methods used to manage qubits and qubit arrays in our verification framework.

Figure 4: A management model for qubits and qubit arrays in QIR.
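To make this bookkeeping concrete, the following Python sketch is our own illustration (it is not part of QIR or of the formal framework): it mimics the structures \(\mathsf{Q}\) and \(\mathsf{QA}\) and the methods of Table 1, aborting on the use of released qubits and on duplicate qubits within an array, in the spirit of the rules Q_ALLOC through QARR_CREATE formalized in this section.

```python
class QubitChecker:
    """Toy model of the management structures Q and QA (our illustration)."""

    def __init__(self):
        self.Q = []        # live single qubits (the one-dimensional array Q)
        self.QA = {}       # live qubit arrays: array pointer -> list of qubits
        self._ctr = 0

    def _fresh(self, kind):
        self._ctr += 1
        return f"%{kind}{self._ctr}"

    def findqarr(self, q):                       # findqarr(QA, q)
        return next((a for a, qs in self.QA.items() if q in qs), None)

    def qubit_allocate(self):                    # cf. rule Q_ALLOC
        q = self._fresh("q")
        self.Q.append(q)                         # appqlist(Q, q)
        return q

    def qubit_release(self, q):                  # cf. rule Q_DEALLOC
        if self.findqarr(q) is not None or q not in self.Q:
            raise RuntimeError(f"abort: cannot release {q}")
        self.Q.remove(q)                         # delq(Q, q)

    def create_qubit_array(self, *qubits):       # cf. rule QARR_CREATE
        arr = self._fresh("array")
        self.QA[arr] = []                        # appqarrlist(QA, arr)
        for q in qubits:
            if q not in self.Q and self.findqarr(q) is None:
                raise RuntimeError(f"abort: {q} is not live")
            if q in self.QA[arr]:                # checkqarr: duplicate entry
                raise RuntimeError(f"abort: {q} stored twice (no-cloning)")
            self.QA[arr].append(q)               # appqarr(QA, arr, q)
        return arr

    def gate(self, q):                           # cf. rule SG_OP
        if q not in self.Q and self.findqarr(q) is None:
            raise RuntimeError(f"abort: gate on released qubit {q}")

chk = QubitChecker()
q = chk.qubit_allocate()
chk.gate(q)                                      # fine
chk.qubit_release(q)
try:
    chk.gate(q)                                  # use after release: caught
except RuntimeError as err:
    print(err)
```

Running the sketch prints the abort message for the final gate call, mirroring the behavior prescribed for released qubits; the same duplicate check catches the qubit cloning example discussed in Section 4.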
\[\frac{\text{if not }qarray=\mathbf{findqarr}(\mathsf{QA},q)\text{ and }\mathbf{checkq}(\mathsf{Q},q)\text{ then }qubit\_release(q)\quad\mathbf{delq}(\mathsf{Q},q)=\mathsf{Q}^{\prime}\text{ else }abort}{\mathsf{Q},\mathsf{QA},q\vdash\mathsf{Q},\mathsf{QA},\textbf{call}\ qubit\_release(q)\Rightarrow\mathsf{Q}^{\prime},\mathsf{QA},q}\ \text{Q\_DEALLOC} \tag{8}\]

\[\frac{\text{if }\mathbf{checkqarrlist}(\mathsf{QA},qarray)\text{ then }qubit\_release\_array(qarray)\quad\mathbf{delqarr}(\mathsf{QA},qarray)=\mathsf{QA}^{\prime}\text{ else }abort}{\mathsf{Q},\mathsf{QA},qarray\vdash\mathsf{Q},\mathsf{QA},\textbf{call}\ qubit\_release\_array(qarray)\Rightarrow\mathsf{Q},\mathsf{QA}^{\prime},qarray}\ \text{QAR\_DEALLOC} \tag{9}\]

#### 3.2.2 Creating Qubit Arrays from Existing Qubits

Besides allocating a fresh qubit array, QIR can also assemble a qubit array from qubits that have already been allocated. If the same qubit is stored at more than one position of such an array, e.g., so that it would act as several control qubits of one gate or determine more than one measurement outcome, it violates the non-cloning principle of quantum computing. Its management for **Q** and **QA** is shown in Figure 4. Specifically, rule QARR_CREATE adds the new array pointer to **QA** and adds the qubits that make up the array to the corresponding row.

\[\frac{\begin{array}{c}\text{if }\mathbf{checkq}(\mathsf{Q},q_{i})\text{ or }qarray_{temp}=\mathbf{findqarr}(\mathsf{QA},q_{i})^{\,i=1\ldots n}\text{ then}\\ qarray=(id_{1}=\textbf{call}\ \textbf{\%Array}^{*}\ array\_create\_1d(sz,n))\qquad\mathbf{appqarrlist}(\mathsf{QA},qarray)\\ ptr_{i}=(id_{2i}=\textbf{call}\ \textbf{i}sz^{*}\ array\_get\_element\_ptr\_1d(qarray,i))\qquad qptr_{i}=(id_{3i}=\textbf{bitcast}\ \textbf{i}sz^{*}\ ptr_{i}\ \textbf{to}\ \textbf{\%Qubit}^{**})^{\,i=1\ldots n}\\ \text{if not }\mathbf{checkqarr}(\mathsf{QA},qarray,q_{i})\text{ then }\textbf{store}\ \textbf{\%Qubit}^{*}\ q_{i},\ \textbf{\%Qubit}^{**}\ qptr_{i},align\text{ else }abort^{\,i=1\ldots n}\\ \mathbf{appqarr}(\mathsf{QA},qarray,q_{i})^{\,i=1\ldots n}=\mathsf{QA}^{\prime}\qquad\text{else }abort\end{array}}{\mathsf{Q},\mathsf{QA},(q_{1},q_{2},\ldots,q_{n})\vdash\mathsf{Q},\mathsf{QA},\,id=create\_qubit\_array(q_{1},q_{2},\ldots,q_{n})\Rightarrow\mathsf{Q},\mathsf{QA}^{\prime},(q_{1},q_{2},\ldots,q_{n}),\,id\gets qarray}\ \text{QARR\_CREATE} \tag{11}\]

#### 3.2.3 Gate Operations

In QIR, gate operations can be divided into two main categories: single-qubit gate operations, and controlled gate operations formed by adding control qubits to single-qubit gate operations. We formalize the semantics of these two types of gate operations separately. Since gate operations involve neither loading or storing of data nor allocation or release of qubits, they do not change anything in our management model for **Q** and **QA**. The rules SG_OP and CG_OP show our formalization of the semantics of single-qubit gates and controlled gates, respectively, where \(gateop\_body\) and \(gateop\_ctl\) are the instructions for gate operations in QIR, and \(gateop\) is a single-qubit gate such as \(x\), \(Rx\), or \(h\).
\[\frac{\text{if }\mathbf{checkq}(\mathsf{Q},q)\text{ or }qarray_{temp}=\mathbf{findqarr}(\mathsf{QA},q)\text{ then }gateop\_body(option\ d,q)\text{ else }abort}{\mathsf{Q},\mathsf{QA},option\ d,q\vdash\mathsf{Q},\mathsf{QA},\textbf{call void}\ gateop\_body(option\ d,q)\Rightarrow\mathsf{Q},\mathsf{QA},option\ d,q}\ \text{SG\_OP} \tag{12}\]

\[\frac{\begin{array}{c}\text{if }(\mathbf{checkq}(\mathsf{Q},q)\text{ or }qarray_{temp}=\mathbf{findqarr}(\mathsf{QA},q))\text{ and }\mathbf{checkqarrlist}(\mathsf{QA},qarray)\\ \text{and not }\mathbf{checkqarr}(\mathsf{QA},qarray,q)\text{ then }gateop\_ctl(qarray,(option\ d,q))\text{ else }abort\end{array}}{\mathsf{Q},\mathsf{QA},qarray,option\ d,q\vdash\mathsf{Q},\mathsf{QA},\textbf{call void}\ gateop\_ctl(qarray,(option\ d,q))\Rightarrow\mathsf{Q},\mathsf{QA},qarray,option\ d,q}\ \text{CG\_OP} \tag{13}\]

In QIR, the control qubits are passed to the instruction in the form of a qubit array. So, similar to rule QARR_CREATE, for CG_OP we also add a check of whether the target qubit occurs in the control qubit array (with **checkqarr**) to avoid the qubit cloning problem.

#### 3.2.4 Measurement

For the measurement operation, QIR returns the measurement result upon being given an array of Pauli values and an array of qubits. The semantics of the measurement operation is:

\[\frac{\text{if }\mathbf{checkqarrlist}(\mathsf{QA},qarray)\text{ then }result=measure\_body(Pauliarr,qarray)\text{ else }abort}{\mathsf{QA},Pauliarr,qarray\vdash id=\textbf{call}\ \textbf{\%Result}^{*}\ measure\_body(Pauliarr,qarray)\Rightarrow\mathsf{QA},Pauliarr,qarray,\,id\gets result} \tag{14}\]

## 4 Verification on Unsafe Code of QIR

This section applies our method to verify some unsafe QIR code. Since QIR is not yet officially in use, the most prominent way to generate QIR code at present is to use the Q# compiler provided by Microsoft. This suggests how to collect verification samples: if an unsafe Q# program passes the compiler but fails at runtime, we can generate the corresponding QIR code from it for verification. Figure 5 shows two samples of unsafe Q# code, taken from [23]: in Figure 5(a) the returned qubit is released at the end of the execution of the function _NewQubit_, and in Figure 5(b) the three qubits passed to _CCNOT_ are the same qubit, violating the no-cloning theorem. In the next subsections, we present the corresponding QIR code and apply the semantics we designed to detect the faults.

### Using Deallocated Qubits

Figure 7 shows the QIR code obtained by conversion from the Q# code of Figure 5(a). In this example, the return value of the function _@NewQubit_body_ in line 3 can be analyzed using the rules Q_ALLOC and Q_DEALLOC, so that we know that the returned %_q_ has already been released (see Figure 6(a)). In the execution of _h_body_ in line 4, the rule SG_OP is applied and interrupts the program, since %_q_ does not exist in Q, thus avoiding the error of using a released qubit.

### Qubit Cloning

Figure 8 shows the QIR code obtained by conversion from the qubit-cloning Q# code of Figure 5(b). In the example, the function _CCNOT_body_ in line 4 receives two identical **%Qubit**\({}^{*}\) pointers %_q1_ as control qubits, and %_q1_ itself as the target qubit.
Thus, for lines 12 to 18 of the code, the rule QARR_CREATE can be applied, interrupting the program at line 18, because the array %_controlQubits_ would store the same **%Qubit**\({}^{*}\) %_q1_ twice; the qubit cloning problem is thereby avoided (see Figure 6(b)). Apart from that, for line 20 of the code, the rule CG_OP can be applied, and since the target qubit %_q1_ is the same as an element of the control qubit array, qubit cloning is prevented in this step as well.

Figure 5: Examples of unsafe Q# code.

Figure 6: Examples of applying the formal method to verify unsafe QIR code.

## 5 Related Work

This section discusses some related work in the area of intermediate representations for both classical and quantum programming languages.

### LLVM IR Semantics

As the basis of QIR, formal approaches to LLVM IR are an important reference for our work. Zhao _et al._ [29] propose the Vellvm (verified LLVM) framework, which provides a formalization of the static semantics, memory model, and several operational semantics of LLVM IR. The framework is implemented in Coq, from which verified executable code with high confidence can be extracted directly; the effectiveness of Vellvm was demonstrated on the SoftBound [17] case study. Li and Gunter [12] design K-LLVM, a complete formal semantics of LLVM IR, covering all LLVM IR instructions, the intrinsic functions of the LLVM documentation, etc. Compared to Vellvm, which focuses on formalizing LLVM semantics as a mathematical object, K-LLVM more directly shows a possible implementation of the semantics on a virtual computer. K-LLVM is implemented in \(\mathbb{K}\) [16], and its validity is verified by testing against unit test programs as well as actual LLVM IR programs. However, the above works are directed toward formalizing LLVM IR and do not address quantum programming languages or intermediate representations.

### Formalized Quantum Intermediate Representation and Programming Language

As the work most relevant to ours, Hietala _et al._ [9] present VOQC, a verified optimizer for quantum circuits. As the input to VOQC, a small quantum intermediate representation (SQIR) was developed to represent quantum circuits and support the verification of quantum circuit optimizations. SQIR is well formalized, and its syntax and semantics for quantum circuits guarantee the correctness of VOQC's optimizations. Our work differs from SQIR in two main ways. The first is the formal object: we formalize QIR as developed by Microsoft, whose application scenario is to provide a general solution connecting quantum programming languages and back-end hardware, whereas SQIR is an independently developed quantum intermediate representation, mainly applied in the verification process of VOQC. The second is the difference in purpose: the main goal of our formalization of QIR is to provide guarantees for the correctness of quantum programs and the behavior of QIR, while SQIR focuses on assuring the correctness of quantum circuit optimizations.

Figure 7: QIR code converted from Figure 5(a). To save space, we show only the minimal critical code.

Regarding formalizations of quantum programming languages, Singhal _et al._ [23] present \(\lambda_{Q\#}\), an idealized version of Q#. Based on Staton's work [24], they provide a syntactic and semantic formalization of \(\lambda_{Q\#}\) that enables it to ensure the quantum no-cloning theorem and to provide stack-like management of qubits.
By converting from Q# to \(\lambda_{Q\#}\), additional safety guarantees can be provided for Q# code. For the Quipper [8] programming language, Mahmoud and Felty [13] present Proto-Quipper, which contains the core functionality of Quipper and extends Quipper's system based on the linear specification logic (SL). They also implemented a formalization of Proto-Quipper via Hybrid [6], encoded the complete Proto-Quipper specification, and proved the correctness of its type system. While these works serve as valuable references for our research, it is important to recognize the significant differences: our research centers on quantum intermediate representations rather than quantum programming languages.

## 6 Concluding Remarks

In this paper, we have formalized the core functionality of QIR, in particular its abstract syntax and the semantics of several of its runtime functions. Our formalization ensures that QIR programs obey the quantum no-cloning theorem and avoid calls to released qubits and qubit arrays. Based on the analysis of real QIR code, we also demonstrated the effectiveness of our formalization in capturing unsafe code in programs. Our current work is only an initial attempt to formalize the specification of QIR, and more work is needed to refine it. The QIR functions formalized so far are only a small part: more functions need to be formalized, such as the \(Tuple\) and \(String\) data types, the semantics of the complete measurement process, and batched gate operations on qubit arrays. From a practical perspective, in order to apply our formal method directly to QIR programs for automatic verification (rather than through manual analysis, as in this work), we plan to implement our formal approach in an interactive theorem prover such as Coq [4] in future work.

Figure 8: QIR code converted from Figure 5(b). To save space, we show only the minimal critical code.
2308.16065
Asymptotics of Some Plancherel Averages via Polynomiality Results
Consider Young diagrams of $n$ boxes distributed according to the Plancherel measure. So those diagrams could be the output of the RSK algorithm, when applied to random permutations of the set $\{1,\ldots,n\}$. Here we are interested in asymptotics, as $n\to \infty$, of expectations of certain functions of random Young diagrams, such as the number of bumping steps of the RSK algorithm that leads to that diagram, the side length of its Durfee square, or the logarithm of its probability. We can express these functions in terms of hook lengths or contents of the boxes of the diagram, which opens the door for application of known polynomiality results for Plancherel averages. We thus obtain representations of expectations as binomial convolutions, that can be further analyzed with the help of Rice's integral or Poisson generating functions. Among our results is a very explicit expression for the constant appearing in the almost equipartition property of the Plancherel measure.
Werner Schachinger
2023-08-30T14:40:49Z
http://arxiv.org/abs/2308.16065v1
# Asymptotics of some Plancherel averages via polynomiality results

###### Abstract

Consider Young diagrams of \(n\) boxes distributed according to the Plancherel measure. So those diagrams could be the output of the RSK algorithm, when applied to random permutations of the set \(\{1,\ldots,n\}\). Here we are interested in asymptotics, as \(n\to\infty\), of expectations of certain functions of random Young diagrams, such as the number of bumping steps of the RSK algorithm that leads to that diagram, the side length of its Durfee square, or the logarithm of its probability. We can express these functions in terms of hook lengths or contents of the boxes of the diagram, which opens the door for application of known polynomiality results for Plancherel averages. We thus obtain representations of expectations as binomial convolutions that can be further analyzed with the help of Rice's integral or Poisson generating functions. Among our results is a very explicit expression for the constant appearing in the almost equipartition property of the Plancherel measure.

Key words and phrases: Robinson-Schensted algorithm, Young diagram, Plancherel measure, Durfee square, asymptotic expansion, Vershik-Kerov conjecture

## 1. Introduction

We identify Young diagrams (sets consisting of left aligned decreasingly ordered rows of square boxes) with partitions \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) with \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{k}\), and denote \(|\lambda|=\lambda_{1}+\ldots+\lambda_{k}\). The notation \(\lambda\vdash n\) then signifies that \(\lambda\) is a partition of \(n\), i.e., \(|\lambda|=n\). We let \(Y(\pi)=Y_{\lambda}:=\sum_{\ell=1}^{k}\lambda_{\ell}(\ell-1)\) denote the number of bumping steps of the Robinson-Schensted algorithm (see Figures 1 and 2) when applied to a permutation \(\pi\) that is mapped to a pair of standard Young tableaux of shape \(\lambda\). A standard Young tableau is a Young diagram \(\lambda\) filled with numbers \(1,\ldots,|\lambda|\) in a way such that numbers in each row and each column are increasing. See e.g. [23, sec. 1.6] or [24, sec. 3.1] for nice expositions of the algorithm and references to the original articles by Gilbert de Beauregard Robinson, by Craige Eugene Schensted, and by Donald Ervin Knuth, who significantly widened the scope of the algorithm; the abbreviation with reference to all three authors, _RSK algorithm_, is now frequently used also to refer to the original Robinson-Schensted algorithm. We denote by \(Y_{n}\) the restriction of \(Y(\pi)\) to permutations of the set \(\{1,2,\ldots,n\}\) chosen uniformly at random. The Young diagrams \(\lambda\) obtained by the RSK algorithm are then distributed according to the \(n\)th Plancherel measure, i.e., \(\operatorname{I\!P}l^{(n)}(\lambda)=\frac{f_{\lambda}^{2}}{n!}=\frac{n!}{p_{\lambda}^{2}}\), where \(f_{\lambda}\) is the number of standard Young tableaux of shape \(\lambda\), satisfying \(f_{\lambda}=\frac{n!}{p_{\lambda}}\), and where \(p_{\lambda}:=\prod_{u\in\lambda}h_{u}\) denotes the product of the hook lengths of the diagram \(\lambda\), see [7]. Here the hook length \(h_{u}\) of a particular box \(u\) of \(\lambda\) is one more than the number of boxes to the right of \(u\) plus the number of boxes below \(u\).
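To make the bumping-step count \(Y(\pi)\) concrete, here is a short Python sketch (our addition, not taken from the sources cited above) that performs the row insertions of the Robinson-Schensted algorithm while counting bumps. It reproduces the example of Figures 1 and 2, where the permutation \(\pi=(75186342)\) yields the shape \((3,2,2,1)\) after \(9\) bumping steps.

```python
from bisect import bisect_right

def rsk_bumps(perm):
    """Robinson-Schensted row insertion; returns the shape and the bump count."""
    P, bumps = [], 0
    for x in perm:
        row = 0
        while True:
            if row == len(P):                 # start a new row: no bump
                P.append([x])
                break
            pos = bisect_right(P[row], x)     # leftmost entry greater than x
            if pos == len(P[row]):            # x is largest: append, no bump
                P[row].append(x)
                break
            P[row][pos], x = x, P[row][pos]   # bump the displaced entry down
            bumps += 1
            row += 1
    return tuple(len(r) for r in P), bumps

shape, bumps = rsk_bumps([7, 5, 1, 8, 6, 3, 4, 2])
assert shape == (3, 2, 2, 1) and bumps == 9
assert bumps == sum(i * part for i, part in enumerate(shape))  # Y_lambda
```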
Note that \(Y_{\lambda}\) also has the meaning of \(|\lambda|\) times the \(y\)-coordinate of the barycenter of the set
\[S_{\lambda}:=\{(i,j)\in\mathbb{Z}^{2}:0\leq j\leq k-1,0\leq i\leq\lambda_{j+1}-1\},\]
which is just the set of lower left corners of the boxes of \(\lambda\) in French notation, which addresses boxes by Cartesian coordinates of the first quadrant. Apart from here we always stick to English notation with its matrix style indexing of boxes. Note that \(X_{\lambda}\), the \(x\)-coordinate of the barycenter of \(S_{\lambda}\), is given by \(Y_{\lambda^{\prime}}\), where \(\lambda^{\prime}\) is the partition conjugate to \(\lambda\), its parts being defined by \(\lambda^{\prime}_{j}:=|\{i:\lambda_{i}\geq j\}|\). Stated differently, \(\lambda\) and \(\lambda^{\prime}\) are mirror images of one another with respect to the main diagonal (upper left to lower right). The sets of hook lengths are therefore the same for \(\lambda\) and \(\lambda^{\prime}\), which yields invariance of Plancherel measure under conjugation. Thus \(X_{n}\) and \(Y_{n}\) are identical in distribution. This allows for a representation of \(\operatorname{I\!E}Y_{n}\) and \(\operatorname{Var}Y_{n}\) in terms of \(X_{n}+Y_{n}\) and \(X_{n}-Y_{n}\),
\[\operatorname{I\!E}Y_{n}=\tfrac{1}{2}\operatorname{I\!E}\left(X_{n}+Y_{n}\right),\]
\[\operatorname{Var}Y_{n}=\tfrac{1}{2}\left(\operatorname{Var}X_{n}+\operatorname{Var}Y_{n}\right)=\tfrac{1}{4}\big{(}\operatorname{Var}\left(X_{n}+Y_{n}\right)+\operatorname{Var}\left(X_{n}-Y_{n}\right)\big{)}.\]
Note that we can express \(X_{\lambda}-Y_{\lambda}\), resp. \(X_{\lambda}+Y_{\lambda}\), in terms of contents \(\{c_{u}:u\in\lambda\}\), resp. hook lengths \(\{h_{u}:u\in\lambda\}\), of the diagram \(\lambda\):
\[X_{\lambda}-Y_{\lambda}=\sum_{u\in\lambda}c_{u},\qquad X_{\lambda}+Y_{\lambda}=\sum_{u\in\lambda}h_{u}-|\lambda|. \tag{1.1}\]
Figure 1. Tableaux \(P\) and \(Q\), as they evolve when subjecting the RSK algorithm to the permutation \(\pi=(75186342)\). \(P\) is constructed by row insertions of elements of \(\pi\) one by one, while \(Q\) is recording the position of boxes as they are added.
Figure 2. Detailed insertion of \(2\) into the second to last tableau \(P\). Inserting \(2\), \(3\), and \(5\) into their (shaded) destination boxes causes bumps of elements down one row, being now in need of insertion themselves. In the last step, \(7\) is the largest element of row four, so this insertion happens without a bump.
For a proof of (1.1) note \[Y_{\lambda}=\sum_{i=1}^{k}\lambda_{i}(i-1)=\sum_{(i,j)\in\lambda}(i-1)=\sum_{j= 1}^{\lambda_{1}}\sum_{i=1}^{\lambda_{j}^{\prime}}(i-1)=\sum_{j=1}^{\lambda_{1 }}\sum_{i=1}^{\lambda_{j}^{\prime}}(\lambda_{j}^{\prime}-i)=\sum_{(i,j)\in \lambda}(\lambda_{j}^{\prime}-i),\] and similarly \[X_{\lambda}=Y_{\lambda^{\prime}}=\sum_{(i,j)\in\lambda}(j-1)=\sum_{(i,j)\in \lambda}(\lambda_{i}-j),\] leading to \(X_{\lambda}-Y_{\lambda}=Y_{\lambda^{\prime}}-Y_{\lambda}=\sum_{(i,j)\in \lambda}\left[(j-1)-(i-1)\right]=\sum\limits_{(i,j)\in\lambda}(j-i)=\sum \limits_{u\in\lambda}c_{u}\) and \(X_{\lambda}+Y_{\lambda}+|\lambda|=\sum\limits_{(i,j)\in\lambda}\left[(\lambda _{i}-j)+(\lambda_{j}^{\prime}-i)+1\right]=\sum\limits_{u\in\lambda}h_{u}\), where \((\lambda_{i}-j)+(\lambda_{j}^{\prime}-i)+1\) is clearly the hook length of box \((i,j)\). Further functions of \(\lambda\) that can be written in terms of hook lengths or contents are \(\log p_{\lambda}=\sum_{u\in\lambda}\log(h_{u})\) making its appearance in section 3, and \(D(\lambda)=\sum_{u\in\lambda}\delta_{0,c_{u}}\), the number of boxes of \(\lambda\) on the main diagonal, that we will meet in section 4. Being able to express some function of \(\lambda\) in terms of the contents or hook lengths of the boxes of \(\lambda\) can allow us to employ the polynomiality results for Plancherel averages derived by Stanley [26]. **Theorem 1.1**.: ([26, Thm. 2.1,Thm. 4.3]) _Let \(F(x)\) be a formal power series over \(\mathbb{Q}\) of bounded degree that is symmetric in the variables \(x=(x_{1},x_{2},\ldots)\). Then both averages_ \[\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}F(c_{u}:u\in\lambda)\qquad \text{and}\qquad\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}F(h_{u}^{2}: u\in\lambda)\] _are polynomial functions of \(n\)._ Note that an even more general result is given in [26, Thm 4.4]. See also [20] for alternative proofs and further generalizations. The proof of [26, Thm. 2.1] restricts w. l. o. g. to elementary symmetric functions indexed by partitions \(\mu=(\mu_{1},\ldots,\mu_{k})\), i.e., to functions \(F(\cdot)=e_{\mu}(\cdot)=\prod_{i=1}^{k}e_{\mu_{i}}(\cdot)\), where \(e_{m}(x_{1},x_{2},\ldots)=\sum_{i_{1}<\cdots<i_{m}}x_{i_{1}}\ldots x_{i_{m}}\) for \(m\geq 1\). As remarked in [26] right below that proof, the resulting polynomial \(N_{\mu}\) is of degree \(|\mu|\) if and only if \(|\mu|\) is even and \(\mu_{1}\leq\frac{|\mu|}{2}\), otherwise, \(N_{\mu}=0\). Here is an immediate application of these degree considerations. Figure 3. The partition \((5,3,1,1)\vdash 10\), drawn as Young diagram, filled from left to right with bumping step counts, sums of box coordinates, hook lengths, and contents. The sum of the entries of the second diagram is \(10\) less than the sum of the hook lengths. **Lemma 1.2**.: \(\operatorname{Var}\left(X_{n}-Y_{n}\right)=\binom{n}{2}\)_._ Proof.: Since \(\operatorname{I\!E}\left(X_{n}-Y_{n}\right)=0\), we have \[\operatorname{Var}\left(X_{n}-Y_{n}\right)=\frac{1}{n!}\sum_{\lambda\vdash n}f_ {\lambda}^{2}\bigg{(}\sum_{u\in\lambda}c_{u}\bigg{)}^{2}=\frac{n(n-1)}{2}.\] Note that here we have \(F(c_{u}:u\in\lambda)=\big{(}e_{1}(c_{u}:u\in\lambda)\big{)}^{2}\), i.e., \(\mu=(1,1)\). The polynomial \(N_{\mu}\) is therefore of degree \(2\), and it is completely determined by its values at \(n\in\{0,1,2\}\), which are \(N_{\mu}(0)=N_{\mu}(1)=0,N_{\mu}(2)=1\), proving the claim. The result \(N_{\mu}(n)=\frac{n(n-1)}{2}\) is also stated as a special case in [26, p. 94]. 
A workaround is needed for \(\operatorname{Var}\left(X_{n}+Y_{n}\right)\), or even \(\operatorname{I\!E}\left(X_{n}+Y_{n}\right)\), because \(X_{n}+Y_{n}\) is not a symmetric function of \(\{h_{u}^{2}:u\in\lambda\}\), but only of \(\{h_{u}:u\in\lambda\}\). Finding a series representation \(x=\sum_{k\geq 0}a_{k}p_{k}(x^{2})\) with polynomials \(p_{k}\), that holds for integers \(x\geq 1\), (but need not hold or even converge elsewhere) would allow, interchanging summations, to apply the polynomiality results termwise. If we are lucky -- and we are -- the polynomials \(p_{k}\) have well known Plancherel averages. Such kind of workaround is employed in this paper to deal with Plancherel averages of several interesting functions of partitions, leading firstly to a representation of the expectation as a binomial convolution, that is free of references to partitions, and can be analyzed using Rice's integral, or Poisson generating functions. In some cases holonomicity of the sequence of expectations can be inferred from the binomial convolution representation. This then allows for fast computation of many terms, that can be used to numerically confirm error terms, or conduct experiments. The paper is organized as follows: In section 2 we consider the expected number of bumping steps in the RSK algorithm. In particular, we derive asymptotics for \(\operatorname{I\!E}\left(X_{n}+Y_{n}\right)\), thus refining the result obtained by Romik [22]. In section 3 we consider \(\operatorname{I\!E}\,\log\operatorname{I\!P}l^{(n)}(\lambda)\), with \(\lambda\) distributed according to Plancherel measure. From the first asymptotic terms we obtain a very explicit representation of a constant appearing in an _almost equipartition property_ (abbreviated AEP) for Plancherel measure, conjectured in [28], and proven in [4]. In section 4 we derive asymptotics of the expectation of the side length \(D(\lambda)\) of the _Durfee square_ of \(\lambda\), i.e., the largest square fitting in the upper left corner of the Young diagram of \(\lambda\), when partitions \(\lambda\) are distributed according to Plancherel measure, see Table 1. Considering in section 5 more generally lengths of south-east directed cuts through the Young diagram of \(\lambda\), we enter the realm of a sequence of random curves \(\psi_{\lambda}\) known to converge uniformly in probability to the _Logan-Shepp-Vershik-Kerov limit shape curve_\(\Omega\) for \(|\lambda|\to\infty\). For any fixed integer \(a\) the sequence with terms \(\sqrt{n}\operatorname{I\!E}\psi_{\lambda}\left(\frac{a}{\sqrt{2n}}\right)\), with expectation computed with respect to \(\operatorname{I\!P}l^{(n)}\), turns out to be holonomic. Experiments then strongly hint at convergence of \(\operatorname{I\!E}\psi_{\lambda}\left(\frac{\lfloor u\sqrt{2n}\rfloor}{ \sqrt{2n}}\right)\to\Omega(u)\), uniformly in \(u\), and reveal that second order terms show interesting fluctuations. However we are only able to prove asymptotic results in the case of fixed \(a\), i.e., in the vicinity of \(u=0\). In section 6 we return to the number \(Y_{n}\) of bumping steps, giving a heuristic argument for \(\operatorname{Var}Y_{n}=\mathcal{O}(n^{2})\), based on the limit shape curve. ## 2. Refined asymptotics of the expected number of bumping steps in the RSK algorithm Recall that \(\operatorname{I\!E}\left(X_{n}+Y_{n}\right)=2\operatorname{I\!E}Y_{n}\) denotes twice the expected number of bumping steps of the RSK algorithm when applied to a random permutation of \(\{1,\ldots,n\}\). Romik [22, eq. 
Romik [22, eq. (1)] derived the following asymptotic result, \(\operatorname{I\!E}Y_{n}\sim\frac{128}{27\pi^{2}}n^{\frac{3}{2}}\), and showed \(Y_{n}/\operatorname{I\!E}Y_{n}\to 1\) in probability. The sequence of interest starts
\[\bigl{(}\operatorname{I\!E}\left(X_{n}+Y_{n}\right)\bigr{)}_{n=1}^{10}=\bigl{(}0,1,\tfrac{7}{3},\tfrac{25}{6},\tfrac{19}{3},\tfrac{44}{5},\tfrac{347}{30},\tfrac{8181}{560},\tfrac{541273}{30240},\tfrac{1943453}{90720}\bigr{)}.\]
The next theorem leads to a refinement of Romik's asymptotic equivalent for \(\operatorname{I\!E}Y_{n}\).

**Theorem 2.1**.: (Expected number of bumping steps in the RSK algorithm) _Let \(\delta_{n}:=\log n+2\gamma+12\log 2\), with \(\gamma\) denoting Euler's constant. Then_
\[\operatorname{I\!E}\left(X_{n}+Y_{n}\right)=\frac{256}{27\pi^{2}}n^{\frac{3}{2}}-n+\frac{9\delta_{n}-77}{9\pi^{2}}n^{\frac{1}{2}}+\frac{3510\delta_{n}-31589}{27648\pi^{2}}n^{-\frac{1}{2}}+\frac{5565\delta_{n}-62224}{786432\pi^{2}}n^{-\frac{3}{2}}+\frac{e^{8}}{2^{12}\pi^{3/2}}\cos\left(8\sqrt{n}+\frac{\pi}{4}\right)n^{-\frac{7}{4}}+\mathcal{O}\Bigl{(}n^{-\frac{9}{4}}\Bigr{)}.\]

Proof.: As \(X_{n}+Y_{n}\) is not a symmetric polynomial of the multiset \(\{h_{u}^{2}:u\in\lambda\}\), but only of the multiset \(\{h_{u}:u\in\lambda\}\), we cannot expect \(\operatorname{I\!E}\left(X_{n}+Y_{n}\right)\) to be a polynomial in \(|\lambda|\). Indeed, by Romik's result, \(\operatorname{I\!E}\left(X_{n}+Y_{n}\right)=\Theta(n^{\frac{3}{2}})\) is definitely not a polynomial. However, we can invoke polynomiality results via the following identity. Using
\[p(x,r):=\prod_{i=1}^{r}(x^{2}-i^{2}), \tag{2.1}\]
the equation
\[x=1+\sum_{r=1}^{\infty}\binom{2r}{r}\frac{(-1)^{r}}{(1-2r)(2r+1)!}p(x,r) \tag{2.2}\]
holds for \(x\in\mathbb{N}:=\{1,2,3,\ldots\}\). This will be proved in the appendix. Now, by [21, Thm. 1], we have
\[\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}p(h_{u},r)=K_{r}\binom{n}{r+1}, \tag{2.3}\]

\begin{table}
\begin{tabular}{|c||c|c|c|c|c||c|}
\hline \(\lambda\) & \((4)\) & \((3,1)\) & \((2,2)\) & \((2,1,1)\) & \((1,1,1,1)\) & \(\operatorname{I\!E}\) \\
\hline \(\operatorname{I\!P}l^{(4)}(\lambda)\) & \(\frac{1}{24}\) & \(\frac{3}{8}\) & \(\frac{1}{6}\) & \(\frac{3}{8}\) & \(\frac{1}{24}\) & \\
\(X_{\lambda}-Y_{\lambda}\) & \(6\) & \(2\) & \(0\) & \(-2\) & \(-6\) & \(0\) \\
\(X_{\lambda}+Y_{\lambda}\) & \(6\) & \(4\) & \(4\) & \(4\) & \(6\) & \(\frac{25}{6}\) \\
\(\log\operatorname{I\!P}l^{(4)}(\lambda)\) & \(\log\frac{1}{24}\) & \(\log\frac{3}{8}\) & \(\log\frac{1}{6}\) & \(\log\frac{3}{8}\) & \(\log\frac{1}{24}\) & \(\frac{8}{3}\log\frac{1}{2}+\frac{\log 3}{2}\) \\
\(D(\lambda)\) & \(1\) & \(1\) & \(2\) & \(1\) & \(1\) & \(\frac{7}{6}\) \\
\hline
\end{tabular}
\end{table}
Table 1. Some functions of partitions \(\lambda\vdash 4\) and their expectations with respect to Plancherel measure.
with \(K_{r}=\frac{(2r)!(2r+1)!}{(r+1)!^{2}r!}\), leading to
\[\operatorname{I\!E}\left(X_{n}+Y_{n}\right)=\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}(h_{u}-1)\stackrel{{(2.2)}}{{=}}\sum_{r=1}^{\infty}\binom{2r}{r}\frac{(-1)^{r}}{(1-2r)(2r+1)!}\,\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}p(h_{u},r)\stackrel{{(2.3)}}{{=}}\sum_{r=1}^{\infty}\binom{2r}{r}\frac{(-1)^{r}K_{r}}{(1-2r)(2r+1)!}\binom{n}{r+1},\]
a binomial convolution free of references to partitions. The asymptotic expansion stated in the theorem follows by evaluating this alternating sum by means of Rice's integral, i.e., by computing leading asymptotic terms of residues of the corresponding meromorphic kernel and bounding the remainder integral; this parallels the residue computation carried out in detail in the proof of Theorem 3.1 below.
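The binomial convolution just derived is easily checked by machine. The following Python sketch (our addition) compares it, in exact rational arithmetic, against the defining Plancherel average, computed by brute force over all partitions via the hook length formula.

```python
from fractions import Fraction
from math import comb, factorial, prod

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_lengths(lam):
    """Hook lengths of all boxes of the Young diagram lam."""
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def mean_brute(n):
    """IE(X_n+Y_n) = sum over lam of (f_lam^2/n!)(sum of hook lengths - n), cf. (1.1)."""
    total = Fraction(0)
    for lam in partitions(n):
        hooks = hook_lengths(lam)
        f = factorial(n) // prod(hooks)       # hook length formula for f_lam
        total += Fraction(f * f, factorial(n)) * (sum(hooks) - n)
    return total

def mean_conv(n):
    """The binomial convolution obtained from (2.2) and (2.3)."""
    total = Fraction(0)
    for r in range(1, n):                     # binom(n, r+1) = 0 for r >= n
        K = Fraction(factorial(2 * r) * factorial(2 * r + 1),
                     factorial(r + 1) ** 2 * factorial(r))
        total += (comb(2 * r, r) * K * comb(n, r + 1)
                  * Fraction((-1) ** r, (1 - 2 * r) * factorial(2 * r + 1)))
    return total

assert all(mean_brute(n) == mean_conv(n) for n in range(1, 9))
print([mean_conv(n) for n in range(1, 8)])    # 0, 1, 7/3, 25/6, 19/3, 44/5, 347/30
```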
_Remark 2.2_.: The sequence with terms \(u_{n}:=n+\operatorname{I\!E}\left(X_{n}+Y_{n}\right)=\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}h_{u}\), i.e., the Plancherel average of the sum of all hook lengths, is holonomic: it satisfies a linear recurrence relation with polynomial coefficients and initial conditions
\[u_{0}=0,\quad u_{1}=1,\quad u_{2}=3,\quad u_{3}=\frac{16}{3}.\]
Clearly, the terms \(\frac{u_{n}}{n!}\) comprise a sequence that is the convolution of two sequences that are obviously holonomic. For said convolution the _gfun_ package [25] then easily produces a recursion. For the Poisson generating function \(U(z):=e^{-z}\sum_{n\geq 0}\frac{u_{n}}{n!}z^{n}\) we obtain
\[U(z)=-z\sum_{k\geq 0}\binom{2k}{k}^{2}\frac{(-z)^{k}}{(k+1)(2k-1)(k+1)!^{2}}=z\,{}_{2}F_{3}\!\left[\begin{matrix}-\frac{1}{2},\,\frac{1}{2}\\ 2,\,2,\,2\end{matrix};-16z\right],\]
a hypergeometric function that may also be used to recover asymptotics of \(u_{n}\), see [15, Sec. 5.11.2] for asymptotic expansions of generalized hypergeometric functions. Indeed, the leading terms of an asymptotic expansion of \(U(z)\), provided by Maple, together with Depoissonization via the saddle point method, yield an alternative proof of Theorem 2.1. Note that the recurrence relation allows for easily computing millions of terms of the sequence \((\operatorname{I\!E}\left(X_{n}+Y_{n}\right))_{n\geq 1}\), which can be used to numerically confirm the error term in Theorem 2.1.

## 3. The constant appearing in the AEP for Plancherel measure

We consider the random variables \(Z_{n}=Z_{n}(\lambda):=\sum_{u\in\lambda}\log h_{u}\), where \(\lambda\vdash n\) is distributed according to the Plancherel measure, and denote \(z_{n}:=\operatorname{I\!E}Z_{n}\).
The sequence starts
\[(z_{1},\ldots,z_{5})=(0,\ \log 2,\ \tfrac{\log 2}{3}+\log 3,\ \tfrac{17}{6}\log 2+\tfrac{\log 3}{4},\ \tfrac{13}{6}\log 2+\tfrac{7}{10}\log 3+\tfrac{7}{12}\log 5)\approx(0,\ 0.6931471806,\ 1.329661349,\ 2.238570083,\ 3.209686276).\]
The first few asymptotic terms of \(z_{n}\) will lead to a representation of the constant \(H\), conjectured by Vershik and Kerov [28] to exist as the limit in probability of the random variables \(-\tfrac{1}{\sqrt{n}}\log\operatorname{I\!P}l^{(n)}(\lambda)\), where \(\operatorname{I\!P}l^{(n)}(\lambda):=n!\left(\prod_{u\in\lambda}\tfrac{1}{h_{u}}\right)^{2}\), with \(\lambda\vdash n\) again distributed according to the Plancherel measure. A strengthening of the conjecture (convergence in \(L_{p}\) for \(p<\infty\)) has been proved by Bufetov [4], from which we borrowed the above notation, and an expression for \(H\) in terms of a threefold integral has been given in [4, eq. (15)]. We aim here at a less involved representation of \(H\), and at more terms of an asymptotic expansion of \(\operatorname{I\!E}\left[n^{-\frac{1}{2}}(2Z_{n}-\log n!)\right]\).

**Theorem 3.1**.: _Let \(H_{n}=1+\tfrac{1}{2}+\cdots+\tfrac{1}{n}\) denote the \(n\)th harmonic number. Then, as \(n\to\infty\), we have_
\[-\frac{\operatorname{I\!E}\log\operatorname{I\!P}l^{(n)}(\lambda)}{\sqrt{n}}=H-\left(\frac{13}{24}\log n+\frac{13\gamma}{12}+\log\sqrt{2\pi}+\frac{1}{4}-h^{\prime}(0)\right)\frac{1}{\sqrt{n}}+o\big{(}n^{-\frac{1}{2}}\big{)},\]
_where_
\[H=\frac{16}{3\pi^{2}}(4\gamma+1)+\frac{64}{\pi^{2}}\sum_{\ell\geq 2}\frac{\ell^{2}}{4\ell^{2}-1}\Big{(}\log\ell-H_{\ell}+\gamma+\frac{1}{2\ell}\Big{)}\approx 1.87702830628\]
_and_
\[h^{\prime}(0)=\sum_{\ell\geq 2}\ell\left(H_{\ell}-\log\ell-\gamma-\frac{1}{2\ell}+\frac{1}{12\ell^{2}}\right)\approx 0.001562493.\]

Proof.: As we will prove in the appendix, the Kronecker delta defined on \(\mathbb{N}\times\mathbb{N}\) can be expressed in terms of the polynomials \(p(x,r)\) given in (2.1) as follows,
\[\delta_{\ell,n}=\sum_{r=\ell-1}^{\infty}(-1)^{\ell+r+1}\frac{2\ell^{2}}{(r+\ell+1)!(r-\ell+1)!}p(n,r). \tag{3.1}\]
From this we deduce
\[\log n=\sum_{\ell\geq 2}\log\ell\sum_{r\geq\ell-1}\frac{2(-1)^{\ell+r+1}\ell^{2}}{(r+\ell+1)!(r-\ell+1)!}p(n,r)=2\sum_{r\geq 2}(-1)^{r}g(r)p(n,r-1)\]
for \(n\in\mathbb{N}\), where \(g\) is given by
\[g(r)=\sum_{\ell=2}^{r}\frac{(-1)^{\ell}\ell^{2}\log\ell}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}.\]
We want to extend \(g\) to a meromorphic function in the right halfplane \(\Re r>-1\). Therefore we employ
\[\log\ell=H_{\ell}-\gamma-\frac{1}{2\ell}+\frac{1}{12\ell^{2}}+\mathcal{O}(\ell^{-4}),\]
and the identities (all with easy proofs, only the last one is proven in the appendix)
\[\sum_{\ell=2}^{r}\frac{(-1)^{\ell}\ell^{2}}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}=\frac{1}{\Gamma(r)\Gamma(r+2)} \tag{3.2a}\]
\[\sum_{\ell=2}^{r}\frac{(-1)^{\ell}\ell}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}=\frac{3(r-1)}{2(2r-1)\Gamma(r)\Gamma(r+2)} \tag{3.2b}\]
\[\sum_{\ell=2}^{r}\frac{(-1)^{\ell}}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}=\frac{r-1}{2r\Gamma(r)\Gamma(r+2)} \tag{3.2c}\]
\[\sum_{\ell=2}^{r}\frac{(-1)^{\ell}\ell^{2}(H_{\ell}-1)}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}=\frac{1}{4(r-1)(2r-1)\Gamma(r)^{2}}. \tag{3.2d}\]
By Euler's reflection formula, for complex \(r\not\in\mathbb{Z}\) and for real \(\ell\to\infty\) we have
\[\Gamma(r+\ell+1)\Gamma(r-\ell+1)\frac{\sin\pi(\ell-r)}{\pi}=\frac{\Gamma(r+\ell+1)}{\Gamma(\ell-r)}\sim\ell^{2r+1}.\]
Therefore the following series
\[h(r):=\sum_{\ell\geq 2}(-1)^{\ell}\ell^{2}\frac{\log\ell-H_{\ell}+\gamma+\frac{1}{2\ell}-\frac{1}{12\ell^{2}}}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}\]
converges for \(\Re r>-1\), and satisfies \(h(1)=h(0)=0\), with \(h^{\prime}(0)\) as given in the theorem. Hence
\[g(z)=h(z)+\frac{1-\gamma}{\Gamma(z)\Gamma(z+2)}+\frac{1}{4(z-1)(2z-1)\Gamma(z)^{2}}-\frac{(z-1)(16z+1)}{24z(2z-1)\Gamma(z)\Gamma(z+2)}\]
is the sought extension, meromorphic for \(\Re z>-1\). Now
\[z_{n}=\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}\log h_{u}=2\sum_{r\geq 2}(-1)^{r}g(r)\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}p(h_{u},r-1)=2\sum_{r\geq 2}(-1)^{r}g(r)K_{r-1}\binom{n}{r}=(-1)^{n}\frac{n!}{2\pi\mathrm{i}}\oint_{C}\frac{\phi(z)}{z(z-1)(z-2)\cdots(z-n)}dz,\]
where
\[\phi(z)=2g(z)\frac{\Gamma(2z)\Gamma(2z-1)}{\Gamma(z+1)^{2}\Gamma(z)}.\]
Here \(C\) is a contour that encircles the integers \(2,\ldots,n\), but neither any other integers, nor poles of \(\phi\). By computing (leading asymptotic terms of) residues of \(\Phi_{n}(z):=\phi(z)\frac{n!\Gamma(-z)}{\Gamma(n+1-z)}\) at \(1\), \(\frac{1}{2}\), and \(0\), we obtain
\[z_{n}=\frac{n\log n-n}{2}-\frac{1}{4}+\frac{4}{9\pi^{2}}\Big{(}24\gamma+7-18\pi h(\tfrac{1}{2})\Big{)}n^{\frac{1}{2}}-\frac{\log n}{48}-\frac{13\gamma}{24}+\frac{1}{8}+\frac{h^{\prime}(0)}{2}+\mathcal{O}(n^{-\frac{1}{2}})+\frac{1}{2\pi\mathrm{i}}\oint_{C^{\prime}}\Phi_{n}(z)dz,\]
where \(C^{\prime}\) encircles the interval \([0,n]\), but no poles of the integrand outside that interval. As shown in the appendix, the latter integral is \(o(1)\), thus we arrive at
\[-\operatorname{I\!E}\frac{\log\operatorname{I\!P}l^{(n)}(\lambda)}{\sqrt{n}}=\operatorname{I\!E}\frac{2Z_{n}-\log n!}{\sqrt{n}}=H-\frac{13\log n}{24\sqrt{n}}-\left(\frac{13\gamma}{12}+\log\sqrt{2\pi}+\frac{1}{4}-h^{\prime}(0)\right)\frac{1}{\sqrt{n}}+o\big{(}n^{-\frac{1}{2}}\big{)}.\]
Finally, for the evaluation of \(h(\tfrac{1}{2})\), we use \(\Gamma(\tfrac{1}{2}+\ell+1)\Gamma(\tfrac{1}{2}-\ell+1)=(-1)^{\ell+1}\frac{\pi}{4}(4\ell^{2}-1)\), as well as \(\sum_{\ell\geq 2}(4\ell^{2}-1)^{-1}=\frac{1}{6}\), leading to
\[h(\tfrac{1}{2})=-\frac{4}{\pi}\sum_{\ell\geq 2}\frac{\ell^{2}}{4\ell^{2}-1}\Big{(}\log\ell-H_{\ell}+\gamma+\frac{1}{2\ell}\Big{)}+\frac{1}{18\pi},\]
which completes the proof.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline \(n\) & \(\frac{1}{\sqrt{n}}(2z_{n}-\log n!)\) & \(n\) & \(\frac{1}{\sqrt{n}}(2z_{n}-\log n!)\) & \(n\) & \(\frac{1}{\sqrt{n}}(2z_{n}-\log n!)\) \\
\hline 2 & 0.4901290717 & 7 & 0.8208116414 & 128 & 1.4880650932 \\
3 & 0.5008878635 & 8 & 0.8690239552 & 256 & 1.5781760349 \\
4 & 0.649543169 & 16 & 1.0657023619 & 512 & 1.6489336120 \\
5 & 0.7297992837 & 32 & 1.2347905493 & 1024 & 1.7039138626 \\
6 & 0.7726513179 & 64 & 1.3748129422 & 2048 & 1.7462734777 \\
\hline
\end{tabular}
\end{table}
Table 2. Some terms of the sequence approaching \(H\).

_Remark 3.2_.: Note that the term \(2\sum_{r\geq 2}(-1)^{r}g(r)K_{r-1}\binom{n}{r}\) can be used to compute \(z_{n}\) for values of \(n\) so large that naively generating all partitions \(\lambda\vdash n\) is not an option. Of course, care has to be taken, since cancellations will occur in numerical computations because of the alternating signs of the summands.
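For small \(n\), such a computation is unproblematic in floating point arithmetic, as in the following Python sketch (our addition); for large \(n\), the cancellations just mentioned would call for high-precision arithmetic, or for an exact treatment of the rational coefficients of the numbers \(\log\ell\).

```python
from math import comb, factorial, log

def g(r):
    """g(r) = sum_{l=2}^{r} (-1)^l l^2 log(l) / ((r+l)! (r-l)!)."""
    return sum((-1) ** l * l * l * log(l) / (factorial(r + l) * factorial(r - l))
               for l in range(2, r + 1))

def z(n):
    """z_n = 2 sum_{r>=2} (-1)^r g(r) K_{r-1} binom(n, r); terms vanish for r > n."""
    total = 0.0
    for r in range(2, n + 1):
        K = (factorial(2 * r - 2) * factorial(2 * r - 1)
             // (factorial(r) ** 2 * factorial(r - 1)))   # K_{r-1}
        total += 2 * (-1) ** r * g(r) * K * comb(n, r)
    return total

print(z(4), z(5))   # 2.238570083... and 3.209686276..., matching the values above
```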
Table 2 shows that \(\frac{1}{\sqrt{n}}(2z_{n}-\log n!)\) is slowly approaching \(H\) from below, with the values obtained in [29, Table 1] from simulations fitting neatly into this pattern. The convergence rate is in good accordance with the error term given in Theorem 3.1. Observe that \(\frac{13\gamma}{12}+\log\sqrt{2\pi}+\frac{1}{4}-h^{\prime}(0)\approx 1.792693\). For \(n=2048\) we then get \(H-(\frac{13}{24}\log 2048+1.792693)/\sqrt{2048}\approx 1.746154\), which matches the table entry fairly well.

## 4. The expected side length of the Durfee square

Here we consider the side length of the Durfee square of a partition \(\lambda\),
\[D(\lambda):=\max\{i:\lambda_{i}\geq i\},\]
and we denote the restriction of \(D(\lambda)\) to \(\lambda\vdash n\) distributed according to Plancherel measure by \(D_{n}\). With respect to the uniform measure, where all partitions \(\lambda\vdash n\) are equally likely, the expectation and the most likely value of \(D(\lambda)\) have been studied in [5, 6, 18]. Regarding Plancherel measure, it has been known since the days of the limit shape theorem (see Theorem 5.1 in the next section) that \(\frac{1}{\sqrt{n}}D_{n}\to\frac{2}{\pi}\) in probability. Furthermore, we may deduce from [2, Thm. 3.6] convergence in distribution of \(\frac{\pi}{\sqrt{\log n}}\left(D_{n}-\frac{2}{\pi}\sqrt{n}\right)\) to a standard normal random variable. Should that convergence in distribution be accompanied by convergence of second moments, \(\operatorname{I\!E}D_{n}=\frac{2}{\pi}\sqrt{n}+\mathcal{O}\left(\sqrt{\log n}\right)\) would follow. We are not aware of a proof of such a result, let alone of any results in the literature regarding fine asymptotics of \(\operatorname{I\!E}D_{n}\).

**Theorem 4.1**.: _Let \(d_{n}:=\operatorname{I\!E}D_{n}\). Then, as \(n\to\infty\), we have_
\[d_{n}=\frac{2}{\pi}\sqrt{n}+\left(\frac{3}{16\pi}-\frac{e^{2}}{8\pi}\sin\left(4\sqrt{n}\right)\right)\frac{1}{\sqrt{n}}+\mathcal{O}\big{(}n^{-1}\big{)}.\]

Proof.: In terms of contents \(c_{u}\) of a Young diagram \(\lambda\), we have
\[D(\lambda)=\sum_{u\in\lambda}\delta_{0,c_{u}}.\]
Define polynomials in terms of the polynomials \(p(x,r)\) given in (2.1) via
\[q(x,r):=\prod_{i=0}^{r-1}(x^{2}-i^{2})=\begin{cases}x^{2}p(x,r-1),&r\geq 1,\\ 1,&r=0.\end{cases}\]
These also allow for a representation of the Kronecker delta, similar to (3.1),
\[\delta_{\ell,n}=\sum_{r=\ell}^{\infty}(-1)^{\ell+r}\frac{2-\delta_{0,n}}{(r+\ell)!(r-\ell)!}q(n,r), \tag{4.1}\]
now valid for non-negative integers \(\ell,n\). By [26, eq. (7)], see also [8, Thm. A.1], we have
\[\frac{1}{n!}\sum_{\lambda\vdash n}f_{\lambda}^{2}\sum_{u\in\lambda}q(c_{u},r)=\frac{(2r)!}{(r+1)!}\binom{n}{r+1}. \tag{4.2}\]
This leads to the representation
We find \[D(z)=\frac{2}{\pi}\sqrt{z}-\frac{1}{16\pi\sqrt{z}}-\frac{\sin(4\sqrt{z})}{8 \pi\sqrt{z}}+\frac{3\cos(4\sqrt{z})}{64\pi z}+\mathcal{O}\big{(}z^{-\frac{3}{2 }}\big{)},\] for \(|z|\to\infty\), \(|\arg z|\leq\pi-\delta\) with \(\delta>0\). A uniform bound is furnished by \(|D(z)|\leq\cosh(4\sqrt{|z|})\). Evaluating now \(d_{n}=\frac{n!}{2\pi i}\oint_{C}z^{-n-1}e^{z}D(z)dz\), with contour \(C:=\{z\in\mathbb{C}:|z|=n\}\), observing that there is an approximate saddle point at \(z=n\), finishes the proof. _Remark 4.2_.: The sequence starts \(\big{(}d_{n}\big{)}_{n=1}^{10}\!=\!(1,1,1,\frac{7}{6},\frac{17}{12},\frac{33} {20},\frac{109}{60},\frac{3217}{1680},\frac{39703}{20160},\frac{364859}{181440 })\), and it again satisfies a linear recurrence relation, \[d_{n+3}=\frac{3n^{2}+9n+8}{(n+2)(n+3)}d_{n+2}-\frac{3n+1}{n+3}d_{n+1}+\frac{n +1}{n+3}d_{n}, \tag{4.4}\] with initial conditions \[d_{0}=0,\quad d_{1}=1,\quad d_{2}=1,\] readily obtained from (4.3) using _gfun_. ## 5. Expected fluctuations around the limit shape curve Let us introduce the _limit shape curve_ \[\Omega(u)=\begin{cases}\frac{2}{\pi}\left(u\arcsin\frac{u}{\sqrt{2}}+\sqrt{2-u^{ 2}}\right),&\text{ if }|u|\leq\sqrt{2},\\ |u|,&\text{ if }|u|>\sqrt{2}.\end{cases} \tag{5.1}\] The lower right boundary of the Young diagram of a partition \(\lambda\vdash n\), scaled to have unit area, rotated together with parts of positive \(x\)-axis and negative \(y\)-axis by \(135^{\circ}\), gives rise to a piecewise linear function \(\psi_{\lambda}\), also defined on \(\mathbb{R}\). When \(\lambda\) is distributed according to Plancherel measure, the random functions \(\psi_{\lambda}\) approach the limit shape curve, as \(n\to\infty\), in a sense that is made precise in the following result by Vershik and Kerov [27] and Logan and Shepp [14], which we present following closely [23, Thm. 1.22]. **Theorem 5.1**.: (Limit shape theorem for Plancherel-random partitions) _For all \(\varepsilon>0\), we have \(\mathbb{P}(\|\psi_{\lambda}-\Omega\|_{\infty}>\varepsilon)\to 0\) as \(n\to\infty\), i.e., the random functions \(\psi_{\lambda}\) converge to \(\Omega\) in probability in the norm \(\|\cdot\|_{\infty}\)._ A discretized version of \(\psi_{\lambda}\), defined on the set \(\big{\{}\frac{a}{\sqrt{2n}}:a\in\mathbb{Z}\big{\}}\), can be expressed in terms of contents \(c_{u}\) of \(\lambda\) via \(\Psi_{\lambda}(a):=\sum_{u\in\lambda}\delta_{-a,c_{u}}\). Indeed, the set \(\Big{\{}\Big{(}\frac{a}{\sqrt{2n}},\frac{2}{\sqrt{2n}}\big{(}\Psi_{\lambda}(a )+\frac{|a|}{2}\big{)}\Big{)}:a\in\mathbb{Z}\Big{\}}\) is a subset of the graph of \(\psi_{\lambda}\) containing, among others, all the points where the slope of \(\psi_{\lambda}\) changes from \(1\) to \(-1\) or back. For example, if \(\lambda=(5,3,1,1)\), then \((\Psi_{\lambda}(a))_{a=-4}^{4}=(1,1,1,2,2,1,1,1,0)\). Define now a related function, \(\Phi_{\lambda}(a):=\frac{1}{2}\big{(}\Psi_{\lambda}(a)+\Psi_{\lambda}(-a) \big{)}\), i.e., \[\Phi_{\lambda}(a)=\begin{cases}D(\lambda),&a=0,\\ \frac{1}{2}\sum_{u\in\lambda}\delta_{|a|,|c_{u}|},&\text{else}.\end{cases}\] This symmetrised function is used because it can be expressed in terms of Kronecker deltas restricted to pairs of nonnegative integers, thus allowing to use the representation (4.1). 
Next, let \(\omega_{a,n}:=\operatorname{I\!E}\Phi_{\lambda}(a)\), with \(\lambda\vdash n\) distributed according to the Plancherel measure, and define a sequence of functions \[\tilde{\Omega}_{n}(u):=\sqrt{\frac{2}{n}}\left(\omega_{\lfloor\sqrt{2n}u \rfloor,n}+\frac{1}{2}|\lfloor\sqrt{2n}u\rfloor|\right),\] that one would expect to converge to \(\Omega(u)\), although such convergence is not implied by Theorem 5.1. By [2, Thm. 3.6] we have convergence in distribution of \(\frac{\pi\sqrt{n}}{\sqrt{2\log n}}\left[\sqrt{\frac{2}{n}}\Big{(}\Phi_{ \lambda}(\lfloor\sqrt{2n}u\rfloor)+\frac{1}{2}|\lfloor\sqrt{2n}u\rfloor| \Big{)}-\Omega(u)\right]\) to a standard normal random variable in case that \(|u|<\sqrt{2}\). Should there also be convergence of second moments, \(\tilde{\Omega}_{n}(u)=\Omega(u)+\mathcal{O}\left(\sqrt{\frac{\log n}{n}}\right)\) would follow for \(|u|<\sqrt{2}\). See Figure 4 for the limit shape curve, and, scaled to unit area, a superimposed partition of \(10\), and values \(\tilde{\Omega}_{10}(u)\) for \(u\in\{\frac{a}{\sqrt{20}}:-6\leq a\leq 6\}\). There is a seeming coincidence on the \(y\)-axes, yet \(\Omega(0)=\frac{2\sqrt{2}}{\pi}\approx.900316\), \(\frac{\omega_{0,10}}{\sqrt{5}}=\frac{364859}{181440\sqrt{5}}\approx.899305\), and the ordinate of the upper corner of the rotated Young diagram, \(\frac{2}{\sqrt{5}}\approx.894427\), are all different. An alternating sum representation of \(\omega_{a,n}\), building upon (4.1), is the following \[\omega_{a,n}=\sum_{r\geq a}\frac{(-1)^{r+a+1}(2r-2)!}{(r-1+a)!(r-1-a)!r!}\binom{ n}{r}, \tag{5.2}\] which again gives rise to a linear recurrence relation (obtained using _gfun_) \[(n+4)(n+a+3)(n-a+3)\omega_{a,n+4}=[ 4n^{3}+32n^{2}+(86-2a^{2})n+78-7a^{2}]\omega_{a,n+3}\] \[-(n+3)(6n^{2}+22n+20-a^{2})\omega_{a,n+2}\] \[+(n+1)(n+2)(n+3)(4\omega_{a,n+1}-\omega_{a,n}),\] holding for \(n\geq a-2\), with initial conditions \[\omega_{a,n}=0,\text{ for }\min(0,a-2)\leq n\leq a,\quad\omega_{a,a+1}=\frac{1}{( a+1)!}.\] Note that there is a common factor \((n+1)\) in the recurrence relation, when \(a=2\). Note also that setting \(a=0\) yields a recurrence relation with both order and degree one larger than the one given in (4.4). As was done for \(d_{n}\), asymptotics via Poisson generating functions (which can again be expressed in terms of Bessel functions) can be obtained also for \(\omega_{a,n}\) for fixed integer \(a>0\): \[\omega_{a,n}+\frac{a}{2}=\frac{2}{\pi}\sqrt{n}+\left(\frac{4a^{2}+3}{16\pi}-( -1)^{a}\frac{e^{2}}{8\pi}\sin\big{(}4\sqrt{n}\big{)}\right)\frac{1}{\sqrt{n}}+ \mathcal{O}\big{(}n^{-1}\big{)}. \tag{5.3}\] In order to obtain asymptotics of \(\omega_{a,n}\) for \(n\) and \(a\) simultaneously approaching \(\infty\), which would be needed for asymptotics of \(\tilde{\Omega}_{n}(u)\), one could use the parametrization \(n=2\kappa^{2}\in\mathbb{N},a=\lfloor 2\kappa u\rfloor\), and consider \[\frac{\omega_{\lfloor 2\kappa u\rfloor,2\kappa^{2}}}{\kappa}=\frac{1}{2\pi \mathrm{i}}\oint_{C^{\prime\prime}}\frac{1}{\kappa}\frac{(-1)^{\lfloor 2 \kappa u\rfloor+1}\Gamma(-z)\Gamma(2z-1)}{\Gamma(z+\lfloor 2\kappa u\rfloor) \Gamma(z-\lfloor 2\kappa u\rfloor)\Gamma(z+1)}\frac{(2\kappa^{2})\Gamma(-z)}{ \Gamma(2\kappa^{2}+1-z)}dz,\] implied by (5.2), where \(C^{\prime\prime}\) is a contour that encircles integers \(1,\ldots,2\kappa^{2}\), but neither any other integers, nor poles of the integrand. Outside \(C^{\prime\prime}\), the integrand has poles Figure 5. 
In order to obtain asymptotics of \(\omega_{a,n}\) for \(n\) and \(a\) simultaneously approaching \(\infty\), which would be needed for asymptotics of \(\tilde{\Omega}_{n}(u)\), one could use the parametrization \(n=2\kappa^{2}\in\mathbb{N}\), \(a=\lfloor 2\kappa u\rfloor\), and consider
\[\frac{\omega_{\lfloor 2\kappa u\rfloor,2\kappa^{2}}}{\kappa}=\frac{1}{2\pi\mathrm{i}}\oint_{C^{\prime\prime}}\frac{1}{\kappa}\frac{(-1)^{\lfloor 2\kappa u\rfloor+1}\Gamma(-z)\Gamma(2z-1)}{\Gamma(z+\lfloor 2\kappa u\rfloor)\Gamma(z-\lfloor 2\kappa u\rfloor)\Gamma(z+1)}\frac{(2\kappa^{2})!\,\Gamma(-z)}{\Gamma(2\kappa^{2}+1-z)}dz,\]
implied by (5.2), where \(C^{\prime\prime}\) is a contour that encircles the integers \(1,\ldots,2\kappa^{2}\), but neither any other integers, nor poles of the integrand. Outside \(C^{\prime\prime}\), the integrand has poles at \(\frac{1}{2}\), at \(0\), and at all negative half-integers. For fixed \(u\) it turns out that each residue contributes to the leading (constant) term of the asymptotics in the limit \(\kappa\to\infty\), with the sum of those contributions converging, but for fixed \(\kappa\) the sum of residues does not converge. Balancing those two limiting processes (taking more and more residues into account, letting \(\kappa\to\infty\)) and at the same time bounding the integral over a sequence of correspondingly deformed contours appears to be intricate, so unfortunately we have not been able to prove \(\tilde{\Omega}_{n}(u)\to\Omega(u)\) for \(u\neq 0\).

Using holonomicity of \((\omega_{a,n})_{n\geq 0}\) to generate many terms of that sequence for many values of \(a\), we obtain the plots in Figures 5 and 6.

Figure 5. Plots of \(\tilde{\Omega}_{n}(u)-\Omega(u)\) for roughly quadrupling values of \(n\in\{1573,6230,24798,98943\}\), from top left to bottom right.

The values for \(n\) in Figure 5 and in the second plot in Figure 6 have been chosen to satisfy \(\sin(4\sqrt{n})\approx 1\), in order to give maximal weight to the term \((-1)^{a}\) present in (5.3) and thus ensure better comparability of the plots in the vicinity of \(0\). The value of \(n\) in the first plot of Figure 6 satisfies \(\sin(4\sqrt{n})\approx 0\). We conclude this section with some (non-rigorous) observations based on these plots. To enforce "smooth" dependence of \(\omega_{a,n}\) on \(a\), one would, in the light of (5.3), restrict to odd (or to even) \(a\). However, this would only work for \(0\leq a\leq\alpha_{n}\) with \(\alpha_{n}=o(\sqrt{n})\). The location of the first "peak" to the right of \(0\) seems to suggest that \(\alpha_{n}=\Theta\big{(}n^{\frac{1}{4}}\big{)}\) may hold. For larger \(a\) it is no longer useful to distinguish between even and odd; instead one should consider \(\omega_{a,n}\) evaluated at \(a\) belonging to other arithmetic progressions: near \(\frac{a}{\sqrt{2n}}=\sqrt{2}\cos\frac{\pi}{3}\approx 0.707\) the way to go would be to consider \(\big{(}\omega_{a+3k,n}\big{)}_{k}\), whereas near \(\frac{a}{\sqrt{2n}}=\sqrt{2}\cos\frac{\pi}{4}=1\) it would be \(\big{(}\omega_{a+4k,n}\big{)}_{k}\). Every fifth term should be taken near \(\sqrt{2}\cos\frac{\pi}{5}\approx 1.144\) and \(\sqrt{2}\cos\frac{2\pi}{5}\approx 0.437\). We expect this pattern to continue, with regions of smoothness near \(\sqrt{2}\cos\frac{\ell\pi}{m}\) for \(1\leq\ell<\frac{m}{2}\) and \(\ell,m\) coprime. For larger \(m\) these regions will become noticeable only if \(n\) gets large enough, and those regions will shrink as \(n\) increases further, making room for yet other regions to pop up.

## 6. A heuristic upper bound for the variance of the number of bumping steps in the RSK algorithm

Let \(L_{n}:=X_{n}+Y_{n}\), \(\ell_{n}:=\mathrm{I\!E}\left(X_{n}+Y_{n}\right)\), and \(v_{n}:=\mathrm{Var}\left(X_{n}+Y_{n}\right)\). We now give a heuristic derivation of \(\ell_{n}\), and an upper bound for \(v_{n}\), based on [13]. Let \(\Omega(x)\) be the function defined in (5.1), describing the limit shape of normalized Young diagrams with respect to Plancherel measure. Denote by \(s(x):=\frac{1}{\pi}\sqrt{2-x^{2}}\) the density of the semicircle distribution with support \([-\sqrt{2},\sqrt{2}]\).
As shown in [13], this is also the limiting density of the random abscissa of a newly inserted box into a scaled and rotated Young diagram that closely resembles the limit shape curve, when new insertions are made according to the Plancherel growth process, which ensures that at each stage of the process the Young diagram is distributed according to Plancherel measure, see also [23, sec. 1.19]. This leads to
\[\ell_{n}-\ell_{n-1}\sim\sqrt{2n}\int_{-\sqrt{2}}^{\sqrt{2}}\Omega(x)s(x)dx=\frac{128}{9\pi^{2}}\sqrt{n},\]
and thus \(\ell_{n}\sim\frac{256}{27\pi^{2}}n^{\frac{3}{2}}\). Moreover, assuming independence of \(L_{n-1}\) and \(L_{n}-L_{n-1}\),
\[v_{n}-v_{n-1}\sim 2n\int_{-\sqrt{2}}^{\sqrt{2}}\left(\Omega(x)-\frac{64\sqrt{2}}{9\pi^{2}}\right)^{2}s(x)dx=\frac{54\pi^{4}+2835\pi^{2}-32768}{162\pi^{4}}n,\]
and thus
\[v_{n}\sim\frac{54\pi^{4}+2835\pi^{2}-32768}{324\pi^{4}}n^{2}\approx 0.01496867061\,n^{2}.\]
Numerically we have e.g. \(\frac{v_{50}}{50^{2}}\approx 0.01216526413\). Indeed, \(L_{n-1}\) and \(L_{n}-L_{n-1}\) seem to be negatively correlated. The sequence of covariances \(\bigl{(}\operatorname{Cov}\left(L_{n}-L_{n-1},L_{n-1}\right)\bigr{)}_{n\geq 2}\) starts \((0,0,-\frac{1}{9},-\frac{17}{180},-\frac{1}{15},-\frac{61}{450},-\frac{863}{5600},\ldots)\), staying negative up to \(n=40\) with roughly linear growth, see Figure 7. So it seems that, in the light of Lemma 1.2, one can safely guess that \(\operatorname{Var}Y_{n}=\Theta(n^{2})\) holds for the number \(Y_{n}\) of bumping steps. It would be desirable to have a proof of that, and also to know at least the leading asymptotic term of \(\operatorname{Var}Y_{n}\).

## 7. Conclusion

In this paper we have obtained asymptotics of expectations of certain statistics of Plancherel distributed Young diagrams. That these statistics could be expressed in terms of hook lengths and contents of the boxes of such diagrams was essential, as it allowed us to invoke polynomiality results for Plancherel averages, leading to representations of expectations as binomial convolutions that make for easier asymptotic treatment. We hope that this approach will help to analyse further statistics of Plancherel distributed Young diagrams. Polynomiality results have now also been found for measures different from Plancherel (such as the Jack deformation of Plancherel measure, see [20]), and for subclasses of Plancherel distributed Young diagrams, such as strict partitions (see [16, 11, 17]). In case appropriate substitutes for (2.3) or (4.2) are at hand, it is reasonable to believe that certain statistics in these settings could also be analysed along the lines of this paper.

## 8. Appendix

### Proof of equation (2.2):

We use \(p(n,r)=\frac{(2r+1)!}{n}\binom{n+r}{2r+1}\) and rewrite (2.2) as
\[n^{2}=\sum_{r\geq 0}\frac{(-1)^{r}}{1-2r}\binom{2r}{r}\binom{n+r}{2r+1}=:S_{n}.\]
Denoting by \(\Delta\) the forward difference operator, we will show that \(\Delta^{3}S\) is the zero sequence, which together with \(S_{1}=1,\Delta S_{1}=3,\Delta^{2}S_{1}=2\) yields \(S_{n}=n^{2}\) for \(n\in\mathbb{N}\). Now
\[\Delta^{3}S_{n} =\sum_{r\geq 1}\frac{(-1)^{r}}{1-2r}\binom{2r}{r}\binom{n+r}{2r-2}=\sum_{r\geq 1}(-1)^{r+1}\frac{2}{r}\binom{2r-2}{r-1}\binom{n+r}{2r-2}\]
\[=2\sum_{r\geq 1}\frac{(-1)^{r+1}}{n+2}\binom{n+r}{r-1}\binom{n+2}{r}=\frac{2}{n+2}\sum_{r\geq 0}(-1)^{r}\binom{n+r+1}{n+1}\binom{n+2}{r+1}=0,\]
where the last equality follows from [10, eq (5.24)].
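The identity (2.2), in the rewritten form \(S_{n}=n^{2}\), can also be confirmed in exact arithmetic for small \(n\); a minimal Python check (our addition, not part of the paper):

```python
from fractions import Fraction
from math import comb

def S(n: int) -> Fraction:
    # binom(n + r, 2r + 1) vanishes for r >= n, so the sum is finite
    return sum(Fraction((-1) ** r, 1 - 2 * r) * comb(2 * r, r) * comb(n + r, 2 * r + 1)
               for r in range(n))

assert all(S(n) == n * n for n in range(1, 40))
```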
### Proof of equation (3.1): The equation is easily checked for \(\ell>n\), since then also \(r\geq n\) and thus \(p(n,r)=0\). For \(1\leq\ell\leq n\) we have \[\sum_{r=\ell-1}^{n-1} \frac{2\ell^{2}(-1)^{r+\ell+1}}{(r+\ell+1)!(r-\ell+1)!}\frac{(2r+ 1)!}{n}\binom{n+r}{2r+1}\] \[=\frac{2\ell^{2}(-1)^{\ell+1}}{n(n+\ell)}\sum_{r=\ell-1}^{n-1}(- 1)^{r}\binom{n+r}{n+\ell-1}\binom{n+\ell}{r+\ell+1}\] \[=\frac{2\ell^{2}(-1)^{\ell+1}}{n(n+\ell)}(-1)^{n+1}\delta_{\ell, n}=\delta_{\ell,n},\] treating the case \(\ell=n\) directly, and using [10, eq (5.24)] again for \(n>\ell\) ### Proof of equation (4.1): We use \(q(n,r)=\frac{n(2r)!}{n+r}\binom{n+r}{2r}\) for \(n>0\), and \(q(0,0)=1\). The equation is easily checked for \(n=0\), and for \(\ell>n\), since then also \(r>n\) and thus \(q(n,r)=0\). For \(n>0\), \(0\leq\ell\leq n\) we have \[\sum_{r=\ell}^{n} \frac{2(-1)^{r+\ell}}{(r+\ell)!(r-\ell)!}\frac{n(2r)!}{n+r}\binom{ n+r}{2r}=(-1)^{\ell}\frac{2n}{n+\ell}\sum_{r=\ell}^{n}(-1)^{r}\binom{n+r-1}{n+ \ell-1}\binom{n+\ell}{r+\ell}\] \[=\frac{2n(-1)^{\ell}}{n+\ell}(-1)^{n}\delta_{\ell,n}=\delta_{\ell,n},\] treating the case \(\ell=n\) directly, and using [10, eq (5.24)] again for \(n>\ell\). ### Proof of equation (3.2d): Using \(H_{\ell}-1=\sum_{k=2}^{\ell}\frac{1}{k}\), and interchanging summation, we obtain, using [10, eq (5.16)] at several places, \[\sum_{\ell=2}^{r}\frac{(-1)^{\ell}\ell^{2}(H_{\ell}-1)}{(r+\ell )!(r-\ell)!} =\frac{1}{(2r)!}\sum_{k=2}^{r}\frac{1}{k}\sum_{\ell=k}^{r}(-1)^{ \ell}[r^{2}-(r+\ell)(r-\ell)]\binom{2r}{r-\ell}\] \[=\frac{1}{(2r)!}\sum_{k=2}^{r}\frac{1}{k}\sum_{\ell=k}^{r}(-1)^{ \ell}\left[r^{2}\binom{2r}{r-\ell}-2r(2r-1)\binom{2r-2}{r-\ell-1}\right]\] \[=\frac{1}{(2r)!}\sum_{k=2}^{r}\frac{1}{k}(-1)^{k}\left[r^{2} \binom{2r-1}{r-k}-2r(2r-1)\binom{2r-3}{r-k-1}\right]\] \[=\frac{1}{(2r)!}\sum_{k=2}^{r}\frac{(-1)^{k}r}{r-1}(k-1)\binom{2r -1}{r-k}\] \[=\frac{1}{(2r)!}\sum_{k=2}^{r}\frac{(-1)^{k}r}{r-1}\left[(r-1) \binom{2r-1}{r-k}-(2r-1)\binom{2r-2}{r-k-1}\right]\] \[=\frac{1}{(2r)!}\frac{r}{r-1}\left[(r-1)\binom{2r-2}{r}-(2r-1) \binom{2r-3}{r}\right]\] \[=\frac{1}{(2r)!}\frac{r}{r-1}\frac{r}{r-2}\binom{2r-3}{r}=\frac{ 1}{4(r-1)(2r-1)(r-1)!^{2}}.\qed\] Saddle point evaluation of the integral \(I_{n}:=\frac{1}{2\pi\mathrm{i}}\oint_{C^{\prime}}f(z)\frac{n!\Gamma(-z)}{\Gamma (n+1-z)}dz\) This integral appears in the proof of Theorem 2.1, see section 2 for relevant notation. Putting \(n=m^{2}\), the integrand may be rewritten as \[G(z):=\frac{\pi\Gamma(m^{2}+1)\Gamma^{2}(2z-1)}{\sin(\pi z)\Gamma(m^{2}+1-z)(2 z-3)\Gamma^{3}(z+1)\Gamma^{3}(z)}.\] Denoting by \(\psi\) the digamma function, we have \[\frac{G^{\prime}(z)}{G(z)} =\psi(m^{2}+1-z)+4\psi(2z-1)-3\psi(z+1)-3\psi(z)-\pi\cot\pi z- \frac{2}{2z-3}\] \[\sim\log\frac{(z-m^{2}-1)(2z-1)^{4}}{(z+1)^{3}z^{3}}-\frac{1}{z- \frac{1}{2}}+\frac{3}{2z+2}+\frac{3}{2z}-\frac{1}{z-\frac{3}{2}},\] with error terms \(\mathcal{O}(m^{-2})+\mathcal{O}(z^{-2})\), holding for \(m\to+\infty,|z|\to\infty\), subject to \(z=o(m^{2})\) and \(\delta<|\arg z|\leq\pi-\delta\) for some \(\delta>0\). Two approximate saddle points of \(G(z)\) are \(\zeta:=6+4m\mathrm{i}\) and \(\bar{\zeta}=6-4m\mathrm{i}\). 
Indeed, \(\frac{d}{dz}\log G(\zeta)=\frac{G^{\prime}(\zeta)}{G(\zeta)}=\mathcal{O}( \frac{1}{m^{2}})\), and \(\frac{d^{2}}{dz^{2}}\log G(\zeta)=\frac{\mathrm{i}}{2m}+\mathcal{O}(\frac{1}{m ^{2}})\), which suggests a contour directed north-west in the point \(\zeta\): Let \(z=\zeta+e^{\mathrm{i}\frac{\pi}{4}}u\) and observe \[G(z)=G(\zeta)\exp\left(\mathcal{O}(\tfrac{u}{m^{2}})+\tfrac{\mathrm{i}}{4m}(e^{ \mathrm{i}\frac{\pi}{4}}u)^{2}+\mathcal{O}(\tfrac{u^{2}}{m^{2}})\right)=G( \zeta)e^{-\frac{u^{2}}{4m}}\left(1+\mathcal{O}(\tfrac{u+u^{2}}{m^{2}})\right).\] Also note that a cumbersome evaluation results in \[G(\zeta)=\frac{-\mathrm{i}e^{8\mathrm{i}m+8}}{2^{13}\pi m^{4}}\left(1+ \mathcal{O}\left(\frac{1}{m}\right)\right).\] Define the counter-clockwise oriented contour \(C^{\prime}\) as the polygon connecting the points \[z_{0}:=-\tfrac{5}{2}+\varepsilon, z_{1}:=\zeta-m-m\mathrm{i}, z_{2}:=\zeta+m+m\mathrm{i}, z_{3}:=n+1+m\mathrm{i},\] \[z_{6}:=\bar{\zeta}-m+m\mathrm{i}, z_{5}:=\bar{\zeta}+m-m\mathrm{i}, z_{4}:=n+1-m\mathrm{i},\] with \(\varepsilon>0\) small, and with segment \(c_{i}\) connecting \(z_{i}\) and \(z_{i+1}\) for \(0\leq i\leq 5\), and \(c_{6}\) connecting \(z_{6}\) and \(z_{0}\). It turns out that the integrals along \(c_{0}\) and \(c_{6}\) are of order \(\mathcal{O}(m^{-5+2\varepsilon})\), and \(c_{i}\), for \(2\leq i\leq 4\), make even smaller contributions. Moreover, the combined contribution of \(c_{1}\) and \(c_{5}\) is \(-2\sqrt{\tfrac{m}{\pi}}\Im\big{(}e^{\mathrm{i}\frac{\pi}{4}}G(\zeta)(1+ \mathcal{O}(\tfrac{1}{m}))\big{)}\), which, up to error terms of order \(\mathcal{O}(m^{-\frac{9}{2}})\), simplifies to \[-2\sqrt{\frac{m}{\pi}}\Im(e^{\mathrm{i}\frac{\pi}{4}}G(\zeta))=\frac{e^{8}}{2^ {12}\pi^{\frac{3}{2}}m^{\frac{7}{2}}}\Im(\mathrm{i}e^{\frac{\mathrm{i}\pi}{4}+ 8m})=\frac{e^{8}}{2^{12}\pi^{\frac{3}{2}}m^{\frac{7}{2}}}\cos\left(\frac{\pi}{ 4}+8m\right).\] ### Bounding the integral \(J_{n}:=\frac{1}{2\pi\mathrm{i}}\oint_{C^{\prime}}\Phi_{n}(z)\,dz\) This integral appears in the proof of Theorem 3.1, see section 3 for relevant notation. Let \(C^{\prime}\) be the boundary of the rectangle with corners \(-\frac{1}{2}\pm\mathrm{i}4em\), \(n+\frac{1}{2}\pm\mathrm{i}4em\), and \(m=\sqrt{n}\). Observe \((-1)^{\ell}\ell^{2}\big{(}\log\ell-H_{\ell}+\gamma+\frac{1}{2\ell}-\frac{1}{1 2\ell^{2}}\big{)}=\mathcal{O}(\ell^{-2})\), and, abbreviating \(\rho=r+\frac{1}{2}\), \[\left|\frac{\Gamma^{2}(r+1)}{\Gamma(r+\ell+1)\Gamma(r-\ell+1)}\right|=\left| \frac{r(r-1)\cdots(r-\ell+1)}{(r+1)\cdots(r+\ell)}\right|=\left|\prod_{k=1}^{ \ell}\frac{\rho-(k-\frac{1}{2})}{\rho+(k-\frac{1}{2})}\right|\leq 1\] for \(\Re\rho\geq 0\), i.e., for \(\Re r\geq-\frac{1}{2}\), therefore \(\Gamma^{2}(r+1)h(r)=\mathcal{O}(1)\) for \(\Re r\geq-\frac{1}{2}\), which leads to \(\Gamma^{2}(z+1)g(z)=\mathcal{O}(1)\) for \(\Re z\geq-\frac{1}{2}\), \(|z-w|\geq\frac{1}{2}\) for \(w\in\{0,\frac{1}{2},1\}\). Hence \[\Phi_{n}(z)=\mathcal{O}\left(\frac{\Gamma(2z)\Gamma(2z-1)}{\Gamma^{4}(z+1) \Gamma(z)}\frac{n!\,\Gamma(-z)}{\Gamma(n+1-z)}\right)=\mathcal{O}\left(\frac{ 2^{4z}}{z^{4}\Gamma^{2}(z+1)\sin\pi z}\frac{\Gamma(n+1)}{\Gamma(n+1-z)}\right),\] by the reflection and duplication formulas. 
For \(z=-\frac{1}{2}+\mathrm{i}t\), with \(t=\mathcal{O}\big{(}\sqrt{n}\big{)}\), we have \(|\Gamma^{2}(z+1)\sin\pi z|=\pi\) and \(\left|\frac{\Gamma(n+1)}{\Gamma(n+1-z)}\right|=\mathcal{O}\big{(}n^{-\frac{1}{2}}\big{)}\), yielding a contribution \(\mathcal{O}\big{(}n^{-\frac{1}{2}}\big{)}\) from the integral over the left segment of \(C^{\prime}\). The contributions from the two horizontal segments are \(\mathcal{O}\big{(}n^{-\frac{3}{2}}\big{)}\), while the right segment makes an exponentially small contribution. All this can be seen from the estimates
\[\frac{1}{\Gamma^{2}(z+1)\sin\pi z}=\mathcal{O}\left(\left(\frac{\sigma^{2}+t^{2}}{e^{2}}\right)^{-\sigma-\frac{1}{2}}\right),\]
holding for \(z=\sigma+\mathrm{i}t\) with \(\sigma\geq-\frac{1}{2}\) and \(|z-w|\geq\frac{1}{2}\) for \(w\in\mathbb{Z}\), and
\[\left|\frac{2^{4z}\Gamma(n+1)}{\Gamma(n+1-z)}\right|=\mathcal{O}\bigg{(}(16n)^{\sigma}\exp\Big{(}\frac{2t^{2}}{n-\sigma+t}\Big{)}\bigg{)},\]
holding for \(z=\sigma+\mathrm{i}t\) with \(-\frac{1}{2}\leq\sigma\leq n+\frac{1}{2}\) and \(t=\mathcal{O}\big{(}\sqrt{n}\big{)}\).

## Acknowledgements

We would like to thank two anonymous referees, whose suggestions led to substantial improvements of the paper.
2303.04713
On the space-time analyticity of the inhomogeneous heat equation on the half space with Neumann boundary conditions
We consider the inhomogeneous heat equation on the half-space $\mathbb R_{+}^{d}$ with Neumann boundary conditions. We prove a space-time Gevrey regularity of the solution, with a radius of analyticity uniform up to the boundary of the half-space. We also address the case of homogeneous Robin boundary conditions. Our results generalize the case of homogeneous Dirichlet boundary conditions established by Kukavica and Vicol in [10].
Elie Abdo, Weinan Wang
2023-03-08T16:52:49Z
http://arxiv.org/abs/2303.04713v1
# On the space-time analyticity of the inhomogeneous heat equation on the half space with Neumann boundary conditions

###### Abstract.

We consider the inhomogeneous heat equation on the half-space \(\mathbb{R}^{d}_{+}\) with Neumann boundary conditions. We prove a space-time Gevrey regularity of the solution, with a radius of analyticity uniform up to the boundary of the half-space. We also address the case of homogeneous Robin boundary conditions. Our results generalize the case of homogeneous Dirichlet boundary conditions established by Kukavica and Vicol in [10].

## 1. Introduction

We consider the heat equation
\[\partial_{t}q-\Delta q=f \tag{1}\]
on the upper-half space
\[\Omega=\mathbb{R}^{d}_{+}=\left\{x=(x_{1},...,x_{d})\in\mathbb{R}^{d}:x_{d}>0\right\},\quad d\geq 2 \tag{2}\]
with the initial condition
\[q(x,0)=q_{0}(x), \tag{3}\]
and the homogeneous Neumann boundary conditions
\[\nabla q|_{\partial\Omega}\cdot n=0. \tag{4}\]
Here \(n\) is the outward unit normal vector of \(\partial\Omega\). Since \(n=(0,...,0,-1)\), the condition (4) reduces to
\[\partial_{d}q=0 \tag{5}\]
on \(\partial\Omega\), where \(\partial_{d}\) stands for the normal derivative of \(q\). The forcing term \(f\) in (1) is a function of both space and time. Several approaches were developed over the years to study the analyticity of nonlinear parabolic equations: on domains with boundaries, based on successive estimates of the \(L^{2}\) norms of derivatives ([8, 9]); and without boundaries, based on Fourier series techniques ([2, 5, 6, 11] and references therein), a mild formulation of the complexified problem ([1], [7]), etc. Recently, Kukavica and Vicol established in [10] a derivative reduction proof, based on classical energy inequalities, to study the analyticity up to the boundary of the \(d\)-dimensional inhomogeneous heat equation on the half-space with homogeneous Dirichlet boundary conditions. In this paper, we seek a simple energy-type argument to prove the instantaneous space-time analyticity of solutions to the initial boundary value problem (1)-(4). The Neumann type of boundary conditions imposed on the solution \(q\) is a source of technical difficulty, as it breaks down the derivative reduction approach of [10]. More precisely, the argument in [10] uses the elliptic regularity estimate \(\|u\|_{H^{2}}\leq C\|g\|_{L^{2}}\) that holds for any \(u\) solving the Poisson equation \(\Delta u=g\) on the upper half-space with vanishing boundary conditions. In the case of Neumann boundary conditions, the \(H^{2}\) regularity of solutions does not have that simplified form but is rather described by the bound
\[\|u\|_{H^{2}(\Omega)}\leq C\left(\|g\|_{L^{2}(\Omega)}+\|\bar{\partial}u\|_{H^{1}(\Omega)}\right). \tag{6}\]
This latter dependency on the \(H^{1}\) regularity of the tangential derivatives is an obstruction to deriving derivative reduction estimates analogous to those of [10]. We present a proof that relies on the structure of the heat equation (1) and uses tangential interpolation inequalities rather than elliptic estimates.
For that purpose, we consider a regularity exponent \(r\geq 2\), a time \(T>0\), and strictly positive small quantities \(0<\tilde{\epsilon},\bar{\epsilon},\epsilon\leq 1\), and we define the Gevrey type norm
\[\psi(q)=\sum_{i+j+k\geq r}\sum_{|\alpha|=k}\frac{(i+j+k)^{r}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\bar{\epsilon}^{k}\|t^{i+j+k-r}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{\alpha}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{7}\]
\[\qquad\qquad+\sum_{i+j+k<r}\sum_{|\alpha|=k}\|\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{\alpha}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
where \(|\alpha|\) is the sum of the components of the \((d-1)\)-dimensional vector \(\alpha=(\alpha_{1},\ldots,\alpha_{d-1})\), and \(\bar{\partial}^{\alpha}\) stands for the tangential derivative \(\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\ldots\partial_{d-1}^{\alpha_{d-1}}\). All indices over which the sums are taken are assumed to be nonnegative integers. The second sum on the right-hand side of (7) is the \(H^{r-1}([0,T]\times\Omega)\) Sobolev norm of the solution \(q\) to (1)-(4), which itself is controlled by the sum of the \(H^{2(r-1)}(\Omega)\) Sobolev norm of the initial data \(q_{0}\) and the \(H^{2(r-2)}([0,T]\times\Omega)\) Sobolev norm of the forcing term \(f\), provided that \(q_{0}\) obeys the compatibility conditions (see e.g. [4]). We seek good control of the infinite sum in (7) via a modification of the derivative reduction approach of [10]. The following theorem states our main result:

**Theorem 1**.: _Let \(T>0\) and \(r\geq 2\). Then there exist \(\epsilon,\tilde{\epsilon},\bar{\epsilon}\in(0,1]\), which depend only on \(T\), \(r\), and \(d\), such that for any \(q_{0}\in H^{2r}(\Omega)\) satisfying the compatibility conditions, and \(f\) sufficiently smooth, the solution \(q\) of (1)-(4) satisfies the estimate_
\[\psi\lesssim\|q_{0}\|_{H^{2r}(\Omega)}+\|f\|_{H^{2r-2}([0,T]\times\Omega)}+\sum_{i+k\geq r-2}\sum_{|\alpha|=k}\frac{(i+k+2)^{r}\epsilon^{i}\tilde{\epsilon}^{k+2}}{(i+k+2)!}\|t^{i+k+2-r}\partial_{t}^{i}\bar{\partial}^{\alpha}f\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{8}\]
\[+\sum_{i\geq r}\frac{(i+1)^{r}\epsilon^{i+1}}{(i+1)!}\|t^{i+1-r}\partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\sum_{i\geq r-1}\frac{(i+1)^{r-1}\epsilon^{i+1}}{(i+1)!}\|t^{i+2-r}\partial_{t}^{i}\partial_{d}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[+\sum_{i+j+k\geq 1+(r-3)_{+}}\sum_{|\alpha|=k}\frac{(i+j+k+1)^{r-1}\epsilon^{i}\tilde{\epsilon}^{j+k+1}}{(i+j+k+1)!}\|t^{i+j+k+2-r}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{\alpha}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}.\]
_Here the notation \(A\lesssim B\) means that \(A\leq C_{r,d}B\) for some positive constant \(C_{r,d}\) depending only on \(r\), the dimension \(d\), and some universal constants._

The idea of the proof is based on a decomposition of the norm (7) into two main sums, one involving normal derivatives and one depending only on tangential and time derivatives. The terms with normal derivatives are controlled via the reduction technique of [10], in view of the fact that \(\partial_{d}q\) solves an inhomogeneous heat equation with homogeneous Dirichlet boundary conditions. As for the sum which does not depend on the normal derivatives of solutions, we decompose it into three sub-sums, \(S_{1}\), \(S_{2}\), and \(S_{3}\), where \(S_{1}\) includes all terms with at least two tangential derivatives, \(S_{2}\) depends on exactly one tangential derivative, and \(S_{3}\) is the sum of the remaining time derivative terms.
The estimation of \(S_{1}\) uses the structure of the diffusion driven by \(\Delta q\), which, by making use of the heat equation (1), allows us to reduce the number of tangential derivatives by increasing the number of normal derivatives. As for the sum \(S_{2}\), we interpolate in the tangential variable to have an additional tangential derivative and hence have good control of \(S_{2}\) by \(S_{1}\). Finally, we estimate \(S_{3}\) by reducing the number of time derivatives based on standard energy equalities. This latter reduction is mainly obtained via integration by parts, which holds even under the Neumann boundary conditions imposed on the solution. The analogous result obtained in [10] for homogeneous Dirichlet boundary conditions was applied in [3] to prove the Gevrey regularity of the Navier-Stokes equations on half-spaces. We believe that our result will also be useful to study the space-time analyticity of \(d\)-dimensional nonlinear parabolic equations with homogeneous Neumann boundary conditions (where \(f=f(t,x,q)\) depends on \(q\) in a nonlinear fashion). We prove Theorem 1 in Section 2, and we briefly address the cases of inhomogeneous Neumann and homogeneous Robin boundary conditions in Section 3. Throughout the paper, the letter \(C\) denotes a positive universal constant that may change from line to line along the proofs. For fixed nonnegative indices \(i,j,k\), we use the notation \[\|\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T] \times\Omega)}:=\sum_{|\alpha|=k}\|\partial_{t}^{i}\partial_{d}^{j}\bar{ \partial}^{\alpha}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{9}\] for the sake of simplicity. ## 2. Proof of Theorem 1 Recalling the Gevrey norm (7), we decompose \(\psi(q)=\psi_{1}(q)+\psi_{2}(q)\), where \(\psi_{1}(q)\) and \(\psi_{2}(q)\) are the following sums \[\begin{split}\psi_{1}(q)=&\sum_{i+j+k\geq r,j\neq 0} \frac{(i+j+k)^{r}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\bar{\epsilon}^{k} \|t^{i+j+k-r}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &+\sum_{i+j+k<r,j\neq 0}\|\partial_{t}^{i}\partial_{d}^{j}\bar{ \partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)},\end{split} \tag{10}\] and \[\psi_{2}(q)=\sum_{i+k\geq r}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i}\bar{\epsilon }^{k}\|t^{i+k-r}\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times \Omega)}+\sum_{i+k<r}\|\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}. \tag{11}\] **Estimation of \(\psi_{1}\).** The Gevrey norm (10) is controlled via use of normal, tangential, and time derivative reductions. 
Indeed, \(\psi_{1}\) can be rewritten as \[\begin{split}\psi_{1}(q)=&\sum_{i+\tilde{j}+k\geq r -1}\frac{(i+\tilde{j}+k+1)^{r}}{(i+\tilde{j}+k+1)!}\epsilon^{i}\tilde{\epsilon }^{\tilde{j}+1}\bar{\epsilon}^{k}\|t^{i+\tilde{j}+k-(r-1)}\partial_{t}^{i} \partial_{d}^{\tilde{j}}\bar{\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T] \times\Omega)}\\ &+\sum_{i+\tilde{j}+k<r-1}\|\partial_{t}^{i}\partial_{d}^{j}\bar {\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ \leq 2^{r-1}\tilde{\epsilon}&\sum_{i+\tilde{j}+k\geq r -1}\frac{(i+\tilde{j}+k)^{r-1}}{(i+\tilde{j}+k)!}\epsilon^{i}\tilde{ \epsilon}^{\tilde{j}}\bar{\epsilon}^{k}\|t^{i+\tilde{j}+k-(r-1)}\partial_{t}^ {i}\partial_{d}^{\tilde{j}}\bar{\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &+\sum_{i+\tilde{j}+k<r-1}\|\partial_{t}^{i}\partial_{d}^{j}\bar {\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ \leq 2^{r-1}\phi(q),\end{split} \tag{12}\] where \[\begin{split}\phi(q)=&\sum_{i+\tilde{j}+k\geq r-1} \frac{(i+\tilde{j}+k)^{r-1}}{(i+\tilde{j}+k)!}\epsilon^{i}\tilde{\epsilon}^{ \tilde{j}}\bar{\epsilon}^{k}\|t^{i+\tilde{j}+k-(r-1)}\partial_{t}^{i}\partial _{d}^{\tilde{j}}\bar{\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T]\times \Omega)}\\ &+\sum_{i+\tilde{j}+k<r-1}\|\partial_{t}^{i}\partial_{d}^{\tilde{j }}\bar{\partial}^{k}(\partial_{d}q)\|_{L^{2}_{t,x}([0,T]\times\Omega)}.\end{split} \tag{13}\] Here the first equality is obtained via the change of variable \(\tilde{j}=j-1\) and the first inequality follows from an application of the algebraic inequality \[\frac{(i+\tilde{j}+k+1)^{r}}{(i+\tilde{j}+k+1)!}=\frac{(i+\tilde{j}+k+1)^{r-1}} {(i+\tilde{j}+k)!}\leq\frac{(2(i+\tilde{j}+k))^{r-1}}{(i+\tilde{j}+k)!}=\frac {2^{r-1}(i+\tilde{j}+k)^{r-1}}{(i+\tilde{j}+k)!} \tag{14}\] that holds for any nonnegative integers \(i,\tilde{j},k\) whose sum is greater than or equal to 1. Also, the last inequality in (12) uses the boundedness of \(\tilde{\epsilon}\) from above by \(1\) and \(r\) from below by 2. 
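The inequality (14) reduces, after cross-multiplying, to \((i+\tilde{j}+k+1)^{r-1}\leq(2(i+\tilde{j}+k))^{r-1}\); a one-off exact check in Python (our addition, not part of the paper):

```python
from math import factorial

# (s+1)^r / (s+1)! <= 2^(r-1) s^(r-1) / s!  for s = i + j~ + k >= 1,
# verified here as the equivalent integer inequality
for s in range(1, 50):
    for r in range(2, 12):
        assert (s + 1) ** r * factorial(s) <= 2 ** (r - 1) * s ** (r - 1) * factorial(s + 1)
```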
Since \(\partial_{d}q\) solves the heat equation
\[\partial_{t}(\partial_{d}q)-\Delta(\partial_{d}q)=\partial_{d}f \tag{15}\]
with homogeneous Dirichlet boundary conditions, we can apply the derivative reduction technique of [10] and infer that there exists a positive universal constant \(C\) depending on the dimension \(d\) and \(r\), such that for any \(C_{0}\in(0,C)\), if \(\epsilon\) obeys
\[\epsilon\leq C_{0}, \tag{16}\]
\(\bar{\epsilon}=\bar{\epsilon}(\epsilon,T,C_{0})\) obeys
\[\frac{T\bar{\epsilon}^{2}}{\epsilon}+\frac{T^{\frac{1}{2}}\bar{\epsilon}}{\sqrt{\epsilon}}+T\bar{\epsilon}\leq C_{0}, \tag{17}\]
\(\tilde{\epsilon}=\tilde{\epsilon}(\bar{\epsilon},\epsilon,T,C_{0})\) obeys
\[\frac{T\tilde{\epsilon}^{2}}{\epsilon}+\frac{\tilde{\epsilon}}{\epsilon}+\frac{\tilde{\epsilon}^{2}}{\epsilon^{2}}+T^{2}\tilde{\epsilon}^{2}+\frac{T\bar{\epsilon}\tilde{\epsilon}}{\epsilon}+\frac{T^{\frac{1}{2}}\tilde{\epsilon}}{\epsilon^{\frac{1}{2}}}+T\bar{\epsilon}\leq C_{0}, \tag{18}\]
and
\[0<\tilde{\epsilon}\leq\bar{\epsilon}\leq\epsilon\leq 1, \tag{19}\]
then
\[\phi(q)\lesssim\phi_{0}+\phi_{1}(f) \tag{20}\]
where \(\phi_{0}\) and \(\phi_{1}(f)\) are given by
\[\phi_{0}=\|q_{0}\|_{H^{2r-1}(\Omega)}+\|f\|_{H^{2r-3}([0,T]\times\Omega)} \tag{21}\]
and
\[\phi_{1}(f)=\sum_{i+j+k\geq 1+(r-3)_{+}}\frac{(i+j+k+1)^{r-1}\epsilon^{i}\tilde{\epsilon}^{j+1}\bar{\epsilon}^{k}}{(i+j+k+1)!}\|t^{i+j+k+2-r}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[+\sum_{i+k\geq(r-3)_{+}}\frac{(i+k+2)^{r-1}\epsilon^{i}\bar{\epsilon}^{k+2}}{(i+k+2)!}\|t^{i+k+3-r}\partial_{t}^{i}\partial_{d}\bar{\partial}^{k}f\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{22}\]
\[+\sum_{i\geq r-1}\frac{(i+1)^{r-1}\epsilon^{i+1}}{(i+1)!}\|t^{i+2-r}\partial_{t}^{i}\partial_{d}f\|_{L^{2}_{t,x}([0,T]\times\Omega)},\]
provided that the quantities (21) and (22) are finite. We refer the reader to [10, (4.9)-(4.12)] for the choices of \(\epsilon,\tilde{\epsilon},\bar{\epsilon}\) given by (16)-(19). Further assumptions on \(\epsilon,\tilde{\epsilon},\bar{\epsilon}\) will be imposed later.

**Estimation of \(\psi_{2}\).** Here \(\psi_{2}\) is the trickier term. We split \(\psi_{2}\) into \(\psi_{2}=\psi_{2,1}+\psi_{2,2}\) where
\[\psi_{2,1}=\sum_{i+k<r}\|\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{23}\]
and
\[\psi_{2,2}=\sum_{i+k\geq r}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}. \tag{24}\]
The norm \(\psi_{2,1}\) is bounded by the sum of the \(H^{2(r-1)}(\Omega)\) norm of \(q_{0}\) and the \(H^{2(r-2)}\big{(}[0,T]\times\Omega\big{)}\) norm of \(f\). In order to control \(\psi_{2,2}\), we perform tangential and time derivative reductions. However, we do not appeal to elliptic estimates but use the PDE obeyed by \(q\) instead. We decompose \(\psi_{2,2}\) into the sum \(\psi_{2,2}=\psi_{2,2,1}+\psi_{2,2,2}+\psi_{2,2,3}\), where
\[\psi_{2,2,1}=\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}, \tag{25}\]
\[\psi_{2,2,2}=\sum_{i\geq r-1}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i}\bar{\epsilon}\|t^{i+1-r}\partial_{t}^{i}\bar{\partial}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}, \tag{26}\]
and
\[\psi_{2,2,3}=\sum_{i\geq r}\frac{i^{r}}{i!}\epsilon^{i}\|t^{i-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}. \tag{27}\]
We start by estimating \(\psi_{2,2,1}\).
The main idea is to reduce the number of horizontal derivatives by increasing the number of vertical derivatives, which, eventually, allows us to control \(\psi_{2,2,1}\) by the sum \(\psi_{1}\). A loss of two tangential derivatives is equivalent to a gain of two normal derivatives, a fact that is based on the heat equation (1), from which we obtain the relation \[\Delta_{d-1}q=-\partial_{d}\partial_{d}q+\partial_{t}q-f. \tag{28}\] Here \(\Delta_{d-1}\) stands for the \((d-1)\)-dimensional Laplace operator, \[\Delta_{d-1}q:=\partial_{1}\partial_{1}q+\cdots+\partial_{d-1}\partial_{d-1}q. \tag{29}\] The norm \(\|\cdot\|_{L^{2}_{t,x}([0,T]\times\Omega)}\) is equivalent to \[\|\cdot\|_{L^{2}_{t,x}([0,T]\times\Omega)}=\left\|\|\cdot\|_{L^{2}_{x_{1}, \ldots,x_{d-1}}(\mathbb{R}^{d-1})}\right\|_{L^{2}_{t,x_{d}}([0,T]\times(0,\infty ))}, \tag{30}\] and so the sum \(\psi_{2,2,1}\) can be written as \[\psi_{2,2,1}=\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i} \tilde{\epsilon}^{k}\left\|t^{i+k-r}\|\partial_{t}^{i}\bar{\partial}^{k}q\|_ {L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d-1})}\right\|_{L^{2}_{t,x_{d}}([0,T ]\times(0,\infty))}. \tag{31}\] Denoting the inverse of the square root of the Laplace operator \(-\Delta_{d-1}\) on the whole space \(\mathbb{R}^{d-1}\) by \(\Lambda^{-1}_{d-1}\), and exploiting the boundedness of the Riesz transform operator \(\nabla_{d-1}\Lambda^{-1}_{d-1}\) on \(L^{2}(\mathbb{R}^{d-1})\), we have the following \((d-1)\)-dimensional elliptic regularity estimate \[\|\partial_{x_{s}}\partial_{x_{r}}\rho\|_{L^{2}(\mathbb{R}^{d-1})}=\|\partial_ {x_{s}}\Lambda^{-1}_{d-1}\partial_{x_{r}}\Lambda^{-1}_{d-1}\Delta_{d-1}\rho \|_{L^{2}(\mathbb{R}^{d-1})}\leq C\|\Delta_{d-1}\rho\|_{L^{2}(\mathbb{R}^{d-1 })} \tag{32}\] for any \(\rho\in H^{2}(\mathbb{R}^{d-1})\), and any \(s,r\in\{1,\ldots,d-1\}\). We point out that the first equality in (32) follows from the fact the operators \(\nabla_{d-1}\) and \(\Lambda^{-1}_{d-1}\) are Fourier multipliers in the whole space setting, so they commute. Accordingly, for any nonnegative integers \(i\geq 0\) and \(k\geq 2\), we have \[\|\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{ R}^{d-1})}\leq\|\partial_{t}^{i}\bar{\partial}^{k-2}q\|_{\dot{H}^{2}( \mathbb{R}^{d-1})}\leq C\|\partial_{t}^{i}\bar{\partial}^{k-2}\Delta_{d-1}q\| _{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d-1})} \tag{33}\] which, followed by an application of the relation (28), yields the estimate \[\psi_{2,2,1}\leq \sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i} \tilde{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i}\partial_{d}^{2}\bar{\partial}^ {k-2}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{34}\] \[+\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i} \tilde{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i+1}\bar{\partial}^{k-2}q\|_{L^{ 2}_{t,x}([0,T]\times\Omega)}\] \[+\sum_{i+k\geq r-2}\frac{(i+k+2)^{r}}{(i+k+2)!}\epsilon^{i} \tilde{\epsilon}^{k+2}\|t^{i+k+2-r}\partial_{t}^{i}\bar{\partial}^{k}f\|_{L^{2 }_{t,x}([0,T]\times\Omega)}.\] The first sum in (34) is controlled by a constant multiple of the Gevrey norm \(\psi_{1}(q)\) given by (10) due to the presence of second-order normal derivatives. 
In other words, we have \[\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!}\epsilon^{i} \tilde{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i}\partial_{d}^{2}\bar{\partial}^ {k-2}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{35}\] \[= \frac{\bar{\epsilon}^{2}}{\bar{\epsilon}^{2}}\sum_{i+k+2r}\frac {(i+2+k)^{r}}{(i+2+k)!}\epsilon^{i}\tilde{\epsilon}^{2}\bar{\epsilon}^{k}\|t^{ i+2+k-r}\partial_{t}^{i}\partial_{d}^{2}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T] \times\Omega)}\] \[\leq \frac{\bar{\epsilon}^{2}}{\bar{\epsilon}^{2}}\psi_{1}(q)\leq\frac {\bar{\epsilon}^{2}}{\bar{\epsilon}^{2}}2^{r-1}\left(\phi_{0}+\phi_{1}(f) \right)=2^{r-1}\left(\phi_{0}+\phi_{1}(f)\right),\] provided that \(\bar{\epsilon}=\tilde{\epsilon}\). We recall that \(\phi_{0}\) and \(\phi_{1}(f)\) are given by (21) and (22) respectively. The second sum in (34) is bounded by a small constant multiple of (24) up to an additive constant depending only on \(q_{0}\) and \(f\). Indeed, \[\begin{split}&\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!} \epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i+1}\bar{\partial}^{k-2 }q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &=\frac{\bar{\epsilon}^{2}}{\epsilon}\sum_{i+k\geq r-1}\frac{(i+k +1)^{r}}{(i+k+1)!}\epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r+1}\partial_{t}^{i} \bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq\frac{T\bar{\epsilon}^{2}}{\epsilon}\sum_{i+k\geq r}\frac{(i +k+1)^{r-1}}{(i+k)!}\epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i} \bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\frac{\bar{\epsilon}^{2 }}{\epsilon}\frac{r^{r}}{r!}\sum_{i+k=r-1}\epsilon^{i}\bar{\epsilon}^{k}\| \partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq\frac{2^{r-1}T\bar{\epsilon}^{2}}{\epsilon}\sum_{i+k\geq r} \frac{(i+k)^{r}}{(i+k)!}\epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^ {i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\frac{r^{r}}{r!}\sum _{i+k=r-1}\|\partial_{t}^{i}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times \Omega)}\end{split} \tag{36}\] by (19). In view of the condition (17), we infer that \[\begin{split}&\sum_{i+k\geq r,k\geq 2}\frac{(i+k)^{r}}{(i+k)!} \epsilon^{i}\bar{\epsilon}^{k}\|t^{i+k-r}\partial_{t}^{i+1}\bar{\partial}^{k-2 }q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq 2^{r-1}C_{0}\psi_{2,2}+C\|q_{0}\|_{H^{2(r-1)}(\Omega)}+C\|f\|_{H^ {2(r-2)}([0,T]\times\Omega)}\\ &\leq\delta\psi_{2,2}+C\|q_{0}\|_{H^{2(r-1)}(\Omega)}+C\|f\|_{H^ {2(r-2)}([0,T]\times\Omega)}\end{split} \tag{37}\] provided that \(C_{0}\) is chosen to be smaller than \(\frac{\delta}{2^{r-1}}\). Here \(\delta\) is a positive constant that will be determined later. Now we proceed to estimate \(\psi_{2,2,2}\). The main idea is to increase the number of tangential derivative by one via interpolation and show that \(\psi_{2,2,2}\) is dominated by the sum of \(\psi_{2,2,1}\) and \(\psi_{2,2,3}\), reducing consequently the problem to a time derivative reduction. We need the following elementary lemma: **Lemma 1**.: _Let \(i\geq 1\) and \(r\geq 1\) be some integers. Then the following estimate_ \[\frac{(i+1)^{r}}{(i+1)!}\leq\sqrt{2^{r+1}}\sqrt{\frac{(i+2)^{r}}{(i+2)!}}\sqrt {\frac{i^{r}}{i!}} \tag{38}\] _holds._ **Proof of Lemma 1.** We have \[\begin{split}&\sqrt{\frac{i!(i+2)!}{[(i+1)!]^{2}}}\sqrt{\frac{(i+1 )^{2r}}{i^{r}(i+2)^{r}}}=\sqrt{\frac{i+2}{i+1}}\sqrt{\left(\frac{i+1}{i} \right)^{r}\left(\frac{i+1}{i+2}\right)^{r}}\\ &=\sqrt{1+\frac{1}{i+1}}\sqrt{\left(1+\frac{1}{i}\right)^{r} \left(\frac{i+1}{i+2}\right)^{r}}\leq\sqrt{2}\sqrt{2^{r}}=\sqrt{2^{r+1}}. 
\end{split} \tag{39}\] In view of Lemma 1 and the \((d-1)\)-dimensional interpolation inequality \[\|\bar{\partial}\rho\|_{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d-1})}\leq\| \bar{\partial}\bar{\partial}\rho\|_{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d -1})}^{\frac{1}{2}}\|\rho\|_{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d-1})}^{ \frac{1}{2}}+\|\rho\|_{L^{2}_{x_{1},\ldots,x_{d-1}}(\mathbb{R}^{d-1})} \tag{40}\] that holds for any \(\rho\in H^{2}(\mathbb{R}^{d-1})\), we have \[\begin{split}\psi_{2,2,2}&=\sum_{i\geq r-1}\frac{(i+1)^ {r}}{(i+1)!}\epsilon^{i}\bar{\epsilon}\|t^{i+1-r}\partial_{t}^{i}\bar{\partial} q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &=\frac{r^{r}}{r!}\epsilon^{r-1}\bar{\epsilon}\|\partial_{t}^{r-1 }\bar{\partial}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\sum_{i\geq r}\frac{(i+1)^ {r}}{(i+1)!}\epsilon^{i}\bar{\epsilon}\|t^{i+1-r}\partial_{t}^{i}\bar{\partial }q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq C\left(\|q_{0}\|_{H^{2r-1}(\Omega)}+\|f\|_{H^{2r-3}([0,T] \times\Omega)}\right)+\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i}\bar{ \epsilon}\|t^{i+1-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\quad+C\sqrt{2^{r+1}}\sum_{i\geq r}\sqrt{\frac{(i+2)^{r}}{(i+2)!}}\sqrt{\frac{i^{r}}{i!}}\epsilon^{i}\bar{\epsilon}\left\|\|\bar{\partial} \bar{\partial}(t^{i+1-r}\partial_{t}^{i}q)\right\|_{L^{2}(\mathbb{R}^{d-1})} ^{\frac{1}{2}}\|t^{i+1-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}(\mathbb{R}^{d-1})} ^{\frac{1}{2}}\bigg{\|}_{L^{2}_{t,x_{d}}}\\ &=C\left(\|q_{0}\|_{H^{2r-1}(\Omega)}+\|f\|_{H^{2r-3}([0,T]\times \Omega)}\right)+\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i}\bar{ \epsilon}\|t^{i+1-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\quad+C\sqrt{2^{r+1}}\sum_{i\geq r}\left\|\left\|\frac{(i+2)^{r} }{(i+2)!}\epsilon^{i}\bar{\epsilon}^{2}\bar{\partial}\bar{\partial}(t^{i+2-r} \partial_{t}^{i}q)\right\|_{L^{2}(\mathbb{R}^{d-1})}^{\frac{1}{2}}\bigg{\|} \frac{i^{r}}{i!}\epsilon^{i}t^{i-r}\partial_{t}^{i}q\bigg{\|}_{L^{2}(\mathbb{R }^{d-1})}^{\frac{1}{2}}\bigg{\|}_{L^{2}_{t,x_{d}}},\end{split} \tag{41}\] which, after using Young's inequality, boils down to \[\begin{split}&\psi_{2,2,2}\leq C\left(\|q_{0}\|_{H^{2r-1}(\Omega) }+\|f\|_{H^{2r-3}([0,T]\times\Omega)}\right)+\sum_{i\geq r}\frac{(i+1)^{r}}{(i +1)!}\epsilon^{i}\bar{\epsilon}\|t^{i+1-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T ]\times\Omega)}\\ &+C\sqrt{2^{r-1}}\sum_{i\geq r}\frac{(i+2)^{r}}{(i+2)!}\epsilon^{ i}\bar{\epsilon}^{2}\|t^{i+2-r}\partial_{t}^{i}\bar{\partial}\bar{\partial}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+C\sqrt{2^{r-1}}\sum_{i\geq r}\frac{i}{i!}\epsilon^{i}\| t^{i-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq C\|q_{0}\|_{H^{2r-1}(\Omega)}+C\|f\|_{H^{2r-3}([0,T]\times \Omega)}+C2^{r-1}\bar{\epsilon}T\psi_{2,2,3}+C\sqrt{2^{r-1}}\psi_{2,2,1}+C \sqrt{2^{r-1}}\psi_{2,2,3}.\end{split} \tag{42}\] Since \(T\bar{\epsilon}\leq C_{0}\), we obtain \[\psi_{2,2,2}\leq C\|q_{0}\|_{H^{2r-1}(\Omega)}+C\|f\|_{H^{2r-3}([0,T]\times \Omega)}+C_{1}\sqrt{2^{r-1}}\psi_{2,2,1}+C\psi_{2,2,3}, \tag{43}\] where \(C\) is a positive constant depending only on \(r\) and \(d\), and \(C_{1}\) is a positive universal constant. We end the proof by estimating \(\psi_{2,2,3}\). We take the scalar product in \(L^{2}\) of the heat equation (1) with \(\partial_{t}q\). We integrate by parts the diffusion term \(\left(-\Delta q,\partial_{t}q\right)_{L^{2}}\) using the homogeneous Neumann boundary conditions. 
We apply the Cauchy-Schwarz inequality to bound the forcing term \((f,\partial_{t}q)_{L^{2}}\) and then make use of Young's inequality to obtain the energy inequality
\[\|\partial_{t}q\|_{L^{2}}^{2}+\frac{d}{dt}\|\nabla q\|_{L^{2}}^{2}\leq\|f\|_{L^{2}}^{2}, \tag{44}\]
which yields
\[\int_{0}^{T}\|\partial_{t}q\|_{L^{2}}^{2}dt+\|\nabla q(T)\|_{L^{2}}^{2}\leq\|\nabla q(0)\|_{L^{2}}^{2}+\int_{0}^{T}\|f(t)\|_{L^{2}}^{2}dt \tag{45}\]
after integrating in time from \(0\) to \(T\). This latter estimate holds for any inhomogeneous heat equation with homogeneous Neumann boundary conditions. Consequently, it applies to the equation
\[\partial_{t}(t^{i-r}\partial_{t}^{i-1}q)-\Delta(t^{i-r}\partial_{t}^{i-1}q)=(i-r)t^{i-1-r}\partial_{t}^{i-1}q+t^{i-r}\partial_{t}^{i-1}f \tag{46}\]
for any \(i\geq r+1\). Since \(t^{i-r}\partial_{t}^{i-1}q\) vanishes at the initial time \(t=0\), we obtain
\[\|\partial_{t}(t^{i-r}\partial_{t}^{i-1}q)\|_{L^{2}_{t,x}([0,T]\times\Omega)}\leq\|(i-r)t^{i-1-r}\partial_{t}^{i-1}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\|t^{i-r}\partial_{t}^{i-1}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}, \tag{47}\]
which boils down to
\[\|t^{i-r}\partial_{t}^{i}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\lesssim\|(i-r)t^{i-1-r}\partial_{t}^{i-1}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\|t^{i-r}\partial_{t}^{i-1}f\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{48}\]
for any \(i\geq r+1\). By making use of (48), we estimate \(\psi_{2,2,3}\) as follows,
\[\begin{split}\psi_{2,2,3}&\lesssim\frac{r^{r}}{r!}\epsilon^{r}\|\partial_{t}^{r}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}+\sum_{i\geq r+1}\frac{i^{r}}{i!}\epsilon^{i}(i-r)\|t^{i-1-r}\partial_{t}^{i-1}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\qquad+\sum_{i\geq r+1}\frac{i^{r}}{i!}\epsilon^{i}\|t^{i-r}\partial_{t}^{i-1}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\leq C(\|q_{0}\|_{H^{2r}(\Omega)}+\|f\|_{H^{2r-2}([0,T]\times\Omega)})+C_{2}2^{r}\epsilon\psi_{2,2,3}\\ &\qquad+C\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i+1}\|t^{i+1-r}\partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times\Omega)},\end{split} \tag{49}\]
from which we obtain
\[\psi_{2,2,3}\leq C\left(\|q_{0}\|_{H^{2r}(\Omega)}+\|f\|_{H^{2r-2}([0,T]\times\Omega)}+\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i+1}\|t^{i+1-r}\partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\right) \tag{50}\]
provided that \(\epsilon\leq C_{0}\leq\frac{1}{C_{2}2^{r+1}}\). Putting (34)-(37), (43) and (50) together, we conclude that
\[\begin{split}\psi_{2,2}&\leq C\left(\|q_{0}\|_{H^{2r}(\Omega)}+\|f\|_{H^{2r-2}([0,T]\times\Omega)}\right)+\delta\psi_{2,2}+C_{1}\sqrt{2^{r-1}}\delta\psi_{2,2}\\ &\qquad+C\phi_{1}(f)+C\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i+1}\|t^{i+1-r}\partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\qquad+C\sum_{i+k\geq r-2}\frac{(i+k+2)^{r}}{(i+k+2)!}\epsilon^{i}\bar{\epsilon}^{k+2}\|t^{i+k+2-r}\partial_{t}^{i}\bar{\partial}^{k}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}.\end{split} \tag{51}\]
for any \(\delta>0\).
Choosing \(\delta\) so that \(\delta(1+C_{1}\sqrt{2^{r-1}})\leq\frac{1}{2}\), we obtain the following bound for \(\psi_{2,2}\), \[\begin{split}\psi_{2,2}&\leq C\left(\|q_{0}\|_{H^{2 r}(\Omega)}+\|f\|_{H^{2r-2}([0,T]\times\Omega)}\right)+C\sum_{i\geq r}\frac{(i+1)^{r}}{( i+1)!}\epsilon^{i+1}\|t^{i+1-r}\partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times \Omega)}\\ &\qquad+C\sum_{i+k\geq r-2}\frac{(i+k+2)^{r}}{(i+k+2)!}\epsilon^ {i}\bar{\epsilon}^{k+2}\|t^{i+k+2-r}\partial_{t}^{i}\bar{\partial}^{k}f\|_{L^ {2}_{t,x}([0,T]\times\Omega)}+C\phi_{1}(f).\end{split} \tag{52}\] Therefore, we infer that \[\begin{split}\psi&\lesssim\|q_{0}\|_{H^{2r}(\Omega)}+\| f\|_{H^{2r-2}([0,T]\times\Omega)}+\sum_{i+k\geq r-2}\frac{(i+k+2)^{r}}{(i+k+2)!} \epsilon^{i}\bar{\epsilon}^{k+2}\|t^{i+k+2-r}\partial_{t}^{i}\bar{\partial}^{k }f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &+\sum_{i\geq r}\frac{(i+1)^{r}}{(i+1)!}\epsilon^{i+1}\|t^{i+1-r} \partial_{t}^{i}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\qquad+\sum_{i+j+k\geq 1+(r-3)_{+}}\frac{(i+j+k+1)^{r-1}\epsilon^{i} \bar{\epsilon}^{j+1}\bar{\epsilon}^{k}}{(i+j+k+1)!}\|t^{i+j+k+2-r}\partial_{t} ^{i}\partial_{d}^{j}\bar{\partial}^{k}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\qquad+\sum_{i+k\geq(r-3)_{+}}\frac{(i+k+2)^{r-1}\epsilon^{i} \bar{\epsilon}^{k+2}}{(i+k+2)!}\|t^{i+k+3-r}\partial_{t}^{i}\partial_{d}\bar{ \partial}^{k}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\\ &\qquad+\sum_{i\geq r-1}\frac{(i+1)^{r-1}\epsilon^{i+1}}{(i+1)!}\|t ^{i+2-r}\partial_{t}^{i}\partial_{d}f\|_{L^{2}_{t,x}([0,T]\times\Omega)}\end{split} \tag{53}\] for any \(\epsilon,\tilde{\epsilon},\bar{\epsilon}\in(0,1]\) obeying conditions (16)-(19), together with \(\tilde{\epsilon}=\bar{\epsilon}\). This finishes the proof of Theorem 1. ## 3. Remarks on the inhomogeneous Neumann and homogeneous Robin boundary conditions Theorem 1 can be generalized to the case of general Neumann boundary conditions: **Remark 1**.: _Let \(g\) be a time-independent sufficiently smooth function defined on \(\mathbb{R}^{d-1}\). Solutions to the inhomogeneous heat equation on the half space \(\Omega=\mathbb{R}^{d}_{+}\) with general Neumann boundary conditions_ \[\partial_{d}q=g \tag{54}\] _are analytic in space and time, a fact that follows by adapting the approach of Theorem 1 to the boundary value problem formed by (1) and (54). Indeed, the norm (7) can be decomposed into two sub-sums \(S_{1}\) and \(S_{2}\), where \(S_{1}\) encompasses all normal derivatives and \(S_{2}\) depends only on tangential and time derivatives. Since the function \(v:=\partial_{d}q-g\) solves the heat equation_ \[\partial_{t}(\partial_{d}q-g)-\Delta(\partial_{d}q-g)=\partial_{d}f+\bar{ \partial}\cdot\bar{\partial}g \tag{55}\] _and vanishes on the boundary of \(\Omega\), then we obtain good control of \(S_{1}\) by the Sobolev norm of the initial datum and the Gevrey norms of both \(f\) and \(g\). Here we abused notation and wrote \(g\) for the extension \(\tilde{g}(x_{1},\ldots,x_{d})=g(x_{1},\ldots,x_{d-1})\). As for the sum \(S_{2}\), we perform the same decomposition strategy as for \(\psi_{2}\) in the proof of Theorem 1, implementing henceforth our idea of decreasing the number of tangential derivatives by increasing the number of normal derivatives. The one and only main difference resides in the derivation of the energy inequality (44), which relies on integration by parts and use of the homogeneous type of Neumann boundary conditions. 
In the case of (54), the following analogous ordinary differential equation holds_ \[\|\partial_{t}q\|_{L^{2}}^{2}-\int_{\Omega}\Delta q\partial_{t}qdx=\int_{ \Omega}f\partial_{t}qdx \tag{56}\] _which, due to Holder and Young inequalities, reduces to_ \[\|\partial_{t}q\|_{L^{2}}^{2}+\frac{d}{dt}\|\nabla q-G\|_{L^{2}}^{2}\leq\|f\|_ {L^{2}}^{2}, \tag{57}\] _with \(G=(0,\ldots,0,-\tilde{g})\). Here we used_ \[\begin{split}&\frac{1}{2}\frac{d}{dt}\|\nabla q-G\|_{L^{2}}^{2}= \int_{\Omega}(\nabla q-G)\cdot\partial_{t}(\nabla q-G)dx\\ &=\int_{\Omega}(\nabla q-G)\cdot\partial_{t}\nabla qdx=-\int_{ \Omega}\nabla\cdot(\nabla q-G)\cdot\partial_{t}qdx=-\int_{\Omega}\Delta q \partial_{t}qdx\end{split} \tag{58}\] _that holds in view of the divergence-free condition obeyed by \(G\), and the vanishing property \((\nabla q-G)|_{\partial\Omega}\cdot n=0\) for \(n=(0,\ldots,0,-1)\). However, the energy equality (56) is not needed to perform time derivative reduction, as we seek bounds for the solution of the heat equation (46) obeyed by \(t^{i-r}\partial_{t}^{i-1}q\), which has a vanishing normal derivative on the boundary of the half-space for any \(i\geq r+1\). The details follow along the lines of the proof of Theorem 1 and will be omitted._ Our approach also applies to obtain the space-time analyticity of the heat equation with homogeneous Robin boundary conditions: **Remark 2**.: _The Gevrey regularity of solutions to the inhomogeneous heat equation (1) on the half space \(\Omega=\mathbb{R}^{d}_{+}\) with homogeneous Robin boundary conditions_ \[\big{(}aq+b\partial_{d}q\big{)}|_{\partial\Omega}=0 \tag{59}\] _reduces to a question of Sobolev global regularity, under some conditions imposed on \(a\) and \(b\). Indeed, if \(a=0\) or \(b=0\), then the problem boils down to the case of homogeneous Neumann or homogeneous Dirichlet boundary conditions. If both \(a\) and \(b\) are nonvanishing and have the same sign, then we repeat the same strategy of Theorem 1 and decompose the norm (7) into two sub-sums, \(S_{1}\) involving the vertical derivative components and \(S_{2}\) involving only horizontal and time derivatives. 
As \(v=\frac{a}{b}q+\partial_{d}q\) solves the heat equation_
\[\partial_{t}\left(\frac{a}{b}q+\partial_{d}q\right)-\Delta\left(\frac{a}{b}q+\partial_{d}q\right)=\frac{a}{b}f+\partial_{d}f \tag{60}\]
_with homogeneous Dirichlet boundary conditions, we can bound \(S_{1}\) by_
\[S_{1}\leq 2^{r-1}\tilde{\epsilon}\sum_{i+j+k\geq r-1}\frac{\big{(}i+j+k\big{)}^{r-1}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|t^{i+j+k-(r-1)}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}\big{(}\partial_{d}q\big{)}\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{61}\]
\[\qquad+\sum_{i+j+k<r-1}\|\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}\partial_{d}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[\leq 2^{r-1}\tilde{\epsilon}\sum_{i+j+k\geq r-1}\frac{(i+j+k)^{r-1}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|t^{i+j+k-(r-1)}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}\left(\partial_{d}q+\frac{a}{b}q\right)\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[\qquad+2^{r-1}\tilde{\epsilon}\left|\frac{a}{b}\right|\sum_{i+j+k\geq r-1}\frac{(i+j+k)^{r-1}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|t^{i+j+k-(r-1)}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[\qquad+\sum_{i+j+k<r-1}\|\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}\partial_{d}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
_as shown in (12), and obtain control of the first sum in the last inequality by applying the result of [10]. Regarding the second sum in (61), it can be controlled as follows,_
\[2^{r-1}\tilde{\epsilon}\left|\frac{a}{b}\right|\sum_{i+j+k\geq r-1}\frac{\big{(}i+j+k\big{)}^{r-1}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|t^{i+j+k-(r-1)}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)} \tag{62}\]
\[\leq 2^{r-1}\tilde{\epsilon}\left|\frac{a}{b}\right|\frac{(r-1)^{r-1}}{(r-1)!}\sum_{i+j+k=r-1}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
\[\qquad+2^{r-1}T\tilde{\epsilon}\left|\frac{a}{b}\right|\sum_{i+j+k\geq r}\frac{(i+j+k)^{r}}{(i+j+k)!}\epsilon^{i}\tilde{\epsilon}^{j}\tilde{\epsilon}^{k}\|t^{i+j+k-r}\partial_{t}^{i}\partial_{d}^{j}\bar{\partial}^{k}q\|_{L^{2}_{t,x}([0,T]\times\Omega)}\]
_where the first sum is bounded by the \(H^{r-1}([0,T]\times\Omega)\) Sobolev norm of the solution, and the second sum is bounded by a small constant multiple of the Gevrey norm (7), provided that \(2^{r-1}\tilde{\epsilon}\left|\frac{a}{b}\right|\) is sufficiently small. The sum \(S_{2}\) is treated as \(\psi_{2}\) in the proof of Theorem 1, but an energy inequality analogous to (44) is needed. In fact, we have_
\[\|\partial_{t}q\|_{L^{2}}^{2}+\int_{\Omega}\nabla q\cdot\partial_{t}\nabla qdx-\int_{\partial\Omega}\partial_{t}q\,\partial_{d}q\,d\sigma(x)=\int_{\Omega}f\partial_{t}qdx \tag{63}\]
_which, after use of the Robin boundary conditions, reduces to_
\[\|\partial_{t}q\|_{L^{2}}^{2}+\int_{\Omega}\nabla q\cdot\partial_{t}\nabla qdx+\frac{a}{b}\int_{\partial\Omega}q\partial_{t}qd\sigma(x)=\int_{\Omega}f\partial_{t}qdx, \tag{64}\]
_and so_
\[\|\partial_{t}q\|_{L^{2}}^{2}+\frac{1}{2}\frac{d}{dt}\left(\|\nabla q\|_{L^{2}}^{2}+\frac{a}{b}\int_{\partial\Omega}q^{2}d\sigma(x)\right)=\int_{\Omega}f\partial_{t}qdx. \tag{65}\]
_Now we proceed as for the proof of Theorem 1. We omit further details._

## 4. Acknowledgment

WW was partially supported by an AMS-Simons travel grant.
2307.07935
S2R-ViT for Multi-Agent Cooperative Perception: Bridging the Gap from Simulation to Reality
Due to the lack of enough real multi-agent data and the time-consuming nature of labeling, existing multi-agent cooperative perception algorithms usually select simulated sensor data for training and validation. However, the perception performance is degraded when these simulation-trained models are deployed to the real world, due to the significant domain gap between the simulated and real data. In this paper, we propose the first Simulation-to-Reality transfer learning framework for multi-agent cooperative perception using a novel Vision Transformer, named S2R-ViT, which considers both the Deployment Gap and the Feature Gap between simulated and real data. We investigate the effects of these two types of domain gaps and propose a novel uncertainty-aware vision transformer to effectively relieve the Deployment Gap and an agent-based feature adaptation module with inter-agent and ego-agent discriminators to reduce the Feature Gap. Our intensive experiments on the public multi-agent cooperative perception datasets OPV2V and V2V4Real demonstrate that the proposed S2R-ViT can effectively bridge the gap from simulation to reality and outperform other methods significantly for point cloud-based 3D object detection.
Jinlong Li, Runsheng Xu, Xinyu Liu, Baolu Li, Qin Zou, Jiaqi Ma, Hongkai Yu
2023-07-16T03:54:10Z
http://arxiv.org/abs/2307.07935v4
# S2R-ViT for Multi-Agent Cooperative Perception: Bridging the Gap from Simulation to Reality

###### Abstract

Due to the lack of enough real multi-agent data and the time-consuming nature of labeling, existing multi-agent cooperative perception algorithms usually select simulated sensor data for training and validation. However, the perception performance is degraded when these simulation-trained models are deployed to the real world, due to the significant domain gap between the simulated and real data. In this paper, we propose the _first_ Simulation-to-Reality transfer learning framework for multi-agent cooperative perception using a novel Vision Transformer, named S2R-ViT, which considers both the Deployment Gap and the Feature Gap between simulated and real data. We investigate the effects of these two types of domain gaps and propose a novel uncertainty-aware vision transformer to effectively relieve the Deployment Gap and an agent-based feature adaptation module with inter-agent and ego-agent discriminators to reduce the Feature Gap. Our intensive experiments on the public multi-agent cooperative perception datasets OPV2V and V2V4Real demonstrate that the proposed S2R-ViT can effectively bridge the gap from simulation to reality and outperform other methods significantly for point cloud-based 3D object detection.

## I Introduction

The recent advancement in multi-agent cooperative perception shows potential to overcome the limitations of single-agent perception, which suffers from a restricted perceiving range and occlusion [1, 2]. By leveraging agent-to-agent communication technology to share information, multi-agent cooperative perception systems can significantly enhance perception performance compared to single-agent perception [3, 4]. Because collecting multi-agent data with communication in the real world is difficult, it is expensive and hard to gather enough real data covering diverse and complex real-world environments [2]. Furthermore, the ground-truth data labeling and uniform coordinate projection for multi-agent cooperative perception systems are particularly time-consuming. Therefore, many existing multi-agent cooperative perception research works usually select simulated data for model training and validation [3, 4]. However, when we apply models trained with simulated data to the real world, the perception performance typically degrades. This is because of the significant domain gap between the simulated and real data. In this paper, our research focuses on utilizing labeled simulated data and unlabeled real-world data, in a transfer learning setting, to reduce the domain gap for multi-agent cooperative perception. We observe that the domain gap from simulation to reality for multi-agent cooperative perception includes the following two perspectives.

* **Deployment Gap:** As shown in Fig. 1, different from the ideal simulation setting, the multiple agents might have localization (positional and heading) errors due to the unavoidable GPS errors and communication latency (time delay) during real-world agent-to-agent communication.
* **Feature Gap:** As illustrated in Fig. 1, the point cloud feature distribution in the real world might differ significantly from that of the simulated data, owing to factors such as more complex driving scenarios, different LiDAR channel numbers, mixed traffic flow, various point cloud variations, and so on.
In this paper, we propose the _first_ Simulation-to-Reality (S2R) transfer learning framework for multi-agent cooperative perception using a novel Vision Transformer (ViT), named S2R-ViT, taking both the Deployment Gap and the Feature Gap into consideration. We choose Vehicle-to-Vehicle (V2V) cooperative perception for point cloud-based 3D object detection as the task for algorithm development. Specifically, our framework takes the labeled point cloud data from simulation and the unlabeled data from the real world as input, so as to largely utilize the simulated data. In machine learning research, this setting is widely called Unsupervised Domain Adaptation [5] from a source domain (simulation) to a target domain (reality). The proposed S2R-ViT comprises two key components:

Fig. 1: Illustration of the domain gap (_Deployment Gap_, _Feature Gap_) for multi-agent cooperative perception from simulation to reality. Here we use Vehicle-to-Vehicle (V2V) cooperative perception in autonomous driving as an example. CAV indicates the Connected Autonomous Vehicle.

(1) S2R-UViT: a novel S2R Uncertainty-aware Vision Transformer to effectively relieve the _uncertainties brought by the Deployment Gap_. Specifically, S2R-UViT includes a Local-and-Global Multi-head Self Attention (LG-MSA) module to enhance feature interactions across all agents' spatial positions, tolerating the uncertainty drawbacks, and an Uncertainty-Aware Module (UAM) to enhance the ego-agent features by considering the shared other-agent features of different uncertainty levels. (2) S2R-AFA: S2R Agent-based Feature Adaptation to reduce the _Feature Gap_. S2R-AFA utilizes inter-agent and ego-agent discriminators to extract domain-invariant features to bridge the Feature Gap. Finally, we conducted extensive experiments on two public datasets, namely the simulated OPV2V [4] and the real V2V4Real [2], to justify the effectiveness of our proposed method. Our contributions are summarized as follows.

* To the best of our knowledge, we propose the **first research**, named S2R-ViT, on multi-agent cooperative perception from simulation to reality by investigating two types of domain gaps, _i.e._, the Deployment Gap and the Feature Gap, for point cloud-based 3D object detection.
* We propose a novel Uncertainty-aware Vision Transformer (S2R-UViT) to effectively relieve the uncertainties brought by the Deployment Gap.
* We design an Agent-based Feature Adaptation (S2R-AFA) module that includes inter-agent and ego-agent discriminators to effectively reduce the Feature Gap between simulation and reality.
* We evaluate our proposed method on the large-scale simulated OPV2V dataset and the real V2V4Real dataset; the experiments demonstrate our superior performance in point cloud-based 3D object detection.

## II Related Work

**Multi-Agent Perception.** Multi-agent perception systems can overcome the occlusion and short perceiving range of a single agent via agent-to-agent communication, achieving large-range perceiving, and have attracted the attention of many researchers. Instead of sharing raw sensing data or detected outputs, state-of-the-art methods usually share the intermediate features extracted by neural networks, as they can achieve the best trade-off between accuracy and bandwidth requirements [1, 6, 7, 8]. V2VNet [3] employed a graph neural network to aggregate features extracted by LiDAR from each vehicle. When2com [8] utilizes a spatial confidence-aware communication strategy to use less communication to improve performance.
OPV2V [4] utilizes a self-attention module to fuse the received intermediate features. SyncNet [9] introduces a latency compensation module for time-domain synchronization. V2X-ViT [6] and CoBEVT [10] propose transformer- or axial-attention-based methods to improve the performance. Although these methods have demonstrated impressive performance, all of them are implemented on simulated data, while this paper aims to bridge the gap from simulation to reality. **Challenges in Multi-Agent Perception.** Multi-agent perception systems also introduce new challenges, _e.g._, localization errors, communication latency, and adversarial attacks. These challenges might diminish the benefits of collaboration [6, 7, 11]. To ensure robustness for multi-agent perception, V2X-ViT [6] proposes to use a vision transformer for multi-agent perception and achieves robust performance under GPS errors and communication delay. [12] proposes a pose regression module and a consistency module before feature aggregation to correct pose errors. [8] proposes the first latency-aware collaborative perception system, which realizes feature-level synchronization. [13] investigates adversarial attacks in collaborative perception and designs a novel transfer attack approach on the intermediate features of collaborative perception. [11] proposes a lossy-communication-aware repair network to ensure the robustness of collaborative perception when the shared data are corrupted during communication. To promote multi-agent perception research, several large-scale cooperative perception datasets have been released, _e.g._, the simulation-based OPV2V [4] and V2XSet [6], and the real-world V2V4Real [2]. Because collecting multi-agent perception data with communication in the real world and annotating its ground truth are time- and labor-consuming, many existing research works are based on simulated data. Fig. 2: Overview of the proposed **S2R-ViT** for multi-agent cooperative perception from simulation to reality, which leverages the _S2R-UViT_ module to handle the uncertainty in the Deployment Gap and tackles the Feature Gap through the _S2R-AFA_ module, including the inter-agent discriminator \(D_{i}\) and the ego-agent discriminator \(D_{e}\). Source Domain: labeled simulated data; Target Domain: unlabeled real-world data. Best viewed in color. In this paper, we focus on improving point cloud-based cooperative 3D object detection in the real world by largely utilizing labeled simulated data together with only unlabeled real-world data. **Domain Adaptation for Perception.** Domain adaptation aims to adapt a machine learning model trained on a source domain to a target domain. Many domain adaptation works focus on RGB camera data [14, 15, 16, 17], while an increasing number of works address this problem for LiDAR data [18, 19, 20, 7]. Specifically, a Sparse Voxel Completion Network [18] is designed to complete the 3D surfaces of a sparse point cloud and uses local adversarial learning to model the surface prior. Semantic Point Generation (SPG) [19] enhances the reliability of LiDAR detectors against domain gaps by generating semantic points at the 3D objects. CoSMix [20] is proposed for 3D LiDAR segmentation to mitigate the domain gap by creating two new intermediate domains of composite point clouds obtained through a novel mixing strategy at the input level. [7] proposes a Multi-agent Perception Domain Adaptation (MPDA) framework to bridge the domain gap of the shared data in communication for multi-agent perception.
In this paper, an agent-based feature adaptation module is proposed to reduce the feature gap between simulated and real data for multi-agent perception. ## III Methodology As illustrated in Fig. 2, the proposed S2R-ViT framework is an end-to-end unified deep learning pipeline, including 1) V2V metadata sharing, 2) feature extraction and sharing, 3) S2R-ViT, and 4) a detection head. ### _Overview of Architecture_ **1) V2V metadata sharing.** We select one of the CAVs as the ego vehicle and construct a spatial graph around it where each node is a CAV within the communication range. Upon receiving the relative pose and extrinsic information of the ego vehicle, all the other nearby CAVs project their own LiDAR data into the ego vehicle's coordinate frame. **2) Feature extraction and sharing.** We leverage the anchor-based PointPillar method [21] to extract intermediate visual features from point clouds because of its low inference latency and optimized memory usage. Each CAV has its own LiDAR feature extraction module. The ego vehicle receives the neighboring CAVs' visual features via communication after each CAV extracts its features. **3) S2R-ViT.** The intermediate features aggregated from the surrounding CAVs are fed into our major component, named S2R-ViT, which consists of the S2R-UViT and S2R-AFA modules. These two modules are detailed in Sec. III-B and Sec. III-C, respectively. **4) Detection head.** After receiving the final fused feature maps, a prediction head is utilized for 3D bounding-box regression and classification. ### _S2R-UViT: Simulation-to-Reality Uncertainty-aware Vision Transformer_ The Deployment Gap from simulation to reality brings different uncertainties to both the ego and neighboring agents, _e.g._, spatial bias caused by GPS errors and spatial misalignment in the coordinate projection caused by communication latency. _How to effectively reduce the degradation caused by these uncertainties is an essential problem and open question for S2R multi-agent perception research._ In this paper, we propose to answer this question from two perspectives: uncertainties can be relieved by enhancing (1) the feature interactions across all agents' spatial positions more comprehensively and (2) the ego-agent features by considering the shared other-agent features of different uncertainty levels. These two perspectives motivate us to develop the novel Local-and-Global Multi-head Self Attention (LG-MSA) module and the Uncertainty-Aware Module (UAM), respectively. Fig. 3 presents the two major modules of the proposed S2R-UViT. #### III-B1 Local-and-Global Multi-head Self Attention (LG-MSA) In order to enhance the feature interactions across all agents' spatial positions more comprehensively, we propose LG-MSA to promote both local and global feature interactions. Fig. 3: Architecture of the proposed S2R-UViT: Simulation-to-Reality Uncertainty-aware Vision Transformer. In the proposed LG-MSA, a local feature-based attention is utilized to focus on the local details of spatial features, while a global feature-based attention attends to a wide range of spatial features. Specifically, the fused features of the ego agent and the other agents are split into two branches (_i.e._, a local branch and a global branch) to process local-based and global-based attention, respectively; the two outputs are then concatenated and fed into a Self Attention (SA) module to further capture both local and global information.
Inspired by [22, 23], after retrieving the whole features \(F_{e,o}\in\mathbb{R}^{H\times W\times kC}\) with spatial dimension \(H\times W\) from all agents (\(e\): ego-agent, \(o\): other agents, \(k\): number of all agents), we reshape them into \(\mathbb{R}^{h\times H\times W\times\frac{kC}{h}}\). After that, taking \(h=8\) as the head number and \(n=2\) as the number of window types, the \(h\) heads of the standard Multi-head Self Attention (MSA) module [24] are evenly divided into two groups with different window sizes, _i.e._, \(4\times 4\) for the local branch and \(8\times 8\) for the global branch. In the _local branch_, the split feature \(F^{l}_{e,o}\in\mathbb{R}^{\frac{h}{n}\times H\times W\times\frac{kC}{h}}\) is fed into the MSA\({}_{L}\) with a small window size of \(4\times 4\) to enhance the local details of spatial features. In the _global branch_, the other split feature \(F^{g}_{e,o}\in\mathbb{R}^{\frac{h}{n}\times H\times W\times\frac{kC}{h}}\) is fed into the MSA\({}_{G}\) with a large window size of \(8\times 8\) to capture global spatial feature information. Then we concatenate the two branch outputs and implement the feature interactions by a Self Attention (SA) module to obtain the promoted feature \(F^{p}_{e,o}\in\mathbb{R}^{H\times W\times kC}\). As shown in Fig. 3(a), the proposed LG-MSA computation can be defined as \[F^{p}_{e,o}=\mathrm{SA}(\mathrm{Concat}(\mathrm{MSA}_{L}(F^{l}_{e,o}),\mathrm{MSA}_{G}(F^{g}_{e,o}))). \tag{1}\] #### III-B2 Uncertainty-Aware Module (UAM) To enhance the ego-agent feature based on the shared other-agent features, their different uncertainty levels should not be neglected. When receiving the feature \(F^{p}_{e,o}\) of all agents, we split it into the ego-agent feature \(F^{p}_{e}\in\mathbb{R}^{H\times W\times C}\) and the other-agent shared feature \(F^{p}_{o}\in\mathbb{R}^{H\times W\times(k-1)C}\). The shared other-agent feature \(F^{p}_{o}\) is fed into an Uncertainty Prediction Network (UPN) to predict the uncertainty levels of the spatial features, which generates an uncertainty-level map \(M\in\mathbb{R}^{H\times W\times(k-1)C}\) with the same spatial size as the input. The UPN is an encoder-decoder-based neural network, which is simplified from [25] in our implementation. Inspired by natural selection in the mechanism of evolution, the feature values in the predicted uncertainty-level map \(M\) with high uncertainty levels (_i.e._, low confidences) are reset to 1 using the median as the threshold. This results in a new uncertainty-level map \(M_{t}\). Then, \(M_{t}\) is multiplied with \(F^{p}_{e}\), taking the shared other-agent features of different uncertainty levels into consideration, so as to produce the enhanced ego feature. In other words, only the low-uncertainty spatial features of other agents (corresponding to the non-one entries in \(M_{t}\)) contribute to enhancing the ego-agent feature during the matrix multiplication. Finally, the enhanced ego feature and the other-agent shared feature are concatenated to obtain the combined feature \(F^{h}_{e,o}\in\mathbb{R}^{H\times W\times kC}\). As shown in Fig. 3(a), the computation of the proposed UAM can be formulated as \[F^{h}_{e,o}=\mathrm{Concat}(\Delta[\mathrm{UPN}(F^{p}_{o})]\oplus F^{p}_{e},F^{p}_{o}), \tag{2}\] where \(\Delta[\cdot]\) represents the thresholding process and \(\oplus\) denotes the matrix dot product.
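To make Eqs. (1) and (2) concrete, the following is a minimal PyTorch sketch of the LG-MSA split-and-merge computation and the UAM median-threshold gating. It is a sketch under stated assumptions rather than our exact implementation: the channel-wise split stands in for the head-wise split, the windowed attention omits padding and positional biases, the UPN is a placeholder callable, and averaging the uncertainty map over the \(k-1\) agents is one possible way to reconcile the shapes of \(M_{t}\) and \(F^{p}_{e}\).

```python
import torch
import torch.nn as nn


class LGMSA(nn.Module):
    """Sketch of LG-MSA (Eq. 1): two windowed-attention branches plus a final SA."""

    def __init__(self, dim, heads=4, local_win=4, global_win=8):
        super().__init__()
        self.local_win, self.global_win = local_win, global_win
        # The channel-wise split below approximates the paper's head-wise split.
        self.attn_l = nn.MultiheadAttention(dim // 2, heads // 2, batch_first=True)
        self.attn_g = nn.MultiheadAttention(dim // 2, heads // 2, batch_first=True)
        self.sa = nn.MultiheadAttention(dim, heads, batch_first=True)

    @staticmethod
    def _window_attn(x, attn, win):
        # x: (B, H, W, C); H and W are assumed divisible by the window size.
        B, H, W, C = x.shape
        x = (x.view(B, H // win, win, W // win, win, C)
              .permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C))
        out, _ = attn(x, x, x)
        out = out.view(B, H // win, W // win, win, win, C)
        return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

    def forward(self, f_all):  # f_all: (B, H, W, kC), features of all k agents
        f_l, f_g = f_all.chunk(2, dim=-1)
        out_l = self._window_attn(f_l, self.attn_l, self.local_win)   # MSA_L
        out_g = self._window_attn(f_g, self.attn_g, self.global_win)  # MSA_G
        merged = torch.cat([out_l, out_g], dim=-1)
        B, H, W, C = merged.shape
        seq = merged.view(B, H * W, C)
        out, _ = self.sa(seq, seq, seq)  # final SA mixes the two branches
        return out.view(B, H, W, C)


def uam(f_ego, f_other, upn):
    """Sketch of UAM (Eq. 2): median-thresholded gating of the ego feature."""
    m = upn(f_other)                                           # uncertainty map M
    m_t = torch.where(m > m.median(), torch.ones_like(m), m)   # high levels -> 1
    k_minus_1 = f_other.shape[-1] // f_ego.shape[-1]
    # Average the map over the (k - 1) agents to match f_ego (our assumption).
    gate = m_t.view(*f_ego.shape[:-1], k_minus_1, f_ego.shape[-1]).mean(dim=-2)
    return torch.cat([gate * f_ego, f_other], dim=-1)


# Toy example with k = 2 agents, C = 32 channels, and a 16 x 16 feature map;
# torch.sigmoid stands in for the encoder-decoder UPN.
f_all = torch.randn(1, 16, 16, 64)
f_p = LGMSA(dim=64)(f_all)
f_h = uam(f_p[..., :32], f_p[..., 32:], upn=torch.sigmoid)
```

The two window sizes trade off locality against receptive field: the small-window heads resolve fine spatial detail, while the large-window heads see enough context to tolerate the spatial misalignments caused by the Deployment Gap.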
Combining these local and global attention branches with the typical designs of Transformers [22, 23], including Layer Normalization (LN), MLPs, and skip-connections, forms our proposed S2R-UViT block, as shown in Fig. 3(b). The S2R-UViT block can be expressed as: \[\hat{F}_{e,o}=\mathrm{S2RAttn}(\mathrm{LN}(F_{e,o}))+F_{e,o}, \tag{3}\] \[F^{\prime}_{e,o}=\mathrm{MLP}(\mathrm{LN}(\hat{F}_{e,o}))+\hat{F}_{e,o}, \tag{4}\] where \(\hat{F}_{e,o}\) and \(F^{\prime}_{e,o}\) denote the output features of our S2RAttn module (_i.e._, the proposed LG-MSA and UAM) and of the MLP module, respectively. ### _S2R-AFA: Simulation-to-Reality Agent-based Feature Adaptation_ To reduce the Feature Gap between the simulated feature \(F_{s}\) and the real feature \(F_{r}\), we design two domain discriminators/classifiers before and after fusion, as shown in Fig. 2: the all-agent features \(F_{s}\) and \(F_{r}\) before fusion are fed into the inter-agent discriminator \(D_{i}\) to classify whether they belong to simulation or reality, and the fused ego features \(F^{e}_{s}\) and \(F^{e}_{r}\) after fusion are likewise classified into simulation or reality by the ego-agent discriminator \(D_{e}\). The binary cross-entropy loss is used for the binary domain (Sim/Real) classification. These two discriminators are adversarially optimized through the Agent-based Feature Adaptation loss \(\mathcal{L}_{AFA}\): \[\min_{G_{m}}\max_{D_{i},D_{e}}\mathcal{L}_{AFA}=\mathbb{E}_{s,r}[D_{i}(F_{s},F_{r})]+\mathbb{E}_{s,r}[D_{e}(F^{e}_{s},F^{e}_{r})], \tag{5}\] where \(\mathbb{E}_{s,r}\) indicates the domain classification error in simulation and reality, respectively, and \(G_{m}\) is our whole model (backbone, S2R-UViT, and detection head), which can be thought of as the generator of a Generative Adversarial Network [26]. Thanks to S2R-AFA, our generator model \(G_{m}\) gains the capability of extracting domain-invariant features across simulation and reality. For 3D object detection, we use the smooth \(L_{1}\) loss for 3D bounding-box regression and the focal loss [27] for classification. The final loss is the combination of the detection loss and the Agent-based Feature Adaptation loss as follows: \[\mathcal{L}_{total}=w_{1}\mathcal{L}_{det}+w_{2}\mathcal{L}_{AFA}, \tag{6}\] where \(w_{1}\) and \(w_{2}\) are balance weights that sum to 1. ## IV Experiment ### _Dataset_ We conduct experiments on two public benchmark datasets (OPV2V [4], V2V4Real [2]) for the V2V cooperative perception task. **OPV2V** is a large-scale _simulated_ dataset for V2V cooperative perception, collected with the CARLA simulator and OpenCDA [28]. It contains 73 diverse scenes with varying numbers of connected vehicles (\([1,5]\)), and its training/validation/testing sets contain 6,764, 1,981, and 2,719 frames, respectively. **V2V4Real** is a _real-world_, large-scale dataset with diverse driving scenarios, collected by two CAVs driving simultaneously in Columbus, Ohio, USA. It is split into train/validation/test sets with 14,210/2,000/3,986 frames, respectively. ### _Experiment Setup_ **Evaluation Metrics.** The final 3D vehicle detection accuracy is used for performance evaluation. Following [4, 6], we set the evaluation range to \(x\in[-140,140]\) meters and \(y\in[-40,40]\) meters, where all CAVs are included in this spatial range in the experiment. We measure the accuracy with Average Precision (AP) at Intersection-over-Union (IoU) thresholds of \(0.5\) and \(0.7\).
**Experiment Settings.** In this work, we address the deployment gap and the feature gap in LiDAR-based object detection and assess our model under two distinct settings: 1. _Deployment-Gap Scenario_: All models are trained on the perfect simulated OPV2V training set, and then evaluated on the OPV2V CARLA Towns and Culver City testing sets under two different settings (_i.e._, _Perfect_ and _Noisy_), respectively. We implement the _Noisy Setting_ following [6]: the positional and heading noises of the transmitter are drawn from Gaussian distributions with default standard deviations of \(0.2\) m and \(0.2^{\circ}\), respectively, to simulate GPS errors, and the communication latency is set to \(100\) ms for all the evaluated models. This Deployment-Gap Scenario only includes the Deployment Gap. 2. _Sim2Real Scenario_: We set the labeled training set of the simulated dataset OPV2V [4] as the source domain and the unlabeled training set of the real-world dataset V2V4Real [2] as the target domain for all models during training, following the same setting as [2]. All trained models are then evaluated on the testing set of V2V4Real. This Sim2Real Scenario includes both the Deployment Gap and the Feature Gap from simulation to reality. Specifically, all models use PointPillar [29] as the backbone with a voxel resolution of \(0.4\) m for both height and width. We adopt the Adam optimizer [30] with an initial learning rate of \(10^{-3}\) and steadily decay it every \(10\) epochs by a factor of \(0.1\). We follow the same hyperparameters as V2X-ViT [6], and all models are trained on two RTX 3090 GPUs. **Compared Methods.** We evaluate six state-of-the-art V2V methods in this paper, all of which use _Intermediate Fusion_ as the main fusion strategy: AttFuse [4], V2VNet [3], F-Cooper [1], V2X-ViT [6], CoBEVT [10], and V2VAM [11]. We first train these methods on the perfect setting of the OPV2V training set and then evaluate them on the _Noisy Setting_ of the OPV2V testing set and on the V2V4Real testing set to assess their performance. In addition, to show the effectiveness of reducing the Feature Gap, two domain adaptation methods, _i.e._, the gradient reversal layer (GRL) [31] and the adversarial gradient reversal layer (AdvGRL) [15], are utilized; they back-propagate reversed gradients to adversarially guide the model toward generating domain-invariant features, using one feature-level domain classifier (after fusion) and one object/proposal-level domain classifier (in the detection head). ### _Quantitative Evaluation_ **Performance in Deployment-Gap Scenario.** Table I shows the performance comparison in the _Deployment-Gap Scenario_, where all methods are evaluated under the _Perfect Setting_ and the _Noisy Setting_, respectively.
\begin{table} \begin{tabular}{l l|c c|c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Setting} & \multicolumn{2}{c}{V2V CARLA Towns} & \multicolumn{2}{c}{V2V Culver City} \\ & & [email protected] & [email protected] & [email protected] & [email protected] \\ \hline \multirow{2}{*}{AttFuse [4]} & Perfect & 0.921 & 0.804 & 0.887 & 0.716 \\ & Noisy & 0.851 & 0.472 & 0.865 & 0.565 \\ \hline \multirow{2}{*}{V2VNet [3]} & Perfect & 0.915 & 0.828 & 0.884 & 0.757 \\ & Noisy & 0.845 & 0.413 & 0.868 & 0.617 \\ \hline \multirow{2}{*}{F-cooper [1]} & Perfect & 0.907 & 0.810 & 0.893 & 0.746 \\ & Noisy & 0.842 & 0.479 & 0.881 & 0.624 \\ \hline \multirow{2}{*}{V2X-ViT [6]} & Perfect & 0.902 & 0.792 & 0.903 & 0.764 \\ & Noisy & 0.801 & 0.395 & 0.882 & 0.606 \\ \hline \multirow{2}{*}{CoBEVT [10]} & Perfect & 0.925 & 0.852 & 0.904 & 0.776 \\ & Noisy & 0.862 & 0.519 & 0.889 & 0.665 \\ \hline \multirow{2}{*}{V2VAM [11]} & Perfect & 0.916 & 0.849 & 0.903 & **0.794** \\ & Noisy & 0.876 & 0.507 & 0.883 & 0.663 \\ \hline \multirow{2}{*}{S2R-UViT} & Perfect & **0.928** & **0.867** & **0.912** & 0.768 \\ & Noisy & **0.879** & **0.579** & **0.900** & **0.681** \\ \hline \hline \end{tabular} \end{table} TABLE I: 3D detection performance on two OPV2V testing sets in the _Deployment-Gap Scenario_ with the _Perfect Setting_ and the _Noisy Setting_. All methods are trained on the _Perfect Setting_. Fig. 4: Robustness in the _Deployment-Gap Scenario_, including GPS (positional, heading) errors and communication latency, on the CARLA Towns testing set of OPV2V [4] with the simulated _Noisy Setting_. Under the _Perfect Setting_, all cooperative perception methods achieve outstanding performance. Nevertheless, when these methods are deployed in the _Noisy Setting_, which exhibits a deployment gap relative to the _Perfect Setting_, V2X-ViT [6], CoBEVT [10], and V2VAM [11] drop by \(39.7\%\), \(33.3\%\), and \(34.2\%\) in AP\(@0.7\) on the V2V CARLA Towns testing set. This indicates the highly negative impact of the V2V deployment gap. In contrast, our proposed S2R-UViT achieves the best performance in most cases under both the _Perfect Setting_ and the _Noisy Setting_, as highlighted in Table I. To assess the models' sensitivity to different deployment-gap levels, we conduct experiments on the V2V CARLA Towns testing set of the OPV2V dataset with varying levels of the _Noisy Setting_. Fig. 4 depicts the higher robustness of our S2R-UViT compared with other fusion methods under the deployment gap. **Performance in Sim2Real Scenario.** The 3D object detection results on the real-world V2V4Real testing set in the _Sim2Real Scenario_ are presented in Table II. Among all intermediate fusion methods without domain adaptation, our proposed S2R-UViT achieves the best performance by relieving the deployment gap on the real V2V4Real testing set. After applying domain adaptation methods, _e.g._, GRL [31] and AdvGRL [15], all methods achieve improved performance. For example, V2X-ViT is improved by \(12.1\%\)/\(5.3\%\) for [email protected]/0.7 with GRL, and by \(13.0\%\)/\(5.4\%\) for [email protected]/0.7 with AdvGRL. Our S2R-ViT (S2R-UViT with S2R-AFA) achieves the best performance of \(44.1\%\)/\(17.0\%\) for [email protected]/0.7, an improvement of \(7.4\%\)/\(3.2\%\) over S2R-UViT. Furthermore, we visualize some 3D object detection results on the V2V4Real testing set under the _Sim2Real Scenario_ in Fig. 5, where our S2R-ViT generates more accurate 3D detection results.
**Ablation Study.** As Table II depicts, all of the proposed components in S2R-ViT contribute to more accurate detection performance. Specifically, our S2R-UViT achieves the best detection performance among all intermediate fusion methods, which is \(3.3\%\) and \(0.8\%\) higher than the second-best model CoBEVT in [email protected] and [email protected], respectively. Adding our S2R-AFA, the proposed S2R-ViT achieves \(44.1\%\) and \(17.0\%\) in [email protected] and [email protected], an improvement of \(7.4\%\) in [email protected] and \(3.2\%\) in [email protected]. ## V Conclusions This paper is the first work that investigates the domain gap in multi-agent cooperative perception from simulation to reality, specifically focusing on the deployment gap and the feature gap in point cloud-based 3D object detection. Based on this analysis, we present the first Simulation-to-Reality transfer learning framework using a novel Vision Transformer, named S2R-ViT, to mitigate these two types of domain gaps; it mainly contains an Uncertainty-aware Vision Transformer and an Agent-based Feature Adaptation module. The experiments show the effectiveness of S2R-ViT. This research presents a significant step forward in multi-agent cooperative perception from simulation to reality. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{V2V4Real Testing} \\ & [email protected] & [email protected] \\ \hline AttFuse [4] & 0.225 & 0.094 \\ AttFuse w/ GRL & 0.356 & 0.139 \\ AttFuse w/ AdvGRL & 0.366 & 0.137 \\ \hline V2VNet [3] & 0.268 & 0.108 \\ V2VNet w/ GRL & 0.376 & 0.122 \\ V2VNet w/ AdvGRL & 0.358 & 0.103 \\ \hline F-Cooper [1] & 0.236 & 0.091 \\ F-Cooper w/ GRL & 0.372 & 0.116 \\ F-Cooper w/ AdvGRL & 0.364 & 0.135 \\ \hline V2X-ViT [6] & 0.274 & 0.103 \\ V2X-ViT w/ GRL & 0.395 & 0.156 \\ V2X-ViT w/ AdvGRL & 0.404 & 0.157 \\ \hline CoBEVT [10] & 0.334 & 0.130 \\ CoBEVT w/ GRL & 0.398 & 0.163 \\ CoBEVT w/ AdvGRL & 0.393 & 0.163 \\ \hline V2VAM [11] & 0.332 & 0.120 \\ V2VAM w/ GRL & 0.390 & 0.146 \\ V2VAM w/ AdvGRL & 0.401 & 0.161 \\ \hline S2R-UViT & **0.367** & **0.138** \\ S2R-UViT w/ GRL & 0.414 & 0.141 \\ S2R-UViT w/ AdvGRL & 0.414 & 0.157 \\ S2R-UViT w/ S2R-AFA & **0.441** & **0.170** \\ \hline \hline \end{tabular} \end{table} TABLE II: 3D detection performance on the V2V4Real testing set in the _Sim2Real Scenario_. All methods with domain adaptation are trained following the _Sim2Real Scenario_ setting. S2R-UViT w/ S2R-AFA denotes our S2R-ViT. Fig. 5: Visualization example of point cloud-based 3D object detection on the V2V4Real testing set under the _Sim2Real Scenario_. Green and red 3D bounding boxes represent ground truth and prediction, respectively. Best viewed in color.
2310.05535
Commissioning and first measurements of the initial X-ray and γ-ray detectors at FACET-II
The upgraded Facility for Advanced Accelerator Experimental Tests (FACET-II) at SLAC National Accelerator Laboratory has been designed to deliver ultra-relativistic electron and positron beams with unprecedented parameters, especially in terms of high peak current and low emittance. For most of the foreseen experimental campaigns hosted at this facility, the high energy radiation produced by these beams at the Interaction Point will be a valuable diagnostic to assess the different physical processes under study. This article describes the X-ray and γ-ray detectors installed for the initial phase of FACET-II. Furthermore, experimental measurements obtained with these detectors during the first commissioning and user runs are presented and discussed, illustrating the working principles and potential applications of these detectors.
P. San Miguel Claveria, D. Storey, G. J. Cao, A. Di Piazza, H. Ekerfelt, S. Gessner, E. Gerstmayr, T. Grismayer, M. Hogan, C. Joshi, C. H. Keitel, A. Knetsch, M. Litos, A. Matheron, K. Marsh, S. Meuren, B. O'Shea, D. A. Reis, M. Tamburini, M. Vranic, J. Wang, V. Zakharova, C. Zhang, S. Corde
2023-10-09T08:57:19Z
http://arxiv.org/abs/2310.05535v1
# Commissioning and first measurements of the initial X-ray and \(\gamma\)-ray detectors at FACET-II ###### Abstract The upgraded Facility for Advanced Accelerator Experimental Tests (FACET-II) at SLAC National Accelerator Laboratory has been designed to deliver ultra-relativistic electron and positron beams with unprecedented parameters, especially in terms of high peak current and low emittance. For most of the foreseen experimental campaigns hosted at this facility, the high energy radiation produced by these beams at the Interaction Point will be a valuable diagnostic to assess the different physical processes under study. This article describes the X-ray and \(\gamma\)-ray detectors installed for the initial phase of FACET-II. Furthermore, experimental measurements obtained with these detectors during the first commissioning and user runs are presented and discussed, illustrating the working principles and potential applications of these detectors. X-ray detectors, \(\gamma\)-ray detectors, beam-plasma interaction, Strong-Field QED ## I Introduction The detection and measurement of high energy photons (X-rays and \(\gamma\)-rays) produced in the laboratory remain an important challenge in several experimental contexts, largely due to the low probability of interaction of these photons with matter. Furthermore, since the cross-sections of photon-matter interactions depend strongly on the incoming photon energy, these detectors are typically designed for a defined energy range, outside of which their sensitivity significantly drops. At the new Facility for Advanced Accelerator Experimental Tests (FACET-II), which accelerates electron - and ultimately positron - bunches to a maximum delivered beam energy of 13 GeV using the middle kilometer of the Linear Accelerator (LINAC) of the SLAC National Accelerator Laboratory, several experimental campaigns rely on the X-ray and \(\gamma\)-ray photons produced by these particle beams to retrieve valuable information about the key physical processes under study. Yet, depending on the experiment, the relevant spectral range of these photons can span from several to hundreds of keV all the way up to the beam energy. At FACET-II, the X-ray and \(\gamma\)-ray photons are produced at the Interaction Point (IP) where the relativistic electrons interact with matter and/or with an Ultra-High-Intensity (UHI) laser pulse. After the interaction, these high energy photons co-propagate in vacuum with the relativistic particle bunch until the electrons are deflected down by the imaging spectrometer dipole magnet placed \(\approx 13\) m downstream of the IP. Both the photons and electrons exit the vacuum pipe through a 5 mm thick Al window placed \(\approx 20\) m downstream of the IP, after which they are detected at the so-called Dump Table, an optical table immediately prior to the beam dump where the detector hardware is mounted (see Fig. 1). At the location of the dump table, the photon axis is vertically separated by \(\approx 60\) mm from the dispersed electron axis for the nominal deflection of the spectrometer dipole magnet. This article starts with a brief introduction of the working principles of these high energy radiation detectors together with a description of the installed hardware. Afterwards, the article shows how different experiments at FACET-II benefit from these detectors, presenting the first X-ray and \(\gamma\)-ray measurements obtained during the initial user-assisted commissioning runs.
## II Design of GAMMA detectors at FACET-II Prior to the initial runs of FACET-II, a collaborative effort involving users from different experiments was carried out with the goal of developing a unified set of diagnostics that would meet the measurement requirements of the initially planned experiments. Based on the outcomes of several simulation tools described in the next paragraphs, the first set of two scintillation-based detectors was developed and installed. Referred to as GAMMA1 and GAMMA2, they are designed to acquire X-ray/\(\gamma\)-ray angular and spectral information. When a high energy photon propagates through the scintillator, some of its energy is deposited in the bulk of the material, part of which is then converted into visible light. The amount of deposited energy depends on the choice of the scintillation material, but also on the incoming photon energy. For instance, a photon with energy \(\lesssim\) 1 keV will deposit, on average, almost its entire energy in most commercially available scintillators, i.e. it will be absorbed. In contrast, a 1 GeV photon will deposit a much smaller fraction of its energy in the passage through the scintillator screen. In this article we will refer to the _spectral response_ of the detector as the average fraction of the incoming photon energy \(\hbar\omega\) that is deposited in the scintillator, \(\Gamma_{\rm dep}(\hbar\omega)\). Given the wide photon spectra produced during different experimental campaigns at FACET-II, understanding the spectral response of these X-ray and \(\gamma\)-ray detectors over the relevant photon energies is of key importance for understanding the sensitivity of these detectors. The spectral responses have been calculated using the GEANT4 simulation toolkit [1]. The main advantage of using GEANT4 with respect to tabulated data of X-ray absorption is to account for the photon-matter interactions that happen prior to the interaction of the incoming photons with the scintillation screens. Namely, these simulations account for the secondary particle production that can ultimately contribute to the total energy deposited in the scintillator. For this purpose, the angular distributions of these secondary particles as well as the spatial distribution of the detectors need to be accounted for in the simulations. In the GEANT4 simulations used here, this was achieved by tracking both the primary and secondary particles through a simplified version of the FACET-II geometrical set-up. In these simulations all the scintillating screens, as well as other elements on the photon axis, had a transverse spatial extent of \(10\times 10~{}{\rm cm}^{2}\), corresponding to the acceptance angle of the different vacuum components from the IP to the GAMMA1 and GAMMA2 detectors. Furthermore, all these elements were placed with the same longitudinal spacing as in the experimental set-up. Figure 2(a) shows an example of the spectral response of two GAMMA1 scintillators as computed using GEANT4, with and without the effect of the Al exit window. For each photon energy, \(10^{6}\) photons were used to compute the averaged energy deposited in the scintillation screen. We observe that at low energies (\(10\)-\(20~{}{\rm keV}\)) the Al window absorbs the photons and therefore no energy is deposited in the scintillator, meaning that those photons cannot be detected.
For high energy photons (\(\gtrsim 2~{}{\rm MeV}\)) the Al window has the opposite effect on the amount of energy deposited in the scintillator: the secondary particles produced during the passage through the Al exit window reach the detectors and increase the amount of energy deposited in the scintillator. ### _Gamma1_ The GAMMA1 detector at FACET-II is a scintillation-based X-ray and \(\gamma\)-ray detector designed to measure the integrated radiation yield and its angular distribution. This detector has two scintillation screens that can be individually inserted into the photon axis: a DRZ-FINE\({}^{\rm TM}\) screen (manufactured by Mitsubishi Chemical Group) and a pixelated CsI array (manufactured by Epic-Crystal). The CsI array, formed by \(165\times 165\) crystals of \(0.5\times 0.5\times 3~{}{\rm mm}^{3}\) size, offers better sensitivity than the DRZ-FINE, both in terms of the spectral response (see Fig. 2a) and light output (number of scintillation photons emitted for a given amount of deposited energy). This better sensitivity comes at the cost of a worse spatial resolution, given by the transverse size of an individual CsI crystal (0.5 mm). The visible light emitted by the scintillator is imaged via a Nikon NIKKOR 50mm f/1.2 objective on an Allied Vision Manta G-125 GigE camera. Similarly to the effect of the Al window on the spectral responses discussed above, a foil of high-Z material, such as 0.1 mm tungsten, can be installed on the upstream face of the DRZ-FINE scintillator in order to increase its sensitivity to high photon energies (\(\gtrsim 2\) MeV). Fig. 1: Sketch of the relevant beam line elements for the high energy radiation measurement at FACET-II. Fig. 2: Spectral response of GAMMA1 scintillation screens (a) and of the GAMMA2 scintillation screen (DRZ-FINE) with different conversion filters (b), as computed using GEANT4. See text for details regarding the filter and scintillator layout. ### _Gamma2_ The GAMMA2 detector at FACET-II is a scintillation-based X-ray and \(\gamma\)-ray detector designed to assess the spectral distribution of the incoming high-energy photons. It consists of a set of filters placed immediately prior to a second DRZ-FINE scintillating screen centered on the photon axis at \(\sim 70\) cm distance downstream of the GAMMA1 scintillator (see Fig. 1). The set of filters, distributed as in an axis-symmetric pie chart around the photon axis (see Fig. 5(a)), is glued onto the upstream face of the GAMMA2 scintillating screen. The rear face of the GAMMA2 scintillating screen is imaged via a Nikon NIKKOR 50 mm f/1.2 objective on a Manta G-125 GigE camera. The working principle of this detector relies on the modification of the spectral response by the different filter materials. For X-ray photons with energies below 100 keV, a pair of Ross filters [17] can be used to accurately measure the amount of radiation in a given spectral range. For higher photon energies, step filters of different materials and thicknesses have been used to reconstruct the incoming spectra from the transmission rates [13]. However, as explained earlier, for photon energies above 1 MeV these filters act as converters, and thus both the absorption and the secondary particle production need to be taken into account. The conversion phenomenon occurring at these high photon energies actually plays a central role in extending the GAMMA2 detector sensitivity up to the tens-of-MeV energy range, where absorption/transmission-based detectors are very insensitive.
For the first set of user-assisted commissioning runs at FACET-II, a set of Cu and W step filters was mounted on the GAMMA2 detector. One of the filter placements, referred to as "Gap" in Fig. 5(a), was left empty for normalisation purposes. The associated spectral responses are shown in Fig. 2(b). It should be noted that the Al window as well as the GAMMA1 scintillator are included in the simulations performed to compute these spectral responses, which allowed their absorption and secondary-particle production to be accounted for. These curves show the trend explained above: for low incident photon energies (\(\lesssim 1~{}\mathrm{MeV}\)) the thicker filters lead to a lower deposited energy in the scintillator, and thus a lower signal (transmission mode), whereas for high incident photon energies (\(\gtrsim 1~{}\mathrm{MeV}\)) the thicker filters lead to a stronger signal (conversion mode). As will be shown in the following sections, the relative signals on the GAMMA2 scintillator behind each filter can be compared, with the help of simulations, to assess the spectral distribution of the incoming X-ray and \(\gamma\)-ray photons. It should be noted that the choice of an axis-symmetric pie distribution of the GAMMA2 filters is optimised for cylindrically symmetric radiation profiles. Yet, deviations from this ideal distribution can be corrected during the data analysis by weighting the signals after each of the GAMMA2 filters using the corresponding GAMMA1 angular distribution recorded upstream. ## III Commissioning results The first set of consistent measurements of high-energy photons at FACET-II was carried out by inserting Al foils into the electron beam axis at the IP to produce bremsstrahlung photons. These Al foils of thicknesses ranging from 0.1 mm to 2 mm have been installed at the IP of FACET-II for the "Near-field-CTR-based self-focusing in beam-multifoil collisions" E332 experiment [18]. This experiment aims to measure a significant production of \(\gamma\)-ray photons when the Near-Field-CTR effect dominates the beam-solid interaction over multiple scattering, leading to a strong focusing effect on the beam and a high transfer efficiency from the electron beam energy to the \(\gamma\)-ray radiation. During the initial commissioning runs the beam parameters were such that the photons produced in the beam-solid interactions originated predominantly from bremsstrahlung. By inserting Al foils of different thicknesses, the linearity of the detectors was assessed. The result of this test for GAMMA1 with the CsI scintillating screen is shown in Fig. 3(a), indicating an acceptable linear relation between the GAMMA1 signal and the thickness of the Al foil used to produce bremsstrahlung radiation. The GAMMA2 signal produced by these photons behind each filter was also recorded and compared with the theoretical predictions. For this analysis, a 1 mm thick W foil was inserted at the IP to maximize the flux of bremsstrahlung photons. The result of this analysis is shown in Fig. 3(b). In this plot, both the simulated signals and the experimental signals are normalised by the corresponding GAMMA1 signals as explained in Sec. II-B, but also by the no-filter signal. For the theoretical values, the spectral responses shown in Fig. 2(b) are applied to the analytical bremsstrahlung spectrum [11] produced by 10 GeV electrons propagating through a W foil.
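To illustrate how such theoretical values are obtained, the following is a minimal numerical sketch of the procedure: a trial photon spectrum is folded through the simulated spectral responses to predict the relative GAMMA2 signal behind each filter. The response curves and the thin-target \(1/E\) bremsstrahlung shape used below are simplified placeholders standing in for the GEANT4 curves of Fig. 2(b) and the analytical spectrum of Ref. [11].

```python
import numpy as np

# Photon energy grid [keV], spanning roughly 10 keV to 10 GeV.
E = np.logspace(1, 7, 600)

def brems_spectrum(E_keV, E_beam_keV=1.0e7):
    """Simplified thin-target bremsstrahlung photon-number spectrum ~ 1/E."""
    return np.where(E_keV < E_beam_keV, 1.0 / E_keV, 0.0)

def predicted_signal(dNdE, response):
    """Scintillator signal ~ integral of (dN/dE) * E * Gamma_dep(E) over E."""
    return np.trapz(dNdE * E * response, E)

# Placeholder spectral responses Gamma_dep(E); in the real analysis these are
# the GEANT4-computed curves for each filter (and for the empty "Gap" slot).
# The step above 2 MeV mimics the conversion-mode enhancement in thick filters.
responses = {
    "Gap":     0.02 * np.ones_like(E),
    "Cu 1 mm": 0.02 * np.exp(-80.0 / E) * (1.0 + 0.3 * (E > 2.0e3)),
    "W 3 mm":  0.02 * np.exp(-400.0 / E) * (1.0 + 1.0 * (E > 2.0e3)),
}

dNdE = brems_spectrum(E)
gap = predicted_signal(dNdE, responses["Gap"])
for name, resp in responses.items():
    print(f"{name}: normalised signal = {predicted_signal(dNdE, resp) / gap:.3f}")
```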
A fair agreement is observed between the experimental measurement and the theoretical values, benchmarking the modelling of the GAMMA2 detector described above. For these bremsstrahlung measurements the GAMMA2 detector works in conversion mode, i.e. the thickest filters have the highest signals. Fig. 3: (a) Integrated GAMMA1 signal of bremsstrahlung photons for different Al foil thicknesses (blue dots) and linear fit (red line). (b) Experimental and theoretical GAMMA2 signals of bremsstrahlung photons produced with a W target of 1 mm thickness inserted at the IP. ## IV Probing Strong-field QED at FACET-II Similarly to the seminal E144 experiment at the FFTB facility [5], FACET-II hosts an experimental campaign to study the Strong-Field regime of QED in the collision of a 10-TW class laser pulse with the 10-13 GeV electrons. The main goal of this campaign is to observe the transition from perturbative to non-perturbative electron-laser interaction as well as to observe electron-positron pair production in the tunneling regime [15, 14]. In these collisions, the beam electrons that are scattered by the electromagnetic field of the laser pulse emit high energy photons via (non)linear Compton scattering [7, 4]. If detected, these photons can be used to retrieve information about the collision. For instance, theoretical calculations predict that when \(a_{0}>1\) (\(a_{0}\) being the normalised laser vector potential) the divergence of the inverse-Compton photons is proportional to \(a_{0}\) [21]. Therefore measuring the divergence of the emitted photons can be used to track energy jitters of the laser and the true \(a_{0}\) experienced by the colliding electrons on a shot-to-shot basis [9]. It should nevertheless be noted that for the cases of few-cycle pulses or pulses significantly deviating from a Gaussian shape a more detailed analysis is needed, requiring a good experimental characterisation of the laser parameters. In order to test if the GAMMA1 detector is sensitive to this variation of \(\gamma\)-ray divergence for the FACET-II experimental configuration, GEANT4 simulations were performed to compute the spatially resolved energy deposition on the pixelised CsI crystal. In this simulation, the incoming photons were initialised using a Monte-Carlo algorithm that produced the angular and spatial distributions dictated by numerical QED simulations [20, 16]. Figure 4(a) shows the horizontal (along the laser-polarisation direction) lineout of the simulated GAMMA1 signals for three different values of \(a_{0}\). The root-mean-square values of this distribution confirm the linear relation between the measured \(\gamma\)-ray divergence and \(a_{0}\). During the commissioning runs in August 2022, the first laser-electron collisions were achieved at the IP of FACET-II. Scattered electrons with up to 2.5 GeV of energy loss, as well as the associated \(\gamma\)-ray photons, were recorded by the different detectors of the experimental area. In this initial configuration, a laser pulse of \(a_{0}\lesssim 1\) collided with the 10 GeV electron beam at a horizontal angle of \(\approx 30^{\circ}\). At the collision point the electron-beam spot size was much larger than the \(\approx 2~{}\mu\mathrm{m}\) laser focal spot, and the electron beam waist was not set at the laser IP. Preliminary analysis of these collisions suggests that linear Compton scattering was dominant in the measured \(\gamma\)-ray signal.
Yet, scans of the spatial and temporal laser-beam overlap were performed and allowed for characterisation of the collision parameters. In these scans, the GAMMA1 detector provided a clean signature of the collision. Figure 4(b) shows data from a temporal overlap scan that manifests the correlation between the measured \(\gamma\)-ray horizontal pointing and the pointing (angle) of the scattered electrons as measured from the FACET-II electron spectrometer. It should be noted that, due to the \(30^{\circ}\) angle of incidence, the temporal scan also encodes information on the horizontal spatial electron distribution. Both the \(\gamma\)-ray pointing and the scattered electron angle are measured in the non-dispersive plane of the electron spectrometer, which is along the laser polarisation direction. The scattered electron pointing was measured at around \(9~{}\mathrm{GeV}\), the spectrometer being set to image the \(8~{}\mathrm{GeV}\) electrons from the IP to the screen. The clear correlation observed in Fig. 4(b) between the \(\gamma\)-ray pointing and the angle of the scattered electrons corresponds to a converging electron beam with negative transverse position-momentum correlation. As the laser delay \(\Delta T\) is scanned, the collision happens at different transverse positions of the converging electron beam, resulting in the observed time-dependent electron and \(\gamma\) angles. This data shows, in a similar way to a laser-wire scanner [3], how the detection of the high-energy inverse-Compton photons produced in laser-electron collisions can provide a valuable signature of the interaction that can complement and improve the measurements of the scattered electrons, the latter suffering from high background levels of non-scattered electrons. ## V Beam-driven Plasma Wakefield Accelerator The E300 experiment, "Energy Doubling of Narrow Energy Spread Witness Bunch while Preserving Emittance with a High Pump-to-Witness Energy Transfer Efficiency in a Plasma Wakefield Accelerator", is the main beam-driven plasma wakefield accelerator experiment that will explore the current experimental challenges of this accelerator technology at the FACET-II facility [12]. Among these challenges, the preservation of the transverse quality of the accelerated beam, i.e. of the normalised emittance, is one of the next milestones for the application of plasma acceleration technologies to high-energy particle colliders. To achieve this goal, controlling the matching of the beam betatron oscillations in and out of the plasma [2] as well as mitigating the transverse Hosing instability [10] is needed. Recently, simulations have shown that the betatron radiation produced by the accelerated beam can be used to diagnose the presence and mitigation of these two sources of emittance growth (mismatch propagation and Hosing instability) [6]. The non-destructive nature of this novel diagnostic could help optimise the beam-plasma interaction and preserve the emittance of the accelerated beam. During the initial commissioning runs of this experiment, the region of the IP was filled with \(\mathrm{H_{2}}\) gas with static pressures \(P\) ranging from 0.05 to 5 Torr. Similarly to what was done at FACET-I [8], the beam was able to field-ionise the \(\mathrm{H_{2}}\) molecules over \(\approx 3\) meters and drive strong plasma waves.
Fig. 4: (a) Simulated horizontal distribution of the GAMMA1 signal of the high energy radiation produced during laser-beam collisions for three different values of the laser normalised vector potential \(a_{0}\). (b) Correlation between the horizontal angle of laser-scattered electrons and the \(\gamma\)-ray pointing in a temporal synchronisation scan of laser-beam collisions. Despite the large shot-to-shot fluctuations of the beam's longitudinal parameters due to the development of microbunching instabilities along the LINAC, significant energy loss and betatron radiation were consistently observed at every \(\mathrm{H_{2}}\) pressure. Furthermore, multi-GeV particle acceleration was measured and is currently under study. From the GAMMA2 measurement of these betatron photons it was possible to retrieve a fitted critical frequency \(\omega_{c}\) of a synchrotron-like spectrum. Results of this study are shown in Fig. 5. Figure 5(a) shows a GAMMA2 image of the scintillation signal behind each filter. The observed hierarchy of these signals, i.e. higher signal after the thinner filters, indicates that photon absorption dominates over secondary particle production in the GAMMA2 filters. Therefore the detector works in transmission mode, in contrast to the conversion mode of the bremsstrahlung measurements [Fig. 3(b)], and thus most of the betatron radiation photons are below the \(\sim 1\:\mathrm{MeV}\) limit mentioned in Sec. II-B. After averaging the signal behind each GAMMA2 filter over a selection of events (the 20 events with the highest \(\gamma\)-ray yield at each \(\mathrm{H_{2}}\) pressure), the normalised signals - see Sec. III for details of the normalisation procedure - are plotted in Fig. 5(b) for \(P=0.08\) Torr and \(P=1.5\) Torr. Despite the large errorbars originating from the aforementioned shot-to-shot fluctuations, the comparison of the GAMMA2 signals for the two pressures shows that, as the pressure is increased, the signal increases much more behind the thicker filters (a factor of \(\approx 4\) for W 3 mm) than behind the thinner filters (a factor of \(\approx 1.2\) for W 0.1 mm). Qualitatively, this feature is explained by the shift of the betatron spectrum towards higher energies due to higher plasma densities at higher \(\mathrm{H_{2}}\) pressure, which in turn results in more transmission of the higher energy photons and a more significant secondary particle production in the thicker filters. A more quantitative analysis of this data is carried out by fitting a synchrotron spectrum, defined by a critical frequency \(\omega_{c}\), to the experimental data [19, 13]. For each pressure, two fits are performed, one using the Cu filters and the other one using the W filters. For \(P=0.08\) Torr (\(n_{p}\approx 2.5\times 10^{15}\:\mathrm{cm^{-3}}\) assuming single \(\mathrm{H_{2}}\) ionisation), the fitted critical frequencies are \(7\:\mathrm{keV}\) and \(29\:\mathrm{keV}\) for the Cu filters and the W filters, respectively. The corresponding fitted signals are plotted with the blue dashed lines in Fig. 5(b). For \(P=1.5\:\mathrm{Torr}\) (\(n_{p}\approx 5\times 10^{16}\:\mathrm{cm^{-3}}\) assuming single \(\mathrm{H_{2}}\) ionisation) the fitted critical frequencies are \(24\:\mathrm{keV}\) and \(123\:\mathrm{keV}\), respectively.
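Before discussing the caveats of these fits, the following minimal sketch illustrates, under simplifying assumptions, how such a critical energy can be fitted: the universal synchrotron shape is folded through per-filter response curves, and the critical energy is adjusted by least squares to match the normalised signals. All numerical values below (measured ratios and response curves) are placeholders, not the experimental data.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import kv

E = np.logspace(0, 3, 200)  # photon energy grid [keV]

def sync_shape(x):
    """Universal synchrotron function S(x) = x * int_x^inf K_{5/3}(t) dt."""
    return x * quad(lambda t: kv(5.0 / 3.0, t), x, np.inf)[0]

def predicted_ratios(Ec, responses, ref="Gap"):
    # Photon-number spectrum dN/dE ~ S(E/Ec) / E, folded through Gamma_dep(E).
    dNdE = np.array([sync_shape(e / Ec) for e in E]) / E
    sig = {n: np.trapz(dNdE * E * r, E) for n, r in responses.items()}
    return {n: s / sig[ref] for n, s in sig.items()}

responses = {  # placeholder Gamma_dep(E) curves per filter
    "Gap":      0.02 * np.ones_like(E),
    "W 0.1 mm": 0.02 * np.exp(-30.0 / E),
    "W 1 mm":   0.02 * np.exp(-120.0 / E),
    "W 3 mm":   0.02 * np.exp(-300.0 / E),
}
measured = {"Gap": 1.0, "W 0.1 mm": 0.55, "W 1 mm": 0.25, "W 3 mm": 0.10}

def loss(Ec):
    pred = predicted_ratios(Ec, responses)
    return sum((pred[n] - m) ** 2 for n, m in measured.items())

fit = minimize_scalar(loss, bounds=(1.0, 500.0), method="bounded")
print(f"fitted critical energy: {fit.x:.1f} keV")
```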
It should be noted that at the higher plasma density (\(P=1.5\) Torr), significant energy losses of more than 5 GeV were observed, which can substantially modify the shape of the emitted spectra, an effect not taken into account in this fitting analysis. Moreover, a discrepancy in the absolute values of the critical frequencies retrieved from the Cu and W filters is observed and is currently under study. This preliminary analysis shows a first study of the sensitivity of the GAMMA2 detector for the reconstruction of the betatron spectra produced at FACET-II. With the future optimisation of the beam performance at FACET-II, mainly in terms of shot-to-shot stability thanks to the mitigation of longitudinal instabilities, as well as foreseen improvements in the data analysis, the reconstruction of the betatron spectrum should constrain the beam parameter space and associated beam dynamics in the plasma. ## VI Conclusions In this article, we have reported on the design, commissioning and first measurements of X-ray and \(\gamma\)-ray detectors for the initial phase of the new accelerator facility FACET-II. As a result of a collaborative simulation campaign, two scintillation-based detectors, named GAMMA1 and GAMMA2, have been manufactured and installed to measure the broadband high-energy radiation produced at the Interaction Point of different experimental campaigns hosted at the facility. During the first user-assisted commissioning runs of FACET-II, bremsstrahlung, linear Compton scattering, and betatron high energy photons have been produced and measured using the GAMMA1 and GAMMA2 detectors, allowing several commissioning tests and experimental studies of their working principles. The results of the studies presented here show the potential uses of these detectors in the context of several experimental campaigns, from Strong-Field QED to beam-driven plasma wakefield acceleration experiments.
2302.12258
Data leakage in cross-modal retrieval training: A case study
The recent progress in text-based audio retrieval was largely propelled by the release of suitable datasets. Since the manual creation of such datasets is a laborious task, obtaining data from online resources can be a cheap solution to create large-scale datasets. We study the recently proposed SoundDesc benchmark dataset, which was automatically sourced from the BBC Sound Effects web page. In our analysis, we find that SoundDesc contains several duplicates that cause leakage of training data to the evaluation data. This data leakage ultimately leads to overly optimistic retrieval performance estimates in previous benchmarks. We propose new training, validation, and testing splits for the dataset that we make available online. To avoid weak contamination of the test data, we pool audio files that share similar recording setups. In our experiments, we find that the new splits serve as a more challenging benchmark.
Benno Weck, Xavier Serra
2023-02-23T09:51:03Z
http://arxiv.org/abs/2302.12258v1
# Data Leakage in Cross-Modal Retrieval Training: a Case Study ###### Abstract The recent progress in text-based audio retrieval was largely propelled by the release of suitable datasets. Since the manual creation of such datasets is a laborious task, obtaining data from online resources can be a cheap solution to create large-scale datasets. We study the recently proposed SoundDesc benchmark dataset, which was automatically sourced from the BBC Sound Effects web page. In our analysis, we find that SoundDesc contains several duplicates that cause leakage of training data to the evaluation data. This data leakage ultimately leads to overly optimistic retrieval performance estimates in previous benchmarks. We propose new training, validation, and testing splits for the dataset that we make available online. To avoid weak contamination of the test data, we pool audio files that share similar recording setups. In our experiments, we find that the new splits serve as a more challenging benchmark. Benno Weck\({}^{1,2}\), Xavier Serra\({}^{2}\)\({}^{1}\) Huawei Technologies, Munich Research Center, Germany [email protected] \({}^{2}\) Universitat Pompeu Fabra, Music Technology Group, Spain [email protected], [email protected] text-based audio retrieval, cross-modal, duplicates, data leakage, deep learning ## 1 Introduction Retrieving audio through textual search queries was traditionally approached by extracting metadata from all audio files in the collection and selecting items by text-based matching algorithms. With the advent of deep-learning-based methods, it became feasible to map search queries directly into the audio content domain in order to retrieve items at a large scale. This form of content-based audio retrieval is commonly referred to as _text-based audio retrieval_. The research in this relatively young field is mostly driven by the availability of large-scale datasets. These datasets serve as a source for the necessary training data and, additionally, allow for a comparative evaluation of different approaches. For text-based audio retrieval, the most commonly used datasets are _Clotho_ [1] and _AudioCaps_ [2]. Both datasets were originally designed for the task of automatic audio captioning but lend themselves well to cross-modal retrieval. Nevertheless, there are certain drawbacks to both: (i) Clotho is limited in size and variability of the audio content, and (ii) the audio content in AudioCaps is not freely accessible. The recently presented _SoundDesc_ dataset [3] addresses both shortcomings: it is large in size while keeping a wide variation in content topics, and its audio is freely available for research purposes. Moreover, the authors propose it as a benchmark for text-based audio retrieval. The dataset was sourced from the BBC Sound Effects Archive website1. Due to the semi-automatic nature of the dataset creation process, it is more likely to contain undesired artefacts. We study the data distribution among the publicly available splits of the dataset and identify a couple of defects. Primarily, we detect several duplicate recordings. These duplicate recordings cause an unwanted overlap of the training and the evaluation part of the dataset, a so-called data leakage. This leakage ultimately leads to overly optimistic retrieval scores reported on the test set. We are convinced that these defects need to be corrected so that no erroneous conclusions are made and to maximise the potential of the data.
This is why, in this work, we set out to propose an updated version of the training, validation, and test splits of the SoundDesc dataset. More specifically, our contributions are as follows: Footnote 1: [https://sound-effects.bbcrewind.co.uk/](https://sound-effects.bbcrewind.co.uk/) * We identify duplicates and overlapping recordings in the dataset using an off-the-shelf audio fingerprinting system. * We show that these duplicates lead to a data leakage problem and overly optimistic retrieval scores. * We propose new dataset splits that avoid weak contamination between development and evaluation data by pooling audio files that share similar recording setups. * We make the new splits available online.2 ## 2 Related Work ### Text-based audio retrieval Text-based audio retrieval (sometimes also called language-based audio retrieval) can be described as the problem of ranking a set of audio files according to how closely they match a free-form query text. This is a form of cross-modal retrieval between two modalities, namely natural language text and audio. For this task, text queries are usually provided as single-sentence descriptions of the audio, also referred to as captions. A common approach in cross-modal retrieval is to employ a separate encoder model for each modality and map the respective outputs to a common representation space. All submissions in the Language-based audio retrieval task of the _DCASE 2022_ challenge [4] used this model architecture. Often these bi-encoder architectures rely on pretrained audio and text models [3, 5, 6]. ### Dataset curation The goal of machine learning is to build a system that can generalise, i.e. perform well on previously unseen input data [7]. To judge this generalisation capability, machine learning practitioners usually keep a part of their data as a held-out set for evaluation. This is commonly referred to as training and test splits of a dataset. We usually assume that training and test data are independent of each other and identically distributed. If information about the evaluation data is accessible to the machine learning model during training, these assumptions are violated and we speak of _data leakage_ [8]. This problem has been studied in the context of audio datasets. For example, Sturm [9] uncovers several problems in a benchmark dataset widely used in music information retrieval research. They find some exact duplicates in the dataset and show how certain characteristics of the data can be confounded with the ground-truth labels if left uncontrolled. For instance, filtering the dataset so that all musical excerpts of an artist are kept in one split can have a significant impact on the performance measure. This phenomenon is sometimes called "artist effect" [10]. As a positive example, Fonseca et al. explain their considerations when constructing the splits for the _FSD50K_ dataset [11] - a dataset collected from the online platform Freesound [12]. They differentiate between data leakage and _weak contamination_. In their definition, this contamination can happen if items in a dataset are similar in some regard even if they are not the _same_. Similar to what Sturm [9] describes about grouping songs of the same artist, they group recordings of the same content uploader. ## 3 Data Leakage in SoundDesc While employing the newly released SoundDesc dataset for our work on language-based audio retrieval, we accidentally stumbled across several duplicate audio items3 in the proposed training and test splits.
This led us to perform a more thorough investigation of the general structure of this dataset and the proposed splits. Footnote 3: For example: [https://sound-effects.bbcrewind.co.uk/search?q=07054072%20OR%2007058026](https://sound-effects.bbcrewind.co.uk/search?q=07054072%20OR%2007058026) ### The SoundDesc dataset The SoundDesc dataset was recently proposed by Koepke et al. [3]. It is a collection of audio recordings and sound effects sourced from the _BBC Sound Effects Archive_ website4 and contains 32979 audio files with associated textual descriptions. Additionally, each item in the dataset is labelled with one primary category and potentially additional categories. The authors of the dataset publish the training, validation and testing splits along with benchmark results on these splits. Footnote 4: [https://sound-effects.bbcrewind.co.uk/](https://sound-effects.bbcrewind.co.uk/) ### Detecting duplicates and overlapping recordings Automatically detecting duplicates in the dataset requires a system to measure the similarity between recordings. We employed the publicly available audio fingerprinting software _Panako_ [13] since it was already successfully used to detect duplicates in a similar setting [14]. The _Panako_ algorithm was applied with its default configuration to generate a set of potentially matching audio recording pairs. After manually reviewing a small subset, we decided to only keep pairs according to the following heuristic to filter out spurious matches: pairs are considered duplicates if (i) the number of seconds containing sub-fingerprint matches exceeds 50% of the total match duration, and (ii) the match score is 25 or higher. We are left with 3601 distinct matching pairs. We find that there are not only exact duplicates but also reprocessed recordings and recordings with partial overlaps. These overlaps occur, for example, at the end of one recording and the start of another, or as one or more excerpts of a longer recording. For simplicity, we will refer to all cases as _duplicates_. ### The effect of duplicates on the benchmark results After finding the duplicates in the dataset, we want to assess their influence on the validity of the performance metrics measured on the published evaluation split. To do so, we first identify all pairs of duplicates that are split between the training and the evaluation part (validation & testing) of the dataset. We then create a new training set that is a subset of the original by excluding all items that have a duplicate recording in the evaluation data. This way we can keep the test set unchanged and compare our results to the dataset creators' results. In total, we exclude 1388 of the 23085 files in the training set. To investigate the effect of the reduced training set size, we similarly construct three additional training sets by excluding the same number of files at random. We refer to the new training subsets as _deduplicated_ and _random_, respectively. To assess the impact on the model training, in our experiments, we adopt the Collaborative-Experts (CE) [15] model architecture as used by the SoundDesc authors in their benchmark experiments [3]. An implementation of this model is released together with the dataset.5 After model training, we perform retrieval on the entire original test set. Retrieval performance is measured as the recall at different levels by considering only the top \(k\) elements for each query (R@\(k\)).
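As a reference for how these scores are computed, the following is a minimal sketch of the R@\(k\) metric; the array layout is an assumption for illustration and is not tied to the released evaluation code.

```python
import numpy as np

def recall_at_k(ranked: np.ndarray, target: np.ndarray, k: int) -> float:
    """R@k: fraction of queries whose ground-truth item appears among the
    top-k retrieved results. `ranked` is a (Q, N) array of item indices
    sorted best-first for each query; `target` holds the matching item
    index for each of the Q queries."""
    hits = (ranked[:, :k] == target[:, None]).any(axis=1)
    return float(hits.mean())
```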
We report these metrics for the full test set and the subset of duplicates in the test set (648 items). All results are given in Table 1 as the mean and the standard deviation computed over three runs with different random initialisations. Footnote 5: [https://github.com/akoepke/audio-retrieval-benchmark/](https://github.com/akoepke/audio-retrieval-benchmark/) From the table, we can see that models trained without access to the duplicates in the training set (deduplicated) score significantly lower in all metrics than models trained with the full training set (original). This large drop in performance is most likely only partially due to the fact that the former models have a smaller training set to learn from, since models trained on a randomly reduced training set (random 1-3) suffer only a minor hit in performance. This illustrates the influence of duplicates on the retrieval scores. This effect is also apparent when only looking at the metrics measured on test items that have a duplicate in the training set (duplicates only). Not surprisingly, we find that models can achieve significantly higher retrieval scores on the test subset of only duplicates than on the entire test set when duplicates are left untreated (original). For example, more than half of the duplicates are retrieved as the highest-ranked result (duplicates only/original R@1 = 52.2). After deduplication, the scores reported for the subset (deduplicated) are even below the scores of the entire test set. These findings illustrate that there is a data leakage problem in the publicly available splits of the SoundDesc dataset that leads to overly optimistic benchmark results. We argue that, in its current form, it cannot be used as a benchmark dataset, since it does not allow us to measure whether any progress is made in solving the problem of text-based audio retrieval. ## 4 Proposing a new benchmark To establish a new benchmark on SoundDesc, the flaws discussed above need to be accounted for when splitting the data. Additionally, we study if forms of weak contamination of the test data can be avoided using the metadata associated with the dataset. ### Treating duplicates As a minimal improvement, the discovered data leakage in SoundDesc needs to be fixed. Simply removing the duplicates from the training set would reduce the dataset size, which is not the preferred solution. Instead, we propose to partition the data so that pairs of duplicates remain in the same split. We form groups of recordings by assigning group labels to each pair of duplicates and by merging groups that share the same recording. Items without any duplicates are left ungrouped. Finally, we split the data and keep 15% for validation and testing, each. We use stratified sampling to maintain the same relative category distribution in all splits. We refer to this split as _clean_. ### Avoiding weak contamination The bulk of SoundDesc is sourced from the BBC natural history unit (NHU) archive. It comprises a large number of nature sounds, such as recordings of animals. We surmise that some of these recordings might have a perceptual likeness even if they are not actual duplicates. In particular, recordings that share the same recording setting could have substantial similarities with each other. For example, if multiple vocalisations of a bird are snippets of a long recording session, they might overlap in ambient noise, loudness, etc.
In the context of machine learning, these overlaps can be considered unwanted artefacts, since they could favour models that memorise rather than generalise well. To avoid weak contamination of the test data in SoundDesc, we propose to not split recordings with these kinds of potential overlaps when partitioning the dataset. To automatically identify potential groups of recordings, we rely on metadata of the NHU archive:6 the date of the recording, the name of the recordist(s) and the topic of the recording. \begin{table} \begin{tabular}{l l l l} \hline \hline training set & R@1 & R@5 & R@10 \\ \hline \multicolumn{4}{c}{full test set} \\ \cline{2-4} original & 31.3 \(\pm\) 0.3 & 60.9 \(\pm\) 0.6 & 70.9 \(\pm\) 0.7 \\ deduplicated & 26.6 \(\pm\) 0.6 & 55.5 \(\pm\) 1.1 & 66.1 \(\pm\) 0.5 \\ random 1 & 29.9 \(\pm\) 0.5 & 59.1 \(\pm\) 0.3 & 69.2 \(\pm\) 0.3 \\ random 2 & 29.8 \(\pm\) 0.4 & 58.5 \(\pm\) 0.3 & 68.6 \(\pm\) 0.5 \\ random 3 & 30.2 \(\pm\) 0.1 & 59.2 \(\pm\) 0.4 & 69.5 \(\pm\) 0.1 \\ \multicolumn{4}{c}{duplicates only} \\ \cline{2-4} original & 52.2 \(\pm\) 0.9 & 82.8 \(\pm\) 0.4 & 89.8 \(\pm\) 0.6 \\ deduplicated & 21.6 \(\pm\) 0.6 & 49.7 \(\pm\) 1.8 & 61.4 \(\pm\) 0.9 \\ random 1 & 49.7 \(\pm\) 2.5 & 80.0 \(\pm\) 1.2 & 87.8 \(\pm\) 0.6 \\ random 2 & 50.4 \(\pm\) 0.6 & 81.5 \(\pm\) 1.6 & 88.8 \(\pm\) 0.5 \\ random 3 & 49.8 \(\pm\) 1.4 & 81.0 \(\pm\) 0.9 & 87.9 \(\pm\) 1.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of retrieval results for CE models trained with different training data. The topic is given for all NHU recordings at the start of the description text and separated from the rest with a single dash. Recordings that do not have a recording date or a recordist name associated will be left ungrouped. We can assign 12444 recordings to groups. Table 2 shows examples of the resulting groups. We merge the newly defined groups with the groups of the _clean_ split and use the same stratification process to divide the data. We refer to the resulting splits as _group-filtered_. Footnote 6: This metadata is also accessible through the BBC Sound Effects Archive website but was not included in the SoundDesc dataset. ### Benchmark results To set a benchmark for our proposed splits, we compare the previously introduced CE model with our own model. Our system relies on a bi-encoder architecture that has shown promising results for the task of text-based audio retrieval [5]. We follow the same training procedure described in our previous work [5] and only make a few minor adjustments to the model architecture: As an audio encoder, we employ a pre-trained _PANNs_ model [16] that we keep fixed during training. The audio embedding sequence extracted by this encoder is reduced by computing the mean and maximum across the time dimension and stacking the resulting vectors. The stacked vectors are then mapped to a dimensionality of 768 using a multi-layer perceptron (MLP). As a text encoder, we employ a pre-trained _distilroberta-base_ [17, 18] model. We take the first vector in the extracted text embedding sequence as the encoder result and similarly process it with an MLP. We use the same evaluation metrics as described above. Table 3 compares the results obtained by the two methods on each of our proposed splits. It is apparent from the figures in the table that there are no significant differences between the results achieved by the different models in either of the splits.
As expected, the results for the clean split are in the range of the results of the experiments with the deduplicated training data discussed in Section 3.3. Interestingly, the retrieval scores are significantly lower when the group-filtered split is used. This suggests that it is harder to retrieve the correct recording in this split of the data. A possible explanation for these results may be that there was indeed weak contamination of the test data present in SoundDesc, and models could make use of overlaps in the data to solve the retrieval problem. ## 5 Conclusion In this paper, we demonstrated that a data leakage problem in the publicly available splits of _SoundDesc_ leads to overly optimistic retrieval results. Using off-the-shelf audio fingerprinting software, we identified that the data leakage stems from duplicates in the dataset. We define two new splits for the dataset: a _clean_ split to remove the leakage and a _group-filtered_ split to avoid other kinds of weak contamination of the test data. From the results achieved by two different retrieval models, we conclude that our splits of the dataset serve as a more challenging benchmark for text-based audio retrieval. \begin{table} \begin{tabular}{l l l l} \hline \hline model & R@1 & R@5 & R@10 \\ \hline \multicolumn{4}{c}{clean split} \\ CE & 27.3 \(\pm\) 0.6 & 55.9 \(\pm\) 0.4 & 66.5 \(\pm\) 0.5 \\ Ours & 28.0 \(\pm\) 0.6 & 55.5 \(\pm\) 0.7 & 65.6 \(\pm\) 0.6 \\ \multicolumn{4}{c}{group-filtered split} \\ CE & 20.4 \(\pm\) 0.6 & 43.9 \(\pm\) 0.2 & 53.7 \(\pm\) 0.5 \\ Ours & 20.8 \(\pm\) 0.3 & 44.3 \(\pm\) 0.8 & 54.2 \(\pm\) 0.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of retrieval results achieved by two different methods on our newly proposed SoundDesc splits. \begin{table} \begin{tabular}{l l l l} \hline \hline Date & Recordist name & Topic (start of description) & Description (rest) \\ \hline 1996-11-21 & Graham Ross & Camel Market & close-up mournful calls from camel. background voices in crowd. \\ & & Camel Market & close-up calls from camel. Sounds of crowd \& individual voices. \\ & &... &... \\ & & Rajasthan Musicians & medium close-up playing of flutes. Also sounds of drumming. Some background chatter from crowd. \\ & & Rajasthan Musicians & Sounds of masaks being played. Joined by singers \& later drums. \\ & &... &... \\ 1977-05-31 & Lyndon Bird & Green Tree Frog (Hyla Cinerea) & Chorus close-up with crickets and distant traffic \\ & & Green Tree Frog (Hyla Cinerea) & Chorus close-up with crickets \\ & &... &... \\ & Roy Horton & Common Bee Fly (Bombylius Major) & close-up hum from fly hovering. Birds in background. \\ & & Common Bee Fly (Bombylius Major) & close-up hum from fly hovering. Birds in background. Also sheep, cockerel and voices in background. \\ & &... &... \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of metadata associated with grouped recordings in our proposed split.
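As a concrete illustration of the split construction in Secs. 4.1-4.2, the sketch below merges duplicate pairs into groups with a union-find structure so that linked recordings can be kept in a single split; the pair list and the ID values are illustrative assumptions, not the released tooling.

```python
class UnionFind:
    """Merge duplicate pairs so every group of linked recordings stays together."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# duplicate pairs from the filtered fingerprint matches (illustrative IDs)
duplicate_pairs = [("07054072", "07058026"), ("07058026", "07060000")]
uf = UnionFind()
for a, b in duplicate_pairs:
    uf.union(a, b)

# recordings sharing a root belong to one group and must land in one split;
# the group labels can then be fed to a stratified, group-aware splitter.
groups = {x: uf.find(x) for x in uf.parent}
```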
2305.16099
FAVANO: Federated AVeraging with Asynchronous NOdes
In this paper, we propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments. Despite its popularity, ``classical'' federated learning faces the increasingly difficult task of scaling synchronous communication over large wireless networks. Moreover, clients typically have different computing resources and therefore computing speed, which can lead to a significant bias (in favor of ``fast'' clients) when the updates are asynchronous. Therefore, practical deployment of FL requires to handle users with strongly varying computing speed in communication/resource constrained setting. We provide convergence guarantees for FAVANO in a smooth, non-convex environment and carefully compare the obtained convergence guarantees with existing bounds, when they are available. Experimental results show that the FAVANO algorithm outperforms current methods on standard benchmarks.
Louis Leconte, Van Minh Nguyen, Eric Moulines
2023-05-25T14:30:17Z
http://arxiv.org/abs/2305.16099v2
# FAVAS: Federated Averaging with Asynchronous Nodes ###### Abstract In this paper, we propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVAS, for training Deep Neural Networks (DNNs) in resource-constrained environments. Despite its popularity, "classical" federated learning faces the increasingly difficult task of scaling synchronous communication over large wireless networks. Moreover, clients typically have different computing resources and therefore computing speed, which can lead to a significant bias (in favor of "fast" clients) when the updates are asynchronous. Therefore, practical deployment of FL requires handling users with strongly varying computing speed in communication/resource constrained settings. We provide convergence guarantees for FAVAS in a smooth, non-convex environment and carefully compare the obtained convergence guarantees with existing bounds, when they are available. Experimental results show that the FAVAS algorithm outperforms current methods on standard benchmarks. ## 1 Introduction Federated learning, a promising approach for training models from networked agents, involves the collaborative aggregation of locally computed updates, such as parameters, under centralized orchestration (Konecny et al., 2015; McMahan et al., 2017; Kairouz et al., 2021). The primary motivation behind this approach is to maintain privacy, as local data is never shared between agents and the central server (Zhao et al., 2018; Horvath et al., 2022). However, communication of training information between edge devices and the server is still necessary. The central server aggregates the local models to update the global model, which is then sent back to the devices. Federated learning helps alleviate privacy concerns, and it distributes the computational load among networked agents. However, each agent must have more computational power than is required for inference, leading to a computational power bottleneck. This bottleneck is especially important when federated learning is used in heterogeneous, cross-device applications. Most approaches to centralized federated learning (FL) rely on synchronous operations, as assumed in many studies (McMahan et al., 2017; Wang et al., 2021). At each global iteration, a copy of the current model is sent from the central server to a selected subset of agents. The agents then update their model parameters using their private data and send the model updates back to the server. The server aggregates these updates to create a new shared model, and this process is repeated until the shared model meets a desired criterion. However, device heterogeneity and communication bottlenecks (such as latency and bandwidth) can cause delays, message loss, and stragglers, and the agents selected in each round must wait for the slowest one before starting the next round of computation. This waiting time can be significant, especially since nodes may have different computation speeds. To address this challenge, researchers have proposed several approaches that enable asynchronous communication, resulting in improved scalability of distributed/federated learning (Xie et al., 2019; Chen et al., 2020, 2021; Xu et al., 2021). In this case, the central server and local agents typically operate with inconsistent versions of the shared model, and synchronization in lockstep is not required, even between participants in the same round.
As a result, the server can start aggregating client updates as soon as they are available, reducing training time and improving scalability in practice and theory. Contributions. Our work takes a step toward addressing this challenge by introducing FAVAS, a centralized federated learning algorithm designed to accommodate clients with varying computing resources and support asynchronous communication. * In this paper, we introduce a new algorithm called FAVAS that uses an unbiased aggregation scheme for centralized federated learning with asynchronous communication. Our algorithm does not assume that clients have computed the same number of epochs when contacted, and we give non-asymptotic complexity bounds for FAVAS in the smooth nonconvex setting. We emphasize that the dependence of the bounds on the total number of agents \(n\) is improved compared to Zakerinia et al. (2022) and does not depend on a maximum delay. * Experimental results show that our approach consistently outperforms other asynchronous baselines on the challenging TinyImageNet dataset (Le and Yang, 2015). Our proposed algorithm FAVAS is designed to allow clients to perform their local steps independently of the server's round structure, using a fully local, possibly outdated version of the model. Upon entering the computation, all clients are given a copy of the global model and perform at most \(K\geq 1\) optimization steps based on their local data. The server randomly selects a group of \(s\) clients in each server round, which, upon receiving the server's request, submit an _unbiased_ version of their progress. Although they may still be in the middle of the local optimization process, they send reweighted contributions so that fast and slow clients contribute equally. The central server then aggregates the models and sends selected clients a copy of the current model. The clients take this received server model as a new starting point for their next local iteration. ## 2 Related Works Federated Averaging (FedAvg), also known as local SGD, is a widely used approach in federated learning. In this method, each client updates its local model using multiple steps of stochastic gradient descent (SGD) to optimize a local objective function. The local devices then submit their model updates to the central server for aggregation, and the server updates its own model parameters by averaging the client models before sending the updated server parameters to all clients. FedAvg has been shown to achieve high communication efficiency with infrequent synchronization, outperforming distributed large-mini-batch SGD (Lin et al., 2019). However, the use of multiple local epochs in FedAvg can cause each device to converge to the optima of its local objective rather than the global objective, a phenomenon known as client drift. This problem has been discussed in previous work; see (Karimireddy et al., 2020). Most of these studies have focused on synchronous federated learning methods, which have a similar update structure to FedAvg (Wang et al., 2020; Karimireddy et al., 2020; Qu et al., 2021; Makarenko et al., 2022; Mao et al., 2022; Tyurin and Richtarik, 2022). However, synchronous methods can be disadvantageous because they require all clients to wait whenever one or more clients suffer from high network delays or have more data and therefore need a longer training time. This results in idle time and wasted computing resources.
Moreover, as the number of nodes in a system increases, it becomes infeasible for the central server to perform synchronous rounds among all participants, and synchrony can degrade the performance of distributed learning. A simple approach to mitigate this problem is node sampling, e.g. Smith et al. (2017); Bonawitz et al. (2019), where the server only communicates with a subset of the nodes in a round. But if the number of stragglers is large, the overall training process still suffers from delays. Synchronous FL methods are prone to stragglers. One important research direction is based on FedAsync (Xie et al., 2019) and subsequent works. The core idea is to update the global model immediately when the central server receives a local model. However, when staleness is large, performance is similar to FedAvg, so it is suboptimal in practice. ASO-Fed (Chen et al., 2020) proposes to overcome this problem and handles asynchronous FL with local streaming data by introducing memory terms on the local client side. AsyncFedED (Wang et al., 2022) also relies on the FedAsync instantaneous update strategy and proposes to dynamically adjust the learning rate and the number of local epochs according to staleness. Only one locally updated model is involved in each FedAsync-like global model aggregation. As a result, a larger number of training epochs is required and the frequency of communication between the server and the workers increases greatly, resulting in massive bandwidth consumption. From a different perspective, QuAFL (Zakerinia et al., 2022) develops a concurrent algorithm that is closer to the FedAvg strategy. QuAFL incorporates both asynchronous and compressed communication with convergence guarantees. Each client must compute \(K\) local steps and can be interrupted by the central server at any time. The client updates its model with the (compressed) central version and its current private model. The central server randomly selects \(s\) clients and updates the model with the (compressed) received local progress (since last contact) and the previous central model. QuAFL works with old variants of the model at each step, which slows convergence. However, when time, rather than the number of server rounds, is taken into account, QuAFL can provide a speedup because the asynchronous framework does not suffer from delays caused by stragglers. A concurrent and asynchronous approach aggregates local updates before updating the global model: FedBuff (Nguyen et al., 2022) addresses asynchrony using a buffer on the server side. Clients perform local iterations, and the base station updates the global model only after \(Z\) different clients have completed and sent their local updates. The gradients computed on the client side may be stale. The main assumption is that the client computations completed at each step come from a uniform distribution across all clients. FedBuff is asynchronous, but it is also sensitive to stragglers (the server must wait until \(Z\) different clients have completed all their local updates). Similarly, Koloskova et al. (2022) focus on Asynchronous SGD, and provide guarantees depending on some \(\tau_{max}\). Similar to Nguyen et al. (2022), the algorithm is also impacted by stragglers, at least during the transitional regime. A recent work by Fraboni et al. (2022) extends the idea of Koloskova et al. (2022) by allowing multiple clients to contribute in one round. But this scheme also favors fast clients.
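To make the buffering idea concrete, a minimal sketch of a FedBuff-style server loop is given below; staleness weighting and learning-rate details are omitted, and the delta representation is an assumption, not FedBuff's actual implementation.

```python
import numpy as np

def buffered_server(w, incoming_deltas, Z=10, lr=1.0):
    """FedBuff-style aggregation sketch: the global model is updated only
    once Z client deltas (possibly stale) have been received; fast clients
    naturally fill the buffer more often, which is the bias discussed above."""
    buffer = []
    for delta in incoming_deltas:        # deltas arrive in completion order
        buffer.append(delta)
        if len(buffer) == Z:
            w = w + lr * np.mean(buffer, axis=0)
            buffer.clear()
    return w
```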
Liu et al. (2021) do not rely on buffers, but develop an Adaptive Asynchronous Federated Learning (AAFL) mechanism to deal with speed differences between local devices. Similar to FedBuff, in Liu et al. (2021)'s method, only a certain fraction of the locally updated models contribute to the global model update. Most convergence guarantees for asynchronous distributed methods depend on staleness or gradient delays (Nguyen et al., 2022; Toghani and Uribe, 2022; Koloskova et al., 2022). Only Mishchenko et al. (2022) analyze asynchronous stochastic gradient descent (SGD) independently of the delays in the gradients. However, in the heterogeneous (non-IID) setting, convergence is proved up to an additive term that depends on the dissimilarity bound between the gradients of the local and global objective functions. ## 3 Algorithm We consider optimization problems in which the components of the objective function (i.e., the data for machine learning problems) are distributed over \(n\) clients, i.e., \[\min_{w\in\mathbb{R}^{d}}R(w);\ R(w)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(x, y)\sim p_{\mathrm{data}}^{i}}[\ell(\mathrm{NN}(x,w),y)], \tag{1}\] where \(d\) is the number of parameters (network weights and biases), \(n\) is the total number of clients, \(\ell\) is the training loss (e.g., cross-entropy or quadratic loss), \(\mathrm{NN}(x,w)\) is the DNN prediction function, and \(p_{\mathrm{data}}^{i}\) is the training distribution on client \(i\). In FL, the distributions \(p_{\mathrm{data}}^{i}\) are allowed to differ between clients (statistical heterogeneity). Each client maintains three key values in its local memory: the local model \(w^{i}\), a counter \(q^{i}\), and the value of the initial model with which it started the iterations, \(w_{init}^{i}\). The counter \(q^{i}\) is incremented for each SGD step the client performs locally until it reaches \(K\), at which point the client stops updating its local model and waits for the server request. Upon a server request to client \(i\), the local model and counter \(q^{i}\) are reset. If a server request occurs before the \(K\) local steps are completed, the client simply pauses its current training process, reweights its gradient based on the number of local epochs (defined by \(E_{t+1}^{i}\)), and sends its current _reweighted_ model to the server. We identified the client update \(w^{i}=\frac{1}{s+1}w_{t-1}+\frac{s}{s+1}w^{i}\) of Zakerinia et al. (2022) as a major shortcoming. When the number of sampled clients \(s\) is large enough, \(\frac{s}{s+1}w^{i}\) dominates the update and essentially no server term is taken into consideration. This leads to a significant client drift. As a consequence, QuAFL does not perform well in the heterogeneous case (see Section 5). Second, one can note that the updates in QuAFL are biased in favor of fast clients. Indeed, each client computes gradients at its own pace and may have completed a different number of epochs by the time it is contacted by the central server. It is assumed that clients compute the _same_ number of local epochs in the analysis of Zakerinia et al. (2022), but this is not the case in practice. As a consequence, we propose FAVAS to deal with asynchronous updates without favoring fast clients. A first improvement is to update the local weights directly with the received central model. Details can be found in Algorithm 1.
Another idea to ensure gradient unbiasedness is to reweight the contributions from each of the \(s\) selected clients: this can be done either by dividing by the (actual) number of locally computed epochs, or by the expected number of locally computed epochs. In practice, we define the reweighting factor \(\alpha^{i}=\mathbb{E}[E_{t+1}^{i}\wedge K]\), or \(\alpha^{i}=\mathbf{P}(E_{t+1}^{i}>0)(E_{t+1}^{i}\wedge K)\), where \(\wedge\) stands for \(\min\). We assume that the server performs a number of training epochs \(T\geq 1\). At each time step \(t\in\{1,\ldots,T\}\), the server has a model \(w_{t}\). At initialization, the central server transmits identical parameters \(w_{0}\) to all devices. At each time step \(t\), the central server selects a subset \(\mathcal{S}_{t}\) of \(s\) clients uniformly at random and requests their local models. Then, the requested clients submit their _reweighted_ local models back to the server. When all requested models arrive at the server, the server model is updated based on a simple average (see Line 10). Finally, the server multicasts the updated server model to all clients in \(\mathcal{S}_{t}\). In particular, all clients \(i\notin\mathcal{S}_{t}\) continue to run their individual processes without interruption. ``` Input: number of steps \(T\), learning rate \(\eta\), selection size \(s\), maximum local steps \(K\) /* At the Central Server */ Initialize \(w_{0}\) and send \(w_{0}\) to all clients; every client starts ClientLocalTraining() for \(t=1,\ldots,T\) do Sample a subset \(\mathcal{S}_{t}\) of \(s\) clients uniformly at random forall clients \(i\in\mathcal{S}_{t}\) do Client \(i\) sends its reweighted model \(w_{unbiased}^{i}\) to the server end Update central server model \(w_{t}\leftarrow\frac{1}{s+1}w_{t-1}+\frac{1}{s+1}\sum_{i\in\mathcal{S}_{t}}w_{unbiased}^{i}\) forall clients \(i\in\mathcal{S}_{t}\) do Server sends \(w_{t}\) to client \(i\); client \(i\) restarts ClientLocalTraining() from \(w_{t}\) end end function ClientLocalTraining(): while \(q^{i}<K\) do Compute local stochastic gradient \(\widetilde{g^{i}}\) at \(w^{i}\) Update local model \(w^{i}\gets w^{i}-\eta\widetilde{g^{i}}\) Update local counter \(q^{i}\gets q^{i}+1\) end Wait() end function ``` **Algorithm 1** FAVAS over \(T\) iterations. In red are highlighted the differences with QuAFL. **Remark 1**.: _In FAVAS's setting, we assume that each client \(i\in\{1,...,n\}\) keeps a full-precision local model \(w^{i}\). In order to reduce the computational cost induced by the training process, FAVAS can also be implemented with a quantization function \(Q\). First, each client computes backpropagation with respect to its quantized weights \(Q(w^{i})\). That is, the stochastic gradients are unbiased estimates of \(\nabla f_{i}\left(Q\left(w^{i}\right)\right)\). Moreover, the activations computed at forward propagation are quantized. Finally, the stochastic gradient obtained at backpropagation is quantized before the SGD update. In our supplementary experiments, we use the logarithmic unbiased quantization method of Chmiel et al. (2021)._ ## 4 Analysis In this section we provide complexity bounds for FAVAS in a smooth nonconvex environment. We introduce an abstraction to model the stochastic optimization process and prove convergence guarantees for FAVAS. Preliminaries. We abstract the optimization process to simplify the analysis. In the proposed algorithm, each client asynchronously computes its own local updates without taking into account the server time step \(t\). Here in the analysis, we introduce a different, but statistically equivalent setting.
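Before formalizing this abstraction, the server round of Algorithm 1 can be made concrete with a minimal NumPy sketch, assuming models are flat vectors; the gradient bookkeeping used here is an illustrative assumption, not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def favas_server_round(w_prev, client_models, grad_sums, alphas, s, eta):
    """One FAVAS server round: sample s clients, collect their reweighted
    ("unbiased") models, and average them with the previous server model,
    i.e. w_t = (1/(s+1)) * (w_{t-1} + sum_i w_unbiased^i)."""
    n = len(client_models)
    selected = rng.choice(n, size=s, replace=False)
    unbiased = [client_models[i] - eta * grad_sums[i] / alphas[i]
                for i in selected]
    w_new = (w_prev + sum(unbiased)) / (s + 1)
    for i in selected:                  # selected clients restart from w_t
        client_models[i] = w_new.copy()
    return w_new
```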
At the beginning of each server timestep \(t\), each client maintains a local model \(w^{i}_{t-1}\). We then assume that all \(n\) clients _instantaneously_ compute local SGD steps. The update in local step \(q\) for a client \(i\) is given by: \[\widetilde{h}^{i}_{t,q}=\widetilde{g}^{i}\left(w^{i}_{t-1}-\sum_{s=1}^{q-1} \eta\widetilde{h}^{i}_{t,s}\right), \tag{2}\] where \(\widetilde{g}^{i}\) represents the stochastic gradient that client \(i\) computes for the function \(f_{i}\). We also define \(n\) independent random variables \(E^{1}_{t},\dots,E^{n}_{t}\) in \(\mathbb{N}\). Each random variable \(E^{i}_{t}\) models the number of local steps that client \(i\) could take before receiving the server request. We then introduce the following random variable: \(\widetilde{h}^{i}_{t}=\sum_{q=1}^{E^{i}_{t}}\widetilde{h}^{i}_{t,q}\). Compared to Zakerinia et al. (2022), we do not assume that clients performed the same number of local epochs. Instead, we reweight the sum of the gradients by weights \(\alpha^{i}\), which can be either _stochastic_ or _deterministic_: \[\alpha^{i}=\begin{cases}\mathbf{P}(E^{i}_{t+1}>0)(E^{i}_{t+1}\wedge K)&\text{ stochastic version},\\ \mathbb{E}[E^{i}_{t+1}\wedge K]&\text{deterministic version}.\end{cases} \tag{3}\] And we can define the _unbiased_ gradient estimator: \(\hat{h}^{i}_{t}=\frac{1}{\alpha^{i}}\sum_{q=1}^{E^{i}_{t}\wedge K} \widetilde{h}^{i}_{t,q}\). Finally, a subset \(\mathcal{S}_{t}\) of \(s\) clients is chosen uniformly at random. This subset corresponds to the clients that send their models to the server at time step \(t\). In the current notation, each client \(i\in\mathcal{S}_{t}\) sends the value \(w^{i}_{t-1}-\eta\hat{h}^{i}_{t}\) to the server. We emphasise that in our abstraction, all clients compute \(E^{i}_{t}\) local updates. However, only the clients in \(\mathcal{S}_{t}\) send their updates to the server, and each client \(i\in\mathcal{S}_{t}\) sends only the \(K\) first updates. As a result, we introduce the following update equations: \[\begin{cases}w_{t}=\frac{1}{s+1}w_{t-1}+\frac{1}{s+1}\sum_{i\in\mathcal{S}_{t }}(w^{i}_{t-1}-\eta\frac{1}{\alpha^{i}}\sum_{s=1}^{E^{i}_{t}\wedge K} \widetilde{h}^{i}_{t,s}),\\ w^{i}_{t}=w_{t},&\text{for }i\in\mathcal{S}_{t},\\ w^{i}_{t}=w^{i}_{t-1},&\text{for }i\notin\mathcal{S}_{t}.\end{cases}\] Assumptions and notations. **A1** _Uniform Lower Bound: There exists \(f_{*}\in\mathbb{R}\) such that \(f(x)\geq f_{*}\) for all \(x\in\mathbb{R}^{d}\)._ **A2** _Smooth Gradients: For any client \(i\), the gradient \(\nabla f_{i}(x)\) is \(L\)-Lipschitz continuous for some \(L>0\), i.e. for all \(x,y\in\mathbb{R}^{d}\): \(\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L\|x-y\|\)._ **A3** _Bounded Variance: For any client \(i\), the variance of the stochastic gradients is bounded by some \(\sigma^{2}>0\), i.e. for all \(x\in\mathbb{R}^{d}\): \(\mathbb{E}[\big{\|}\widetilde{g}^{i}(x)-\nabla f_{i}(x)\big{\|}^{2}]\leq\sigma ^{2}\)._ **A4** _Bounded Gradient Dissimilarity: There exist constants \(G^{2}\geq 0\) and \(B^{2}\geq 1\), such that for all \(x\in\mathbb{R}^{d}\): \(\sum_{i=1}^{n}\frac{\|\nabla f_{i}(x)\|^{2}}{n}\leq G^{2}+B^{2}\|\nabla f(x)\|^ {2}\)._ We define the notations required for the analysis. Consider a time step \(t\), a client \(i\), and a local step \(q\). We define \[\mu_{t}=\left(w_{t}+\sum_{i=1}^{n}w^{i}_{t}\right)/(n+1) \tag{4}\] the average over all node models in the system at a given time \(t\).
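As a side note, both reweighting choices in (3) can be estimated by simple Monte Carlo when the distribution of \(E^{i}_{t}\) is only accessible through sampling; the sketch below makes this explicit, with the sampler itself being an assumption for illustration.

```python
import numpy as np

def reweights(E_realized: int, K: int, sample_E, n_mc: int = 100_000):
    """Estimate the two reweighting factors of Eq. (3) for one client:
    the stochastic version P(E > 0) * (E ^ K) using the realized number of
    local steps, and the deterministic version E[E ^ K] (^ denotes min)."""
    draws = np.array([sample_E() for _ in range(n_mc)])
    p_pos = (draws > 0).mean()                    # P(E^i > 0)
    alpha_det = np.minimum(draws, K).mean()       # E[E^i ^ K]
    alpha_sto = p_pos * min(E_realized, K)
    return alpha_sto, alpha_det

# e.g. a client whose step count is Poisson-distributed (toy assumption)
a_sto, a_det = reweights(E_realized=7, K=20,
                         sample_E=lambda: np.random.poisson(5.0))
```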
The first step of the proof is to compute a preliminary upper bound on the divergence between the local models and their average. For this purpose, we introduce the Lyapunov function: \(\Phi_{t}=\left\|w_{t}-\mu_{t}\right\|^{2}+\sum_{i=1}^{n}\left\|w_{t}^{i}-\mu_{t} \right\|^{2}.\) Upper bounding the expected change in potential. A key result from our analysis is to upper bound the change (in expectation) of the aforementioned potential function \(\Phi_{t}\): **Lemma 2**.: _For any time step \(t>0\) we have:_ \[\mathbb{E}\left[\Phi_{t+1}\right]\leq(1-\kappa)\,\mathbb{E}\left[\Phi_{t} \right]+3\frac{s^{2}}{n}\eta^{2}\sum_{i=1}^{n}\mathbb{E}\left\|\hat{h}_{t+1}^{ i}\right\|^{2},\quad\text{with }\kappa=\frac{1}{n}\left(\frac{s(n-s)}{2(n+1)(s+1)}\right).\] The intuition behind Lemma 2 is that the potential function \(\Phi_{t}\) remains concentrated around its mean, apart from deviations induced by the local gradient steps. The full analysis involves many steps and we refer the reader to Appendix B for complete proofs. In particular, Lemmas 16 and 18 allow us to examine the scalar product between the expected node progress \(\sum_{i=1}^{n}\hat{h}_{t}^{i}\) and the true gradient evaluated on the mean model \(\nabla f(\mu_{t})\). The next theorem allows us to compute an upper bound on the averaged norm-squared of the gradient, a standard quantity studied in nonconvex stochastic optimization. Convergence results. The following statement shows that the FAVAS algorithm converges towards a first-order stationary point, as the number of global epochs \(T\) grows. **Theorem 3**.: _Assume **A1** to **A4** and assume that the learning rate \(\eta\) satisfies \(\eta\leq\frac{1}{20B^{2}bKLs}\). Then FAVAS converges at rate:_ \[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left\|\nabla f\left(\mu_{t}\right)\right\| ^{2}\leq\frac{2(n+1)F}{Ts\eta}+\frac{Ls}{n+1}(\frac{\sigma^{2}}{n}\sum_{i}^{ n}a^{i}+8G^{2}b)\eta+L^{2}s^{2}(\frac{720\sigma^{2}}{n}\sum_{i}^{n}a^{i}+5600bG^{2}) \eta^{2},\] _with \(F:=(f(\mu_{0})-f_{*})\), and_ \[\begin{cases}a^{i},b=\frac{1}{\mathbf{P}(E_{t+1}^{i}>0)^{2}}(\frac{\mathbf{P} (E_{t+1}^{i}>0)}{K^{2}}+\mathbb{E}[\frac{\mathbb{1}(E_{t+1}^{i}>0)}{E_{t+1}^{i}\wedge K }]),\max_{i}(\frac{1}{\mathbf{P}(E_{t+1}^{i}>0)})&\textbf{ for }\alpha^{i}=\mathbf{P}(E_{t+1}^{i}>0)(E_{t+1}^{i} \wedge K),\\ a^{i},b=\frac{1}{\mathbb{E}[E_{t+1}^{i}\wedge K]}+\frac{\mathbb{E}[(E_{t+1}^{i }\wedge K)^{2}]}{K^{2}\mathbb{E}[E_{t+1}^{i}\wedge K]},\max_{i}(\frac{\mathbb{ E}[(E_{t+1}^{i}\wedge K)^{2}]}{\mathbb{E}[E_{t+1}^{i}\wedge K]})&\textbf{ for }\alpha^{i}=\mathbb{E}[E_{t+1}^{i}\wedge K].\end{cases}\] Note that the previous convergence result refers to the average model \(\mu_{t}\). In practice, this does not pose much of a problem. After training is complete, the server can ask each client to submit its final model. It should be noted that each client communicates \(\frac{sT}{n}\) times with the server during training, on average. Therefore, an additional round of data exchange represents only a small increase in the total amount of data transmitted. The bound in Theorem 3 contains 3 terms. The first term is standard for a general non-convex target and expresses how initialization affects convergence. The second and third terms depend on the statistical heterogeneity of the client distributions and the fluctuation of the minibatch gradients. Table 1 compares complexity bounds across synchronous and asynchronous methods. One can note the importance of the ratio \(\frac{s}{n}\).
\begin{table} \begin{tabular}{l|c} \hline \hline Method & Units of time \\ \hline FedAvg & \(\left(\frac{FLa^{2}+(1-\frac{s}{2})KC^{2}}{sK}-F\mathbb{L}^{\frac{1}{2}} \mathbb{G}e^{-\frac{1}{2}}+LFB^{2}e^{-1}\right)C_{FedAvg}\) \\ FedBuff & \(\left(FL(\sigma^{2}+G^{2})\epsilon^{-2}+FL((\frac{s^{2}}{s^{2}}+1)(\sigma^{2}+ nG^{2}))^{\frac{1}{2}-\frac{s}{2}}+FLe^{-1}\right)C_{FedBuff}\) \\ AsyncSGD & \(\left(FL(3\sigma^{2}+4G^{2})\epsilon^{-2}+FLG(s\tau_{\text{avg}})\mathbb{L}^{ \frac{1}{2}-\frac{s}{2}}+(s\tau_{\text{max}}F)\mathbb{L}^{\frac{1}{2}}\mathbb{ G}e^{-\frac{1}{2}}\mathbb{G}e^{-\frac{1}{2}}+\frac{1}{s^{2}\sqrt{n}}n\sqrt{ FB}K^{2}L\epsilon^{-1}\) \\ QuAFL & \(\frac{1}{E^{2}}FLK(\sigma^{2}+2KG^{2})\epsilon^{-2}+\frac{n\sqrt{n}}{E\sqrt{N}} FKL(\sigma^{2}+2KG^{2})\epsilon^{-\frac{1}{2}}+\frac{1}{s^{2}\sqrt{n}}n\sqrt{ FB}K^{2}L\epsilon^{-1}\) \\ FAVAS & \(FL(\frac{\sigma^{2}}{n}\sum_{i}^{n}a^{i}+8G^{2}b)\epsilon^{-2}+\frac{n}{s}FL^{2}(K^{2} \sigma^{2}+L^{2}K^{2}G^{2}+\frac{s^{2}\sigma^{2}}{n}\sum_{i}^{n}a^{i}+s^{2}G^{2}b )^{\frac{1}{2}-\frac{s}{2}}+nFB^{2}KLb\epsilon^{-1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: How long one has to wait to reach an \(\epsilon\) accuracy for non-convex functions. For simplicity, we ignore all constant terms. Each constant \(C_{\cdot}\) depends on client speeds and represents the unit of time one has to wait in between two consecutive server steps. \(L\) is the Lipschitz constant, and \(F:=(f(w_{0})-f_{*})\) is the initial conditions term. \(a^{i},b\) are constants depending on client speed statistics, defined in Theorem 3. Compared to Nguyen et al. (2022) or Koloskova et al. (2022), FAVAS can potentially suffer from delayed updates when \(\frac{s}{n}\ll 1\), but FAVAS does _not_ favor fast clients at all. In practice, it is not a major shortcoming, and FAVAS is more robust to the distribution of fast and slow clients than FedBuff/AsyncSGD (see Figure 2). We emphasize that both FedBuff and AsyncSGD rely on strong assumptions: neither the queuing process nor the transitional regime is taken into account in their analysis. In practice, during the first iterations, only fast clients contribute. This induces a serious bias. Our experiments indicate that a huge number of server iterations has to be accomplished to reach the stationary regime. Still, under this regime, slow clients contribute with delayed information. Nguyen et al. (2022); Koloskova et al. (2022) propose to uniformly bound this delay by some quantity \(\tau_{max}\). We keep this notation while reporting complexity bounds in Table 1, but argue that nothing guarantees \(\tau_{max}\) is properly defined (i.e., finite). All analyses except that of Mishchenko et al. (2022) show that the number of updates required to achieve a given accuracy grows linearly with \(\tau_{max}\), which can be very adverse. Specifically, suppose we have two parallel workers - a fast machine that takes only \(1\) unit of time to compute a stochastic gradient, and a slow machine that takes \(1000\) units of time. If we use these two machines to implement FedBuff/AsyncSGD, the gradient delay of the slow machine will be one thousand, because while we wait for the slow machine, the fast machine will produce one thousand updates. As a result, the analysis based on \(\tau_{max}\) deteriorates by a factor of \(1000\). In the literature, guarantees are most often expressed as a function of server steps. In the asynchronous case, this is _inappropriate_ because a single step can take very different amounts of time depending on the method.
For example, with FedAvg or Scaffold (Karimireddy et al., 2020), one must wait for the slowest client for each individual server step. Therefore, we introduce in Table 1 constants \(C\) that depend on the client speed and represent the unit of time to wait between two consecutive server steps. Finally, optimizing the value of the learning rate \(\eta\) with Lemma 12 yields the following: **Corollary 4**.: _Assume **A1** to **A4**. We can optimize the learning rate by Lemma 12 and FAVAS reaches an \(\epsilon\) precision for a number of server steps \(T\) greater than (up to numerical constants):_ \[\frac{FL(\frac{\sigma^{2}}{n}\sum_{i}^{n}a^{i}+8G^{2}b)}{\epsilon^{2}}+(n+1) \left(\frac{FL^{2}(K^{2}\sigma^{2}+L^{2}K^{2}G^{2}+\frac{s^{2}\sigma^ {2}}{n}\sum_{i}^{n}a^{i}+s^{2}G^{2}b)^{\frac{1}{2}}}{s\epsilon^{\frac{3}{2}}} +\frac{FB^{2}KLb}{\epsilon}\right),\] _where \(F=(f(\mu_{0})-f_{*})\), and \((a^{i},b)\) are defined in Theorem 3._ The second term in Corollary 4 is better than the one from the QuAFL analysis (the \(n^{3}\) term of Zakerinia et al., 2022). Although this \((n+1)\) term can be suboptimal, note that it is only present in the second-order term in \(\epsilon\) and therefore becomes negligible when \(\epsilon\) goes to \(0\) (Lu and De Sa, 2020; Zakerinia et al., 2022). **Remark 5**.: _Our analysis can be extended to the case of quantized neural networks. The derived complexity bounds also hold for the case when the quantization function \(Q\) is biased. We make only a weak assumption about \(Q\) (we assume that there is a constant \(r_{d}\) such that for any \(x\in\mathbb{R}^{d}\), \(\|Q(x)-x\|^{2}\leq r_{d}\)), which holds for standard quantization methods such as stochastic rounding and deterministic rounding. The only effect of quantization would be increased variance in the stochastic gradients. We need to add to the upper bound given in Theorem 3 an "error floor" of \(12L^{2}r_{d}\), which remains independent of the number of server epochs. For stochastic or deterministic rounding, \(r_{d}=\Theta(d\frac{1}{2^{2b}})\), where \(b\) is the number of bits used. The error bound is the cost of using quantization as part of the optimization algorithm. Previous works with quantized models also include error bounds (Li et al., 2017; Li and Sa, 2019)._ ## 5 Numerical Results We test FAVAS on three image classification tasks: MNIST (Deng, 2012), CIFAR-10 (Krizhevsky et al., 2009), and TinyImageNet (Le and Yang, 2015). For the MNIST and CIFAR-10 datasets, two training sets are considered: an IID and a non-IID split. In the first case, the training images are randomly distributed among the \(n\) clients. In the second case, each client takes two classes (out of the ten possible) without replacement. This process leads to heterogeneity among the clients. The standard evaluation measure for FL is the number of server rounds of communication to achieve a target accuracy. However, the time spent between two consecutive server steps can be very different for asynchronous and synchronous methods. Therefore, we compare different synchronous and asynchronous methods w.r.t. _total simulation time_ (see below). We also measured the loss and accuracy of the model in terms of server steps and total local client steps (see Appendix C.3). In all experiments, we track the performance of each algorithm by evaluating the server model against an unseen validation dataset. We present the test accuracy and variance, defined as \(\sum_{i=1}^{n}\|w_{t}^{i}-w_{t}\|^{2}\).
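For reproducibility, the two-classes-per-client split described above can be sketched with the usual shard construction of McMahan et al. (2017); the exact shard mechanics below are an assumption, not the released preprocessing code.

```python
import numpy as np

def two_class_split(labels: np.ndarray, n_clients: int, seed: int = 0):
    """Non-IID split sketch: sort samples by label, cut them into
    2*n_clients shards, and hand each client two shards without
    replacement, so each client mostly sees two classes."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels, kind="stable")       # samples grouped by class
    shards = np.array_split(order, 2 * n_clients)
    shard_ids = rng.permutation(2 * n_clients)
    return {c: np.concatenate((shards[shard_ids[2 * c]],
                               shards[shard_ids[2 * c + 1]]))
            for c in range(n_clients)}
```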
We decide to focus on non-uniform timing experiments as in Nguyen et al. (2022), and we base our simulation environment on QuAFL's code1. After simulating \(n\) clients, we randomly group them into fast or slow nodes. We assume that at each time step \(t\) (for the central server), a set of \(s\) clients is randomly selected without replacement. We assume that the clients have different computational speeds, and refer to Appendix C.2 for more details. We assume that only one-third of the clients are slow, unless otherwise noted. We compare FAVAS with the classic synchronous approach FedAvg (McMahan et al., 2017) and two newer asynchronous methods, QuAFL (Zakerinia et al., 2022) and FedBuff (Nguyen et al., 2022). Details on implementing other methods can be found in Appendix C.1. Footnote 1: [https://github.com/ShayanTalaei/QuAFL](https://github.com/ShayanTalaei/QuAFL) We use the standard data augmentations and normalizations for all methods. FAVAS is implemented in Pytorch, and experiments are performed on an NVIDIA Tesla-P100 GPU. Standard multiclass cross entropy loss is used for all experiments. All models are fine-tuned with \(n=100\) clients, \(K=20\) local epochs, and a batch of size \(128\). Following the guidelines of Nguyen et al. (2022), the buffer size in FedBuff is set to \(Z=10\). In FedAvg, the total simulated time depends on the maximum number of local steps \(K\) and the slowest client runtime, so it is proportional to the number of local steps and the number of global steps. In QuAFL and FAVAS, on the other hand, each global step has a predefined duration that depends on the central server clock. Therefore, the global steps have similar durations and the total simulated time is the sum of the durations of the global steps. In FedBuff, a global step requires filling a buffer of size \(Z\). Consequently, both the duration of a global step and the total simulated time depend on \(Z\) and on the proportion of slow clients (see Appendix C.2 for a detailed discussion). We first report the accuracy of a shallow neural network trained on MNIST. The learning rate is set to \(0.5\) and the total simulated time is set to \(5000\). We also compare the accuracy of a Resnet20 (He et al., 2016) with the CIFAR-10 dataset (Krizhevsky et al., 2009), which consists of 50000 training images and 10000 test images (in 10 classes). For CIFAR-10, the learning rate is set to \(0.005\) and the total simulation time is set to \(10000\). In Figure 1, we show the test accuracy of FAVAS and competing methods on the MNIST dataset. We find that FAVAS and other asynchronous methods can offer a significant advantage over FedAvg when time is taken into account. However, QuAFL does not appear to be adapted to the non-IID setting. We identified client-side updating as a major shortcoming. While this is not severe when each client optimizes (almost) the same function, the QuAFL mechanism suffers from significant client drift when there is greater heterogeneity between clients. FedBuff is efficient when the number of stragglers is negligible compared to \(n\). However, FedBuff is sensitive to the fraction of slow clients and may get stuck if the majority of clients are classified as slow and a few are classified as fast. In fact, fast clients will mainly feed the buffer, so the central updates will be heavily biased towards fast clients, and little information from slow clients will be considered. Figure 2 illustrates this phenomenon, where one-ninth of the clients are classified as fast.
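The per-method time accounting described above can be written out as a toy computation; the speed values and round durations below are illustrative assumptions, not the exact simulator settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, K, Z, T = 100, 20, 20, 10, 50
# one-third slow clients, here taken as 10x slower per local step (toy value)
step_time = np.where(rng.random(n) < 1 / 3, 10.0, 1.0)

# FedAvg: each round waits for the slowest sampled client to finish K steps
fedavg = sum(K * step_time[rng.choice(n, s, replace=False)].max()
             for _ in range(T))

# QuAFL / FAVAS: each round lasts a fixed server-clock duration
favas = T * (K * 1.0)

# FedBuff: a round ends once Z clients have finished their K local steps
fedbuff = sum(np.sort(K * step_time)[Z - 1] for _ in range(T))

print(f"FedAvg {fedavg:.0f} | FAVAS {favas:.0f} | FedBuff {fedbuff:.0f}")
```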
To provide a fair comparison, Table 2 gives the average performance of 10 random experiments with the different methods on the test set. In Figure 3(a), we report accuracy on a non-IID split of the CIFAR-10 dataset. FedBuff and FAVAS both perform better than other approaches, but FedBuff suffers from greater variance. We explain this limitation by the bias FedBuff introduces in favor of fast clients. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{IID split} & non-IID split & non-IID split \\ & & (\(\frac{2}{3}\) fast clients) & (\(\frac{1}{9}\) fast clients) \\ \hline FedAvg & \(93.4\pm 0.3\) & \(38.7\pm 7.7\) & \(44.8\pm 6.9\) \\ QuAFL & \(92.3\pm 0.9\) & \(40.7\pm 6.7\) & \(45.5\pm 4.0\) \\ FedBuff & **96.0**\(\pm\)\(0.1\) & \(85.1\pm 3.2\) & \(67.3\pm 5.5\) \\ FAVAS & \(95.1\pm 0.1\) & \(\textbf{88.9}\pm 0.9\) & \(\textbf{87.3}\pm 2.3\) \\ \hline \hline \end{tabular} \end{table} Table 2: Final accuracy on the test set (average and standard deviation over 10 random experiments) for the MNIST classification task. The last two columns correspond to Figures 1 and 2. Figure 1: Test accuracy on the MNIST dataset with a non-IID split between \(n=100\) total nodes, \(s=20\). We also tested FAVAS on the TinyImageNet dataset (Le and Yang, 2015) with a ResNet18. TinyImageNet has 200 classes and each class has 500 (RGB) training images, 50 validation images and 50 test images. To train ResNet18, we follow the usual practices for training NNs: we resize the input images to \(64\times 64\) and then randomly flip them horizontally during training. During testing, we center-crop them to the appropriate size. The learning rate is set to \(0.1\) and the total simulated time is set to \(10000\). Figure 3(b) illustrates the performance of FAVAS in this experimental setup. While the partitioning of the training dataset follows an IID strategy, TinyImageNet provides enough diversity to challenge federated learning algorithms. Figure 3(b) shows that FAVAS scales much better on large image classification tasks than any of the methods we considered. **Remark 6**.: _We also evaluated the performance of FAVAS with and without quantization. We ran the code2 from LUQ (Chmiel et al., 2021) and adapted it to our datasets and the FL framework. Even when the weights and activation functions are highly quantized, the results are close to their full precision counterpart (see Figure 7 in Appendix C)._ Footnote 2: [https://openreview.net/forum?id=clwYez4n8e8](https://openreview.net/forum?id=clwYez4n8e8) ## 6 Conclusion We have presented FAVAS, the first centralised federated averaging method that accounts for asynchrony in resource-constrained environments. We established complexity bounds under verifiable assumptions with explicit dependence on all relevant constants. Empirical evaluation shows that FAVAS is more efficient than synchronous and asynchronous state-of-the-art mechanisms in standard CNN training benchmarks for image classification. Figure 3: Test accuracy on CIFAR-10 and TinyImageNet datasets with \(n=100\) total nodes. Central server selects \(s=20\) clients at each round. Figure 2: Test accuracy and variance on the MNIST dataset with a non-IID split between \(n=100\) total nodes. In this particular experiment, one-ninth of the clients are defined as fast.
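In connection with Remarks 5 and 6, an unbiased stochastic-rounding quantizer can be sketched as follows; the grid and clipping choices are assumptions for illustration, and this is not the LUQ scheme itself.

```python
import numpy as np

def stochastic_round(x: np.ndarray, bits: int = 4, scale: float = 1.0):
    """Unbiased stochastic rounding to a uniform grid: each value is rounded
    up with probability equal to its fractional position between grid points,
    so E[Q(x)] = x (up to clipping) and ||Q(x) - x||^2 <= d * step^2, in line
    with the r_d bound of Remark 5."""
    step = scale / 2 ** (bits - 1)
    y = x / step
    low = np.floor(y)
    up = np.random.random(x.shape) < (y - low)   # round up with prob frac(y)
    q = np.clip(low + up, -2 ** (bits - 1), 2 ** (bits - 1) - 1)
    return q * step
```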
2306.06902
Transformer-based GAN for Terahertz Spatial-Temporal Channel Modeling and Generating
Terahertz (THz) communications are envisioned as a promising technology for 6G and beyond wireless systems, providing ultra-broad continuous bandwidth and thus Terabit-per-second (Tbps) data rates. However, as foundation of designing THz communications, channel modeling and characterization are fundamental to scrutinize the potential of the new spectrum. Relied on time-consuming and costly physical measurements, traditional statistical channel modeling methods suffer from the problem of low accuracy with the assumed certain distributions and empirical parameters. In this paper, a transformer-based generative adversarial network modeling method (T-GAN) is proposed in the THz band, which exploits the advantage of GAN in modeling the complex distribution, and the powerful expressive capability of transformer structure. Experimental results reveal that the distribution of channels generated by the proposed T-GAN method shows good agreement with the original channels in terms of the delay spread and angular spread. Moreover, T-GAN achieves good performance in modeling the power delay angular profile, with 2.18 dB root-mean-square error (RMSE).
Zhengdong Hu, Yuanbo Li, Chong Han
2023-06-12T07:12:53Z
http://arxiv.org/abs/2306.06902v1
# Transformer-based GAN for Terahertz Spatial-Temporal Channel Modeling and Generating ###### Abstract Terahertz (THz) communications are envisioned as a promising technology for 6G and beyond wireless systems, providing ultra-broad continuous bandwidth and thus Terabit-per-second (Tbps) data rates. However, as the foundation of designing THz communications, channel modeling and characterization are fundamental to scrutinize the potential of the new spectrum. Relying on time-consuming and costly physical measurements, traditional statistical channel modeling methods suffer from the problem of low accuracy with the assumed certain distributions and empirical parameters. In this paper, a transformer-based generative adversarial network modeling method (T-GAN) is proposed in the THz band, which exploits the advantage of GAN in modeling the complex distribution, and the powerful expressive capability of the transformer structure. Experimental results reveal that the distribution of channels generated by the proposed T-GAN method shows good agreement with the original channels in terms of the delay spread and angular spread. Moreover, T-GAN achieves good performance in modeling the power delay angular profile, with 2.18 dB root-mean-square error (RMSE). ## I Introduction With the exponential growth of the number of interconnected devices, the sixth generation (6G) is expected to achieve intelligent connections of everything, anywhere, anytime [1], which demands Tbit/s wireless data rates. To fulfill the demand, Terahertz (THz) communications gain increasing attention as a vital technology of 6G systems, thanks to the ultra-broad bandwidth ranging from tens of GHz to hundreds of GHz [2]. The THz band is promising to address the spectrum scarcity and capacity limitations of current wireless systems, and realize long-awaited applications, extending from wireless cognition, localization/positioning, to integrated sensing and communication [3]. To design reliable THz wireless systems, one fundamental challenge lies in developing an accurate channel model to portray the propagation phenomena. Due to the high frequencies, new characteristics occur in the THz band, including frequency-selective absorption loss and rough-surface scattering [4]. However, traditional statistical channel modeling methods suffer from the problem of low accuracy with the assumed certain distributions and empirical parameters. For example, a geometry-based stochastic channel model (GSCM) assumes that the positions of scatterers follow certain statistical distributions, such as the uniform distribution within a circle around the transmitters and receivers [5]. However, the positions of scatterers are hard to characterize by certain statistical distributions, making the GSCM inaccurate for use in the THz band. To this end, an accurate channel modeling method for the THz band is needed. Recently, deep learning (DL) is popular and widely applied in wireless communications [6]. Among different kinds of DL methods, the generative adversarial network (GAN) has the advantage of modeling complex distributions accurately without any statistical assumptions, based on which GAN can be utilized to develop channel models. The authors in [7] train a GAN to approximate the probability distribution functions (PDFs) of stochastic channel responses. In [8], a GAN-based channel modeling method is proposed and demonstrated over an AWGN channel. Nevertheless, these works only address uncomplicated scenarios, and broader applicability to more intricate and practical channels is still needed.
Considering more complex channels, a GAN is designed in [9] to generate synthetic channel matrix samples close to the distribution of real channel samples, obtained from the clustered delay line (CDL) channel model. In [10], a model-driven GAN-based channel modeling method is developed for an intelligent reflecting surface (IRS) aided communication system. These methods [9, 10] learn the distribution of channel matrices to model the channel, and employ convolutional layers to extract image-like features from the channel matrices. However, it is inefficient to directly generate the sparse THz channel matrices with a high dimension, which contain few propagation paths. Moreover, the convolutional layers in the GAN struggle to capture the long-range dependencies among elements of channel matrices. In this paper, a transformer-based GAN spatial-temporal channel modeling method (T-GAN) is proposed in the THz band. In contrast to synthesizing the high-dimensional channel matrices, T-GAN models the channel by generating spatial-temporal channel parameters, which reduces the number of parameters to be learned. Moreover, the transformer structure is integrated to excavate global dependencies among channel parameters. This can enhance the consistency of generated channel parameters, which leads to an improved generation quality of T-GAN. The contributions of this paper are listed as follows.

* We formulate the THz channel modeling problem into a task of learning the distribution of spatial-temporal channel parameters. This reduces the required number of learned parameters, compared with learning the high-dimensional channel matrices in the THz band.
* We propose a T-GAN based THz spatial-temporal channel modeling and generating method, which integrates the transformer structure with the GAN framework. In this method, T-GAN models the channel by generating channel parameters, including the path gain, phase, delay and azimuth angle of arrival. Relying on the capability of the transformer in exploiting the dependencies among the channel parameters, the T-GAN can learn the joint spatial-temporal channel distribution accurately.

The rest of the paper is organized as follows. Sec. II details the proposed T-GAN based channel modeling method. Sec. III demonstrates the performance of the proposed T-GAN method. The paper is concluded in Sec. IV.

**Notation:** \(a\) is a scalar. \(\mathbf{a}\) denotes a vector. \(\mathbf{A}\) represents a matrix. \(\mathbb{E}\{\cdot\}\) describes the expectation. \(\nabla\) denotes the gradient operation.

## II Transformer-based GAN Channel Modeling

In this section, the channel modeling problem is first formulated into a channel distribution learning problem. Then, the basic framework of the proposed transformer-based GAN (T-GAN) is elaborated. Next, the transformer encoder structure is introduced, which is a key component in the proposed T-GAN. Finally, the detailed structure of T-GAN is presented by integrating the transformer encoder structure into the GAN framework.

### _Problem Formulation_

The THz channel can be represented as
\[h(\tau)=\sum_{l=0}^{L-1}\alpha_{l}e^{j\phi_{l}}\delta(\tau-\tau_{l}), \tag{1}\]
which contains \(L\) multi-path components (MPCs). Every MPC can be characterized by a set of parameters as
\[\mathbf{x}_{l}=[\alpha_{l},\phi_{l},\tau_{l},\theta_{l}], \tag{2}\]
where \(\alpha_{l}\) denotes the path gain of the \(l^{th}\) MPC, \(\phi_{l}\) represents the phase, \(\tau_{l}\) denotes the delay, and \(\theta_{l}\) represents the azimuth angle of arrival (AoA).
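To make this parameterization concrete, the following NumPy sketch draws a synthetic set of MPC parameters in the format of Eq. (2), stacks them into the \(L\times 4\) channel matrix used in Eq. (3) below, and accumulates them into a discretized impulse response following Eq. (1). The parameter ranges, delay grid, and resolution are illustrative placeholders, not values from the measurement campaign.

```python
import numpy as np

L = 15                                     # number of MPCs, as in the paper
rng = np.random.default_rng(0)

# Synthetic MPC parameters [alpha_l, phi_l, tau_l, theta_l] per Eq. (2);
# the ranges below are placeholders for illustration only.
alpha = rng.uniform(0.0, 1.0, L)               # path gains
phi = rng.uniform(0.0, 2 * np.pi, L)           # phases [rad]
tau = np.sort(rng.uniform(0.0, 100e-9, L))     # delays [s]
theta = rng.uniform(-np.pi, np.pi, L)          # azimuth AoA [rad]
x = np.stack([alpha, phi, tau, theta], axis=1)  # (L, 4) channel matrix

# Discretize h(tau) of Eq. (1) on a uniform delay grid: each path deposits
# alpha_l * exp(j * phi_l) into its nearest delay bin.
dt, n_bins = 1e-9, 128
h = np.zeros(n_bins, dtype=complex)
bins = np.clip(np.round(tau / dt).astype(int), 0, n_bins - 1)
np.add.at(h, bins, alpha * np.exp(1j * phi))
```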
Then, the THz channel can be characterized by
\[\mathbf{x}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{L}], \tag{3}\]
where the number of MPCs \(L\) is set as 15. The problem of channel modeling can then be described as the generation of channel parameters that forms a distribution of channels. The generating process can be represented by the function
\[\hat{\mathbf{x}}=G(\mathbf{z}|c), \tag{4}\]
where \(\mathbf{z}\) denotes a random vector sampled from a normal distribution, and the variable \(c\) is the condition information representing the distance between the transmitter and receiver. Through the function \(G\), the target channel distribution \(p_{r}(\mathbf{x}|c)\) conditioned on the distance can be approximated by the generated distribution \(p_{g}(\hat{\mathbf{x}}|c)\).

### _Framework of Proposed T-GAN_

The T-GAN is designed to learn the channel generating function. The framework of the proposed T-GAN is shown in Fig. 1, which consists of two sub-networks, namely, the generator \(G\) and the discriminator \(D\). The generator is aimed at generating the fake channel \(G(\mathbf{z}|c)\) conditioned on the distance information \(c\) to fool the discriminator, while the discriminator serves as a classifier, trying to distinguish between the real channel \(\mathbf{x}\) and the fake channel \(G(\mathbf{z}|c)\).

Fig. 1: Framework of the proposed T-GAN.

The two networks are then trained in an adversarial manner, which can be considered as a two-player zero-sum minimax game. Specifically, the training objective can be represented by
\[\min_{G}\max_{D}\mathbb{E}_{\mathbf{x}\sim p_{r}}[\log D(\mathbf{x}|c)]+\mathbb{E}_{\mathbf{z}\sim p_{z}}[\log(1-D(G(\mathbf{z}|c)))], \tag{5}\]
where \(p_{r}\) and \(p_{z}\) represent the distributions of real channels and the noise vector, respectively. The generator minimizes \(\log(1-D(G(\mathbf{z}|c)))\), which represents the probability of the generated channel being detected as fake, while the discriminator maximizes this probability. Therefore, the generator and discriminator compete against each other with opposite objectives in the training process. Through the adversarial training, a Nash equilibrium can be achieved, such that the generator and discriminator cannot improve their objectives by changing only their own network. However, training with the objective function in (5) is unstable, since the training objective is potentially not continuous with respect to the generator's parameters [11]. Therefore, an improved version of GAN, namely, Wasserstein GAN with gradient penalty [11], is adopted. The modified objective function is expressed as
\[\begin{split}\min_{G}\max_{D}\mathbb{E}_{\mathbf{x}\sim p_{r}}[D(\mathbf{x}|c)]+&\mathbb{E}_{\mathbf{z}\sim p_{z}}[(1-D(G(\mathbf{z}|c)))]\\ -&\lambda\mathbb{E}_{\tilde{\mathbf{x}}}[(\|\nabla_{\tilde{\mathbf{x}}}D(\tilde{\mathbf{x}}|c)\|-1)^{2}],\end{split} \tag{6}\]
where the last term is the gradient penalty term to enforce the Lipschitz constraint that the gradient norm of the discriminator is upper-bounded by a maximum value, and the symbol \(\tilde{\mathbf{x}}\) is a uniformly sampled point on the line between \(\mathbf{x}\) and \(G(\mathbf{z}|c)\). Moreover, the parameter \(\lambda\) is the penalty coefficient.
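A minimal TensorFlow sketch of the gradient penalty term in Eq. (6) is given below. It assumes a Keras-style `discriminator` taking the channel and the condition as a pair of inputs; the function name and interface are illustrative and are not taken from the paper's code.

```python
import tensorflow as tf

def gradient_penalty(discriminator, x_real, x_fake, c, lam=10.0):
    # Interpolate uniformly between real and generated channels to obtain
    # the points x_tilde of Eq. (6), then penalize deviations of the
    # discriminator's gradient norm from 1.
    eps = tf.random.uniform([tf.shape(x_real)[0], 1], 0.0, 1.0)
    x_tilde = eps * x_real + (1.0 - eps) * x_fake
    with tf.GradientTape() as tape:
        tape.watch(x_tilde)
        d_out = discriminator([x_tilde, c])
    grads = tape.gradient(d_out, x_tilde)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=-1) + 1e-12)
    return lam * tf.reduce_mean(tf.square(norm - 1.0))
```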
### _Transformer Encoder Structure_

In T-GAN, the channel is inputted as a sequence of MPCs as in (3). Hence, the transformer encoder is utilized to capture the dependencies among the MPCs, and the relationships among the parameters in an MPC. As depicted in the left part of Fig. 2, the transformer encoder consists of 6 stacked identical layers. Each identical layer can be further divided into two sub-layers, the multi-head attention layer and the feed-forward layer. In both of the two sub-layers, a residual connection is applied by adding the input and the output of the sub-layer, represented by \(x+\mathrm{Sublayer}(x)\). Moreover, the two sub-layers are followed by layer normalization, which normalizes the input and improves the stability of training. In the multi-head attention layer, multiple attention layers are applied to the input channel in parallel, so that the model can capture the information of the channel in different subspaces. The implementation of a single attention layer is introduced first. Considering an input channel \(\mathbf{X}=(\mathbf{x}_{1},\cdots,\mathbf{x}_{L})\in\mathbb{R}^{L\times d_{x}}\), it is composed of \(L\) MPCs and every MPC is represented by a vector \(\mathbf{x}_{l}\in\mathbb{R}^{1\times d_{x}}\). Firstly, every MPC in the sequence is transformed by
\[\mathbf{q}_{l}=\mathbf{x}_{l}\mathbf{W}^{q}, \tag{7}\]
\[\mathbf{k}_{l}=\mathbf{x}_{l}\mathbf{W}^{k}, \tag{8}\]
\[\mathbf{v}_{l}=\mathbf{x}_{l}\mathbf{W}^{v}, \tag{9}\]
where \(\mathbf{W}^{q}\in\mathbb{R}^{d_{x}\times d_{k}}\), \(\mathbf{W}^{k}\in\mathbb{R}^{d_{x}\times d_{k}}\), \(\mathbf{W}^{v}\in\mathbb{R}^{d_{x}\times d_{v}}\) are the learned transformation parameters. The symbols \(\mathbf{q}_{l}\in\mathbb{R}^{1\times d_{k}}\), \(\mathbf{k}_{l}\in\mathbb{R}^{1\times d_{k}}\) and \(\mathbf{v}_{l}\in\mathbb{R}^{1\times d_{v}}\) denote query, key and value respectively. The correlation between the query vector and the key vector shows how much attention should be paid to the value vector in the output. To give a concise representation, the vectors are packed into matrices represented by
\[\mathbf{Q}=\mathbf{X}\mathbf{W}^{q}, \tag{10}\]
\[\mathbf{K}=\mathbf{X}\mathbf{W}^{k}, \tag{11}\]
\[\mathbf{V}=\mathbf{X}\mathbf{W}^{v}, \tag{12}\]
where \(\mathbf{Q}\in\mathbb{R}^{L\times d_{k}}\), \(\mathbf{K}\in\mathbb{R}^{L\times d_{k}}\), \(\mathbf{V}\in\mathbb{R}^{L\times d_{v}}\) are the matrix representations of query, key and value. Then, the output can be calculated as
\[\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\Big{(}\frac{\mathbf{Q}\mathbf{K}^{\mathbf{T}}}{\sqrt{d_{k}}}\Big{)}\mathbf{V}, \tag{13}\]
where \(\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})\in\mathbb{R}^{L\times d_{v}}\) is the output of the attention layer, and the term \(\mathrm{softmax}(\frac{\mathbf{Q}\mathbf{K}^{\mathbf{T}}}{\sqrt{d_{k}}})\) is the attention matrix assigned to the value vectors in matrix \(\mathbf{V}\). The \(\mathrm{softmax}\) operation normalizes the attention weights and is defined elementwise as
\[\mathrm{softmax}(\mathbf{x})_{i}=\frac{e^{x_{i}}}{\sum_{j}e^{x_{j}}}, \tag{14}\]
where \(x_{i}\) is the \(i^{th}\) element of the vector \(\mathbf{x}\); the \(\mathrm{softmax}\) operation ensures that the sum of the output equals one.
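The attention computation of Eqs. (10)-(14) fits in a few lines of NumPy. The sketch below is a direct transcription for a single attention layer, with the weight matrices standing in for the learned parameters; concatenating several such heads gives the multi-head layer described next.

```python
import numpy as np

def softmax(x, axis=-1):
    # Row-wise softmax of Eq. (14), stabilized by subtracting the row max.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Single attention layer: X is (L, d_x); Wq, Wk are (d_x, d_k),
    # Wv is (d_x, d_v), as in Eqs. (10)-(12).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Wq.shape[1]
    A = softmax(Q @ K.T / np.sqrt(d_k))   # attention matrix of Eq. (13)
    return A @ V                          # output of shape (L, d_v)
```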
With the single attention layer introduced, the multi-head attention layer is formed by concatenating the results of \(h=4\) attention layers, which can be represented by
\[\mathrm{Head}_{i}=\mathrm{Attention}(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}), \tag{15}\]
\[\mathbf{X}^{o}=\mathrm{Concat}(\mathrm{Head}_{1},\mathrm{Head}_{2},\cdots,\mathrm{Head}_{h})\mathbf{W}^{o}, \tag{16}\]
where \(i=1,\cdots,4\) indexes the attention layer, the term \(\mathrm{Head}_{i}\in\mathbb{R}^{L\times d_{v}}\) denotes the result of the \(i^{th}\) parallel attention layer, and \(\mathbf{W}^{o}\in\mathbb{R}^{hd_{v}\times d_{x}}\) is the linear matrix that transforms the concatenated result in \(\mathbb{R}^{L\times hd_{v}}\) into the output \(\mathbf{X}^{o}\in\mathbb{R}^{L\times d_{x}}\). The output of the multi-head attention layer is then passed to the feed-forward layer, which is simply two dense layers with ReLU activation. The ReLU activation function is defined as
\[f(x)=\max(0,x). \tag{17}\]
Then, the feed-forward operation can be characterized by
\[\mathrm{FFN}(\mathbf{X}^{o})=\max(0,\mathbf{X}^{o}\mathbf{W}_{1}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2}, \tag{18}\]
where \(\mathbf{X}^{o}\in\mathbb{R}^{L\times d_{x}}\) denotes the input to the feed-forward layer. Moreover, \(\mathbf{W}_{1}\in\mathbb{R}^{d_{x}\times d_{x}}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{d_{x}\times d_{x}}\) are the linear transformation matrices, and \(\mathbf{b}_{1}\in\mathbb{R}^{d_{x}\times 1}\) and \(\mathbf{b}_{2}\in\mathbb{R}^{d_{x}\times 1}\) are the bias terms for the two dense layers.

Fig. 2: Structure of the transformer encoder (left) and the proposed T-GAN (right).

### _Structure of Proposed T-GAN_

The detailed architecture of the proposed transformer based GAN network is shown in the right part of Fig. 2. The input to the generator includes the noise vector \(\mathbf{z}\in\mathbb{R}^{32\times 1}\) and the condition variable \(c\in\mathbb{R}^{1\times 1}\). In the Embedding layer, the two inputs \(\mathbf{z}\) and \(c\) are first concatenated into a vector in \(\mathbb{R}^{33\times 1}\), and are then transformed by one dense layer with LeakyReLU activation into a vector in \(\mathbb{R}^{Ld_{m}\times 1}\), where \(L=15\) and \(d_{m}=4\). The LeakyReLU function is represented by
\[f(x)=\begin{cases}x,&\mathrm{if}\ x\geq 0\\ \alpha x,&\mathrm{if}\ x<0\end{cases}, \tag{19}\]
where \(\alpha\) is the slope coefficient when the value of the neuron \(x\) is negative. Then, the vector is reshaped into a matrix in \(\mathbb{R}^{L\times d_{m}}\), and is linearly transformed into the sequence \(\mathbf{X}_{\mathrm{embedding}}\in\mathbb{R}^{L\times d_{x}}\) with one dense layer. The parameter \(d_{x}=128\) is the dimension of the embedding representation. The Embedding layer is then followed by the positional encoding, to encode the position information into the sequence \(\mathbf{X}\). The operation can be represented by
\[\mathbf{X}=\mathbf{X}_{\mathrm{embedding}}+\mathbf{PE}, \tag{20}\]
where \(\mathbf{PE}\in\mathbb{R}^{L\times d_{x}}\) is the learned positional information of the sequence \(\mathbf{X}\). Furthermore, the encoded sequence is forwarded to the transformer encoder structure as introduced in Sec. II-C. Following the transformer structure, one Flatten layer and two dense layers are applied to get the output of the generator \(\hat{\mathbf{x}}\in\mathbb{R}^{60\times 1}\). The two dense layers have 240 and 60 neurons, respectively.
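To check the tensor shapes through the generator's front end (Embedding, reshape, and positional encoding of Eqs. (19)-(20)), one can trace them with random stand-in weights, as in the hedged NumPy sketch below. All weight values, and the LeakyReLU slope, are placeholders for the learned parameters.

```python
import numpy as np

L, d_m, d_x = 15, 4, 128
rng = np.random.default_rng(1)

z = rng.normal(size=(32, 1))       # noise vector z
c = np.array([[1.5]])              # condition: Tx-Rx distance (placeholder)

# Embedding: concatenate to (33,1), dense + LeakyReLU to (L*d_m, 1),
# reshape to (L, d_m), then one dense layer to the sequence (L, d_x).
W1 = rng.normal(size=(L * d_m, 33))
W2 = rng.normal(size=(d_m, d_x))
v = W1 @ np.concatenate([z, c], axis=0)          # (60, 1)
v = np.where(v >= 0.0, v, 0.2 * v)               # LeakyReLU, Eq. (19); slope 0.2 is a placeholder
X_embedding = v.reshape(L, d_m) @ W2             # (15, 128)

# Positional encoding of Eq. (20): a learned (L, d_x) matrix added elementwise.
PE = rng.normal(size=(L, d_x))
X = X_embedding + PE                             # input to the transformer encoder
assert X.shape == (L, d_x)
```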
Then, together with the condition variable \(c\), the fake channel \(\hat{\mathbf{x}}\) or the real channel \(\mathbf{x}\in\mathbb{R}^{60\times 1}\) is passed to the discriminator. The structures of the discriminator and generator are symmetric, with a similar embedding and transformer encoder structure, except that the noise vector in the generator is replaced by the real or fake channel in the discriminator. In the Embedding layer, the channel and the condition variable are concatenated and transformed. Then, the positional encoding adds the learned position information. Afterwards, the transformer encoder structure is applied. Next, the output of the transformer structure is transformed by two dense layers, each with a single neuron. Finally, the Sigmoid activation function restricts the output of the discriminator to the range [0,1], and is defined by
\[f(x)=\frac{1}{1+e^{-x}}. \tag{21}\]

## III Experiment and Performance Evaluation

In this section, the experiment settings including the dataset and training procedures are elaborated. Moreover, the performance of the T-GAN is evaluated by comparing the distributions of the generated channels with the original channels, in terms of delay spread, angular spread and power delay angular profile.

### _Dataset and Setup_

In the experiment, the dataset is generated by QuaDRiGa [13] with the extracted statistics from the THz measurement [12]. The measurement campaign is conducted in an indoor corridor scenario at 306-321 GHz, as depicted in Fig. 3.

Fig. 3: Measurement layout in the indoor corridor scenario [12].

The dataset consists of 50000 channel samples, in which 80\% of the dataset is for training and 20\% is for testing. Moreover, every channel sample can be represented by a number of \(L=15\) MPCs as in (3), and every feature of the MPCs, including path gain, phase, delay and AoA, is normalized into the range of [0,1] by min-max standardization. Besides, the angle of the line-of-sight (LoS) path is set at zero degrees, which provides a reference point for generating the other MPCs. The training procedure of the proposed GAN network is explained in detail as follows. Firstly, the input noise vector \(\mathbf{z}\in\mathbb{R}^{32\times 1}\) is generated from the multivariate normal distribution, which provides the capability to transform into the desired distribution. The gradient penalty parameter \(\lambda\) in (6) is set as 10, which works well in the training process. Moreover, the stochastic gradient descent (SGD) optimizer is applied for the generator network, and the adaptive moment estimation (Adam) optimizer is chosen for the discriminator network. In addition, the learning rates of the two optimizers are both set as 0.0001 to stabilize the training. The number of epochs for training the proposed T-GAN is set as 10000. An epoch is defined as a complete training cycle through the training dataset, during which the generator and discriminator are trained iteratively, once and three times, respectively. All the experimental results are implemented on a PC with an AMD Ryzen Threadripper 3990X @ 2.19 GHz and one Nvidia GeForce RTX 3090 Ti GPU. In addition, the training of the T-GAN network is carried out in the TensorFlow framework.
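A compact sketch of one training step under these settings is shown below, reusing the `gradient_penalty` helper from Sec. II-B. The `generator` and `discriminator` are assumed to be Keras models with the input interfaces described above; this is an illustration of the schedule (SGD for the generator, Adam for the discriminator, learning rate 0.0001, three critic updates per generator update), not the authors' training code.

```python
import tensorflow as tf

g_opt = tf.keras.optimizers.SGD(learning_rate=1e-4)
d_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

def train_step(generator, discriminator, x_real, c, z_dim=32,
               n_critic=3, lam=10.0):
    batch = tf.shape(x_real)[0]
    # Discriminator (critic) updates: three per generator update.
    for _ in range(n_critic):
        z = tf.random.normal([batch, z_dim])
        with tf.GradientTape() as tape:
            x_fake = generator([z, c], training=True)
            d_loss = (tf.reduce_mean(discriminator([x_fake, c], training=True))
                      - tf.reduce_mean(discriminator([x_real, c], training=True))
                      + gradient_penalty(discriminator, x_real, x_fake, c, lam))
        grads = tape.gradient(d_loss, discriminator.trainable_variables)
        d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))

    # Generator update.
    z = tf.random.normal([batch, z_dim])
    with tf.GradientTape() as tape:
        x_fake = generator([z, c], training=True)
        g_loss = -tf.reduce_mean(discriminator([x_fake, c], training=True))
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return d_loss, g_loss
```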
### _Delay Spread_

Delay spread characterizes the power dispersion of multipath components in the temporal domain. It is an important metric to measure the small-scale fading, and it can be computed by
\[\begin{split}\bar{\tau}&=\frac{\sum_{i=0}^{N_{\tau}}i\Delta\tau P_{\tau}(i)}{\sum_{i=0}^{N_{\tau}}P_{\tau}(i)},\\ \tau_{rms}&=\sqrt{\frac{\sum_{i=0}^{N_{\tau}}(i\Delta\tau-\bar{\tau})^{2}P_{\tau}(i)}{\sum_{i=0}^{N_{\tau}}P_{\tau}(i)}},\end{split} \tag{22}\]
where \(N_{\tau}\) denotes the number of sampling points in the temporal domain, \(\bar{\tau}\) denotes the mean delay weighted by the power, \(\tau_{rms}\) refers to the root-mean-square (RMS) delay spread, \(\Delta\tau\) denotes the sampling time interval, and \(P_{\tau}(i)\) denotes the power at the delay of \(i\Delta\tau\). The cumulative distribution function (CDF) plot of the delay spread for the original and generated channels is depicted in Fig. 4(a). It can be observed that the CDF of the delay spread for the generated channels matches the original channels well. Moreover, the average values of the delay spread for the original and generated channels are 27.42 ns and 25.67 ns, respectively, which are very close. This shows that the T-GAN can well capture the channel characteristics in the temporal domain.

### _Angular Spread_

Angular spread describes how the power scatters in the spatial domain, which can be represented by
\[\begin{split}\bar{\theta}&=\frac{\sum_{i=0}^{N_{\theta}}i\Delta\theta P_{\theta}(i)}{\sum_{i=0}^{N_{\theta}}P_{\theta}(i)},\\ \theta_{rms}&=\sqrt{\frac{\sum_{i=0}^{N_{\theta}}(i\Delta\theta-\bar{\theta})^{2}P_{\theta}(i)}{\sum_{i=0}^{N_{\theta}}P_{\theta}(i)}},\end{split} \tag{23}\]
where \(N_{\theta}\) denotes the number of sampling points in the spatial domain, \(\bar{\theta}\) denotes the mean angle weighted by the power, \(\theta_{rms}\) refers to the RMS angular spread, \(\Delta\theta\) defines the angle interval, and \(P_{\theta}(i)\) refers to the power at the AoA of \(i\Delta\theta\).

Fig. 4: Delay spread and angular spread for the original and generated channels.

Fig. 5: Average PDAP for the original (left) and generated channels (right).

The CDF plot of the angular spread for the original and generated channels is depicted in Fig. 4(b). The CDF of the angular spread for the generated channels has a good agreement with the original channels. Moreover, the mean values of the angular spread for the original and generated channels are \(56.82^{\circ}\) and \(53.78^{\circ}\), respectively, which shows a small deviation. This suggests that the proposed T-GAN can well characterize the power distribution in the spatial domain.

### _Power Delay Angular Profile_

The power delay angular profile characterizes the distribution of power in the spatial-temporal domain. In the experiment, the average power delay angular profiles (PDAPs) for the original and generated channels are compared as in Fig. 5. The generated channels show good agreement with the original channels. To obtain a quantitative comparison, the deviation between the average PDAPs can be measured by the root-mean-square error (RMSE), calculated as
\[\text{RMSE}=\sqrt{\frac{1}{N_{\tau}N_{\theta}}\sum(\overline{\text{PDAP}}_{r}(i,j)-\overline{\text{PDAP}}_{g}(i,j))^{2}}, \tag{24}\]
where \(N_{\tau}\) and \(N_{\theta}\) denote the number of sampling points in the temporal and spatial domains, respectively. Moreover, \(\text{PDAP}(i,j)\) denotes the power at the delay of \(i\Delta\tau\) and the AoA of \(j\Delta\theta\), and the terms \(\overline{\text{PDAP}}_{r}\) and \(\overline{\text{PDAP}}_{g}\) refer to the average PDAPs for the original and generated channels, respectively.
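Both spread statistics of Eqs. (22)-(23) and the RMSE of Eq. (24) reduce to a few array operations. A sketch follows, where the power profiles are assumed to be given on uniform grids, in linear scale for the spreads and in dB for the RMSE.

```python
import numpy as np

def rms_spread(P, delta):
    # RMS spread of Eq. (22) (delta = sampling time interval) or
    # Eq. (23) (delta = angle interval) for a power profile P >= 0.
    grid = np.arange(len(P)) * delta
    mean = np.sum(grid * P) / np.sum(P)
    return np.sqrt(np.sum((grid - mean) ** 2 * P) / np.sum(P))

def pdap_rmse(pdap_r_db, pdap_g_db):
    # RMSE of Eq. (24) between the average PDAPs (2-D arrays in dB).
    return np.sqrt(np.mean((pdap_r_db - pdap_g_db) ** 2))
```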
The calculated RMSE value is 2.18 dB, which is a small deviation considering the wide range of the average PDAP values, from -140 dB to -200 dB, shown in Fig. 5. This proves that the proposed T-GAN can capture the features of the channel in the joint spatial-temporal domain. This is attributed to the powerful capability of the transformer structure in T-GAN to exploit the dependencies among different domains of the channel. Moreover, to measure the similarity quantitatively, the Structural Similarity Index Measure (SSIM) is introduced, which is widely applied to evaluate the quality and similarity of images [14]. The range of SSIM is from 0 to 1, and the value of SSIM is larger when the similarity between images is higher. The PDAPs of the generated channels are compared with the original channels at the same distance. The CDF of SSIM is shown in Fig. 6. It can be observed that the proposed T-GAN achieves high SSIM values, above 0.8 for 80 percent of the generated channels. This further demonstrates the good performance of T-GAN in modeling the channels.

## IV Conclusion

In this paper, we proposed a T-GAN based THz spatial-temporal channel modeling method, which can capture the distribution of channels in the THz band. Moreover, the transformer structure is exploited in T-GAN to excavate the dependencies among the channel parameters. Finally, we validate the performance of T-GAN with the THz dataset. T-GAN can generate channels that have good agreement with the original channels in terms of delay spread, angular spread and power delay angular profile. With the capability of channel modeling and generating, T-GAN has the potential to assist the design of THz communication systems, especially end-to-end systems.
2305.00823
Numerical Approximation of Stochastic Volterra Integral Equation Using Walsh Function
This paper provides a numerical approach for solving the linear stochastic Volterra integral equation using Walsh function approximation and the corresponding operational matrix of integration. A convergence analysis and error analysis of the proposed method for stochastic Volterra integral equations with Lipschitz functions are presented. Numerous examples with available analytical solutions demonstrate that the proposed method solves linear stochastic Volterra integral equations more precisely than existing techniques. In addition, the numerical behaviour of the method for a problem with no known analytical solution is demonstrated.
Prit Pritam Paikaray, Sanghamitra Beuria, Nigam Chandra Parida
2023-05-01T13:43:34Z
http://arxiv.org/abs/2305.00823v1
# Numerical Approximation of Stochastic Volterra Integral Equation Using Walsh Function

###### Abstract

This paper provides a numerical approach for solving the linear stochastic Volterra integral equation using Walsh function approximation and the corresponding operational matrix of integration. A convergence analysis and error analysis of the proposed method for stochastic Volterra integral equations with Lipschitz functions are presented. Numerous examples with available analytical solutions demonstrate that the proposed method solves linear stochastic Volterra integral equations more precisely than existing techniques. In addition, the numerical behaviour of the method for a problem with no known analytical solution is demonstrated.

_Keywords:_ Stochastic Volterra integral equation; Brownian motion; Itô integral; Walsh approximation; Lipschitz condition

## 1 Introduction

Numerous fields, including the physical sciences, biological sciences, agricultural sciences, and financial mathematics, which includes option pricing, make extensive use of stochastic differential equations (SDE) [2, 3, 4]. In these fields, stochastic Volterra integral equations (SVIE) play a crucial role. As with other differential equations, many SDEs are practically impossible to solve analytically, and the SVIE makes the problem even more challenging. Therefore, numerical approximation methods become vital when solving such problems. The approximate solution to many SVIEs can be estimated using various numerical techniques. Recently, orthogonal functions including the block pulse function (BPF), Haar wavelet, Legendre polynomials, Laguerre polynomials, and Chebyshev polynomials have been utilised to approximate the solution of SVIEs [5, 6, 7, 8, 9, 10, 11, 12, 15]. The Walsh functions provide an orthonormal system that takes only the values \(-1\) and \(1\). Because of this, many mathematicians think of the Walsh system, which was developed in 1923 [14] and has many uses in digital technology, as an artificial orthonormal system. The fact that a computer can accurately evaluate any Walsh function's value at any given time gives it a significant edge over traditional trigonometric functions. The Walsh function was utilized by Chen and Hsiao in 1975 to solve variational problems [12]. They applied a similar idea in 1979 to solve integral equations [13]. The technique's key property is that it transforms the problem into an algebraic system, which is then solved to yield an approximate solution to the problem. In this paper, we apply the Walsh function [14] to approximate the solution \(x(t)\) of the following linear SVIE
\[x(t)=f(t)+\int_{0}^{t}k_{1}(s,t)x(s)ds+\int_{0}^{t}k_{2}(s,t)x(s)dB(s) \tag{1}\]
where \(x(t)\), \(f(t)\), \(k_{1}(s,t)\) and \(k_{2}(s,t)\) for \(s,t\in[0,T)\) represent stochastic processes based on the same probability space \((\Omega,F,P)\), and \(x(t)\) is unknown. Here \(B(t)\) is a Brownian motion [2, 3] and \(\int_{0}^{t}k_{2}(s,t)x(s)dB(s)\) is the Itô integral. In most of the previous works, the evaluation is primarily based on the assumption that the derivatives \(f^{\prime}(t)\) and \(\frac{\partial^{2}k_{i}}{\partial s\partial t}\) for \(i=1,2\) exist and are bounded.
In contrast, in this paper, by converting the BPF approximation to a Walsh function approximation, we require only Lipschitz continuity of the functions \(f(t)\), \(k_{1}(s,t)\) and \(k_{2}(s,t)\); this yields the same (linear) rate of convergence while permitting a more general form of SVIE to be treated. In the last section, the approximate solution is compared with the exact solution numerically to check the validity of the method.

## 2 Walsh Function and its Properties

**Definition 1** (Rademacher Function).: The Rademacher function \(r_{i}(t)\), \(i=0,1,2,\ldots\), for \(t\in[0,1)\) is defined by [14]
\[r_{i}(t)=\begin{cases}1&i=0,\\ sgn(\sin(2^{i}\pi t))&\text{otherwise},\end{cases}\]
where
\[sgn(x)=\begin{cases}1&x>0,\\ 0&x=0,\\ -1&x<0.\end{cases}\]

**Definition 2** (Walsh Function).: The \(n^{th}\) Walsh function for \(n=0,1,2,\cdots\), denoted by \(w_{n}(t)\), \(t\in[0,1)\), is defined [14] as
\[w_{n}(t)=(r_{q}(t))^{b_{q}}\cdot(r_{q-1}(t))^{b_{q-1}}\cdot(r_{q-2}(t))^{b_{q-2}}\cdots(r_{1}(t))^{b_{1}},\]
where \(n=b_{q}2^{q-1}+b_{q-1}2^{q-2}+b_{q-2}2^{q-3}+\ldots+b_{1}2^{0}\) is the binary expression of \(n\). Therefore, \(q\), the number of digits present in the binary expression of \(n\), is calculated by \(q=\left[\,\log_{2}n\,\right]+1\), in which \(\left[\,\cdot\,\right]\) denotes the greatest integer less than or equal to its argument.

The first \(m\) Walsh functions for \(m\in\mathbb{N}\) can be written as an \(m\)-vector by
\[W(t)=\left[w_{0}(t)\quad w_{1}(t)\quad w_{2}(t)\ldots w_{m-1}(t)\right]^{T}.\]
The Walsh functions satisfy the following properties:

#### Orthonormality

The set of Walsh functions is orthonormal, i.e.,
\[\int_{0}^{1}w_{i}(t)w_{j}(t)dt=\begin{cases}1&i=j,\\ 0&\text{otherwise.}\end{cases}\]

#### Completeness

For every \(f\in L^{2}([0,1))\),
\[\int_{0}^{1}f^{2}(t)dt=\sum_{i=0}^{\infty}f_{i}^{2}||w_{i}(t)||^{2},\]
where \(f_{i}=\int_{0}^{1}f(t)w_{i}(t)dt\).

#### Walsh Function Approximation

Any real-valued function \(f(t)\in L^{2}[0,1)\) can be approximated as
\[f_{m}(t)=\sum_{i=0}^{m-1}c_{i}w_{i}(t),\]
where \(c_{i}=\int_{0}^{1}f(t)w_{i}(t)dt\). The matrix form of the approximation is given by
\[f(t)\approx F^{T}T_{W}W(t) \tag{2}\]
where \(F=\left[f_{0}\quad f_{1}\quad f_{2}\ldots f_{m-1}\right]^{T}\) with \(f_{i}=\int_{ih}^{(i+1)h}f(s)ds\), and \(T_{W}\) is called the operational matrix for the Walsh function. One can see from [15] that
\[T_{W}T_{W}^{T}=mI\ \text{and}\ T_{W}^{T}=T_{W}.\]
Similarly, \(k(s,t)\in L^{2}([0,1)\times[0,1))\) can be approximated by
\[k_{m}(s,t)=\sum_{i=0}^{m-1}\sum_{j=0}^{m-1}c_{ij}w_{i}(s)w_{j}(t),\]
where \(c_{ij}=\int_{0}^{1}\int_{0}^{1}k(s,t)w_{i}(s)w_{j}(t)dtds\), with the matrix form
\[k(s,t)\approx W^{T}(s)T_{W}KT_{W}W(t)=W^{T}(t)T_{W}K^{T}T_{W}W(s) \tag{3}\]
where \(K=[k_{ij}]_{m\times m}\), \(k_{ij}=\int_{ih}^{(i+1)h}\int_{jh}^{(j+1)h}k(s,t)dtds\). In the next section, we will find a relation between the block pulse functions and the Walsh functions, which is later used to convert the SVIE to an algebraic equation.

## 3 Relationship between Walsh Function and Block Pulse Functions (BPFs)

**Definition 3** (Block Pulse Functions).: For a fixed positive integer \(m\), an \(m\)-set of BPFs \(\phi_{i}(t),t\in[0,1)\) for \(i=0,1,...,m-1\) is defined as
\[\phi_{i}(t)=\begin{cases}1&\text{if }\dfrac{i}{m}\leq t<\dfrac{(i+1)}{m},\\ 0&\text{otherwise},\end{cases}\]
where \(\phi_{i}\) is known as the \(i\)th BPF.
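For reference, the three function families above can be generated directly from their definitions. The following Python (NumPy) sketch implements Definitions 1-3; it is intended only as an illustration of the constructions, not as the paper's Matlab code.

```python
import numpy as np

def rademacher(i, t):
    # Definition 1: r_0 = 1 and r_i(t) = sgn(sin(2^i * pi * t)).
    if i == 0:
        return np.ones_like(t)
    return np.sign(np.sin(2.0**i * np.pi * t))

def walsh(n, t):
    # Definition 2: w_n as a product of Rademacher functions selected by
    # the binary digits b_q ... b_1 of n.
    w = np.ones_like(t)
    bits = bin(n)[2:]            # most significant digit first (b_q)
    q = len(bits)
    for pos, b in enumerate(bits):
        if b == '1':
            w = w * rademacher(q - pos, t)
    return w

def block_pulse(i, t, m):
    # Definition 3: the i-th BPF on [0, 1).
    return ((i / m <= t) & (t < (i + 1) / m)).astype(float)
```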
The set of all \(m\) BPFs can be written concisely as an \(m\)-vector, \(\Phi(t)=\left[\phi_{0}(t)\quad\phi_{1}(t)\quad\phi_{2}(t)\dots\phi_{m-1}(t)\right]^{T}\), \(t\in[0,1)\). The BPFs are disjoint, complete, and orthogonal [1]. The BPFs in vector form satisfy
\[\Phi(t)\Phi(t)^{T}X=\tilde{X}\Phi(t)\ \text{ and }\ \Phi^{T}(t)A\Phi(t)=\hat{A}^{T}\Phi(t),\]
where \(X\in\mathbb{R}^{m\times 1}\), \(\tilde{X}\) is the \(m\times m\) diagonal matrix with \(\tilde{X}(i,i)=X(i)\) for \(i=1,2,3\cdots m\), \(A\in\mathbb{R}^{m\times m}\), and \(\hat{A}=\left[a_{11}\quad a_{22}\quad\dots\quad a_{mm}\right]^{T}\) is the \(m\)-vector with elements equal to the diagonal entries of \(A\). The integration of the BPF vector \(\Phi(t)\), \(t\in[0,1)\) can be performed by [1]
\[\int_{0}^{t}\Phi(\tau)d\tau=P\Phi(t),\quad t\in[0,1), \tag{4}\]
where \(P\) is called the deterministic operational matrix of integration. Hence, the integral of every function \(f(t)\in L^{2}[0,1)\) can be approximated as
\[\int_{0}^{t}f(s)ds\approx F^{T}P\Phi(t).\]
Similarly, the Itô integral of the BPF vector \(\Phi(t)\), \(t\in[0,1)\) can be performed by [6] as
\[\int_{0}^{t}\Phi(\tau)dB(\tau)=P_{S}\Phi(t),\quad t\in[0,1), \tag{5}\]
where \(P_{S}\) is called the stochastic operational matrix of integration. Hence, the Itô integral of every function \(f(t)\in L^{2}[0,1)\) can be approximated as in [6] by
\[\int_{0}^{t}f(s)dB(s)\approx F^{T}P_{S}\Phi(t).\]
The following theorem describes a relationship between the Walsh functions and the block pulse functions.

**Theorem 3.1**.: Let the \(m\)-sets of Walsh function and BPF vectors be \(W(t)\) and \(\Phi(t)\) respectively. Then the BPF vector \(\Phi(t)\) can be used to represent \(W(t)\) as \(W(t)=T_{W}\Phi(t)\), \(m=2^{k}\), \(k=0,1,\dots\), where \(T_{W}=\left[c_{ij}\right]_{m\times m}\), \(c_{ij}=w_{i}(\eta_{j})\), for some \(\eta_{j}\in\left(\frac{j}{m},\frac{j+1}{m}\right)\) and \(i,j=0,1,2,\dots m-1\).

Proof.: Let \(w_{i}(t)\), \(i=0,1,2,\ldots m-1\), where \(m=2^{k}\), be the \(i^{th}\) element of the Walsh function vector. By expanding \(w_{i}(t)\) into an \(m\)-term vector of BPFs we have \(w_{i}(t)=\sum_{j=0}^{m-1}c_{ij}\phi_{j}(t)=C_{i}^{T}\Phi(t)\), \(i=0,1,2,\ldots m-1\), where \(C_{i}^{T}\) is the \(i^{th}\) row and \(c_{ij}\) is the \((i,j)^{th}\) element of the matrix \(T_{W}\), with
\[c_{ij}=\frac{1}{h}\int_{0}^{1}w_{i}(t)\phi_{j}(t)dt=\frac{1}{h}\int_{jh}^{(j+1)h}w_{i}(t)dt.\]
By using the mean value theorem for integrals we can write
\[c_{ij}=\frac{1}{h}\int_{jh}^{(j+1)h}w_{i}(t)dt=\frac{1}{h}\big{(}(j+1)h-jh\big{)}w_{i}(\eta_{j})=w_{i}(\eta_{j}),\]
where \(\eta_{j}\in\big{(}\frac{j}{m},\frac{j+1}{m}\big{)}\), \(m=\frac{1}{h}\). Since \(w_{i}(t)\) is constant on the interval \(\big{(}\frac{j}{m},\frac{j+1}{m}\big{)}\), we choose \(c_{ij}=w_{i}(\frac{2j+1}{2m})\), \(i,j=0,1,2,\ldots m-1\). Hence \(W(t)=T_{W}\Phi(t)\).

From the above theorem, it is easy to see that \(\Phi(t)=\frac{1}{m}T_{W}W(t)\).
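The relation \(W(t)=T_{W}\Phi(t)\) suggests a simple numerical construction of \(T_{W}\) via the block midpoints \(\frac{2j+1}{2m}\). A sketch, reusing the `walsh` helper above, is given below together with a check of \(T_{W}T_{W}^{T}=mI\) and of the symmetry of \(T_{W}\); the helper name `walsh_matrix` is illustrative.

```python
import numpy as np

def walsh_matrix(m):
    # T_W with c_ij = w_i((2j+1)/(2m)), the constant value of w_i on the
    # j-th block (Theorem 3.1); m must be a power of two.
    t_mid = (2 * np.arange(m) + 1) / (2.0 * m)
    return np.stack([walsh(i, t_mid) for i in range(m)])

m = 8
T_W = walsh_matrix(m)
assert np.allclose(T_W @ T_W.T, m * np.eye(m))   # T_W T_W^T = m I
assert np.allclose(T_W, T_W.T)                   # T_W is symmetric
```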
With the use of this relation, we prove the following lemmas.

**Lemma 3.2** (Integration of Walsh function).: Suppose that \(W(t)\) is a Walsh function vector; then the integral of \(W(t)\) w.r.t. \(t\) is given by \(\int_{0}^{t}W(s)ds=\Lambda W(t)\), where \(\Lambda=\frac{1}{m}T_{W}PT_{W}\) and
\[P=\frac{h}{2}\begin{bmatrix}1&2&2&\ldots&2\\ 0&1&2&\ldots&2\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&1\end{bmatrix}.\]

Proof.: Let \(W(t)\) be a Walsh function vector. Then the integral of \(W(t)\) w.r.t. \(t\) is
\[\int_{0}^{t}W(s)ds=\int_{0}^{t}T_{W}\Phi(s)ds=T_{W}\int_{0}^{t}\Phi(s)ds=T_{W}P\Phi(t)=\frac{1}{m}\Big{(}T_{W}PT_{W}\Big{)}W(t)=\Lambda W(t),\]
where \(\Lambda=\frac{1}{m}\Big{(}T_{W}PT_{W}\Big{)}\). Here, \(\Lambda\) is called the Walsh operational matrix of integration.

**Lemma 3.3** (Stochastic integration of Walsh function).: Suppose that \(W(t)\) is a Walsh function vector; then the Itô integral of \(W(t)\) is given by \(\int_{0}^{t}W(s)dB(s)=\Lambda_{S}W(t)\), where \(\Lambda_{S}=\frac{1}{m}T_{W}P_{S}T_{W}\) and
\[P_{S}=\begin{bmatrix}B(\frac{h}{2})&B(h)&\ldots&B(h)\\ 0&B(\frac{3h}{2})-B(h)&\ldots&B(2h)-B(h)\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&B(\frac{(2m-1)h}{2})-B((m-1)h)\end{bmatrix}.\]

Proof.: Let \(W(t)\) be a Walsh function vector. Then the Itô integral of \(W(t)\) is
\[\int_{0}^{t}W(s)dB(s)=\int_{0}^{t}T_{W}\Phi(s)dB(s)=T_{W}\int_{0}^{t}\Phi(s)dB(s)=T_{W}P_{S}\Phi(t)=\frac{1}{m}\Big{(}T_{W}P_{S}T_{W}\Big{)}W(t)=\Lambda_{S}W(t),\]
where \(\Lambda_{S}=\frac{1}{m}\Big{(}T_{W}P_{S}T_{W}\Big{)}\). Here, \(\Lambda_{S}\) is called the Walsh operational matrix for the Itô integral.

## 4 Numerical Solution of Stochastic Volterra Integral Equation

We consider the following linear stochastic Volterra integral equation (SVIE)
\[x(t)=f(t)+\int_{0}^{t}k_{1}(s,t)x(s)ds+\int_{0}^{t}k_{2}(s,t)x(s)dB(s) \tag{6}\]
where \(x(t)\), \(f(t)\), \(k_{1}(s,t)\) and \(k_{2}(s,t)\) for \(s,t\in[0,T)\) are stochastic processes defined on the same probability space \((\Omega,F,P)\), and \(x(t)\) is unknown. Also, \(B(t)\) is a Brownian motion process and \(\int_{0}^{t}k_{2}(s,t)x(s)dB(s)\) is the Itô integral. Using equations (2) and (3) in (6), we have
\[X^{T}T_{W}W(t) = F^{T}T_{W}W(t)+\int_{0}^{t}W^{T}(t)T_{W}K_{1}^{T}T_{W}W(s)W^{T}(s)T_{W}Xds+\int_{0}^{t}W^{T}(t)T_{W}K_{2}^{T}T_{W}W(s)W^{T}(s)T_{W}XdB(s)\]
\[= F^{T}T_{W}W(t)+W^{T}(t)T_{W}K_{1}^{T}T_{W}\int_{0}^{t}W(s)W^{T}(s)T_{W}Xds+W^{T}(t)T_{W}K_{2}^{T}T_{W}\int_{0}^{t}W(s)W^{T}(s)T_{W}XdB(s). \tag{7}\]
Now
\[\int_{0}^{t}W(s)W^{T}(s)T_{W}Xds=\int_{0}^{t}T_{W}\Phi(s)\Phi^{T}(s)T_{W}T_{W}Xds=mT_{W}\int_{0}^{t}\Phi(s)\Phi^{T}(s)Xds=mT_{W}\tilde{X}\int_{0}^{t}\Phi(s)ds=mT_{W}\tilde{X}P\frac{1}{m}T_{W}W(t).\]
Hence
\[\int_{0}^{t}W(s)W^{T}(s)T_{W}Xds=T_{W}\tilde{X}PT_{W}W(t). \tag{8}\]
Similarly,
\[\int_{0}^{t}W(s)W^{T}(s)T_{W}XdB(s)=mT_{W}\tilde{X}P_{S}\frac{1}{m}T_{W}W(t)=T_{W}\tilde{X}P_{S}T_{W}W(t). \tag{9}\]
Substituting (8) and (9) in (7) and using the condition of orthonormality, we get
\[X^{T}T_{W}W(t) = F^{T}T_{W}W(t)+mW^{T}(t)T_{W}K_{1}^{T}\tilde{X}PT_{W}W(t)+mW^{T}(t)T_{W}K_{2}^{T}\tilde{X}P_{S}T_{W}W(t)\]
\[= F^{T}T_{W}W(t)+W^{T}(t)T_{W}H_{1}T_{W}W(t)+W^{T}(t)T_{W}H_{2}T_{W}W(t)\]
\[= F^{T}T_{W}W(t)+m\hat{H_{1}}^{T}T_{W}W(t)+m\hat{H_{2}}^{T}T_{W}W(t),\]
which implies that
\[\Big{(}X^{T}-F^{T}-m\hat{H_{1}}^{T}-m\hat{H_{2}}^{T}\Big{)}T_{W}W(t)=0, \tag{10}\]
where \(H_{1}=mK_{1}^{T}\tilde{X}P\) and \(H_{2}=mK_{2}^{T}\tilde{X}P_{S}\). Hence
\[X-F-m\hat{H_{1}}-m\hat{H_{2}}=[0]_{m\times 1} \tag{11}\]
can be solved to obtain a nontrivial solution of the given stochastic Volterra integral equation (6).
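Since \(\hat{H_{1}}\) and \(\hat{H_{2}}\) are linear in \(X\), Eq. (11) is a linear system and needs no iteration. A hedged NumPy sketch is given below; the sampling of \(P_{S}\) from one Brownian path follows Lemma 3.3, and the helper names are illustrative, not from the authors' code.

```python
import numpy as np

def stochastic_operational_matrix(m, rng):
    # P_S of Lemma 3.3 from one Brownian path sampled on the half-grid
    # t = 0, h/2, h, 3h/2, ..., with independent increments ~ N(0, h/2).
    h = 1.0 / m
    B = np.concatenate([[0.0],
                        np.cumsum(rng.normal(0.0, np.sqrt(h / 2), 2 * m))])
    P_S = np.zeros((m, m))
    for i in range(m):
        P_S[i, i] = B[2 * i + 1] - B[2 * i]        # B((2i+1)h/2) - B(ih)
        P_S[i, i + 1:] = B[2 * i + 2] - B[2 * i]   # B((i+1)h) - B(ih)
    return P_S

def solve_svie(F, K1, K2, P, P_S, m):
    # Entry i of m*hat(H_k) equals m^2 * sum_j K_k[j,i] * P[j,i] * X[j],
    # so Eq. (11) reads (I - m^2 (A1 + A2)) X = F with A_k = (K_k * P_k)^T,
    # where * denotes the elementwise product.
    A1 = (K1 * P).T
    A2 = (K2 * P_S).T
    return np.linalg.solve(np.eye(m) - m**2 * (A1 + A2), F)
```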
## 5 Error Analysis

In this section, we analyze the error between the approximate solution and the exact solution of the stochastic Volterra integral equation. Before we start the analysis, let us define \(\|X\|_{2}=E(|X|^{2})^{\frac{1}{2}}\).

**Theorem 5.1**.: If \(f\in L^{2}[0,1)\) satisfies the Lipschitz condition with Lipschitz constant \(C\), then \(\|e_{m}(t)\|_{2}=O(h)\), where \(e_{m}(t)=|f(t)-\sum_{i=0}^{m-1}c_{i}w_{i}(t)|\) and \(c_{i}=\int_{0}^{1}f(s)w_{i}(s)ds\).

Proof.: Let \(f_{m}(t)=\sum_{i=0}^{m-1}c_{i}w_{i}(t)\) where \(c_{i}=\int_{0}^{1}f(s)w_{i}(s)ds\). Suppose \(f\) satisfies the Lipschitz condition. Now,
\[e_{m}(t)=|f(t)-f_{m}(t)|\leq\omega(\tfrac{1}{2^{k}},f)\leq Ch.\]
Here \(\omega(\tfrac{1}{2^{k}},f)\) is the modulus of continuity of the function \(f\) [17]. Therefore,
\[\|e_{m}(t)\|_{2}\leq Ch=O(h).\]

**Theorem 5.2**.: Suppose \(k\in L^{2}\big{(}[0,1)\times[0,1)\big{)}\) satisfies the Lipschitz condition with Lipschitz constant \(L\). If \(k_{m}(x,y)=\sum_{i=0}^{m-1}\sum_{j=0}^{m-1}c_{ij}w_{i}(x)w_{j}(y)\), \(c_{ij}=\int_{0}^{1}\int_{0}^{1}k(s,t)w_{i}(s)w_{j}(t)dtds\), then \(\|e_{m}(x,y)\|_{2}=O(h)\), where \(|e_{m}(x,y)|=|k(x,y)-k_{m}(x,y)|\).

Proof.: It is clear from [17] that
\[k_{m}(x,y) = \sum_{i=0}^{m-1}\sum_{j=0}^{m-1}\bigg{(}\int_{0}^{1}\int_{0}^{1}k(s,t)w_{i}(s)w_{j}(t)dtds\bigg{)}w_{i}(x)w_{j}(y) = \int_{0}^{1}\int_{0}^{1}k(s,t)D_{m}(t\oplus y)D_{m}(s\oplus x)dtds = 2^{k}\cdot 2^{k}\int_{\Delta_{i}^{(k)}}\int_{\Delta_{j}^{(k)}}k(s,t)dtds,\]
where \(D_{m}(t)=\sum_{i=0}^{m-1}w_{i}(t)\) is the Dirichlet kernel [17]. Hence,
\[|k_{m}(X)-k(X)|\leq 2^{2k}\int_{\Delta_{i}^{(k)}}\int_{\Delta_{j}^{(k)}}|k(T)-k(X)|dT,\]
where \(X=(x,y)\), \(T=(s,t)\). Also note that if \(k\) is uniformly Lipschitz with Lipschitz constant \(L\), then
\[|k_{m}(X)-k(X)|\leq 2^{2k}\int_{\Delta_{i}^{(k)}}\int_{\Delta_{j}^{(k)}}L|T-X|dT.\]
Therefore,
\[\|k_{m}(X)-k(X)\|_{2}\leq\sqrt{2}Lh=O(h).\]

**Theorem 5.3**.: Let \(x_{m}(t)\) be the approximate solution of the linear SVIE (1). If

* \(f\in L^{2}[0,1)\), \(k_{1}(s,t)\) and \(k_{2}(s,t)\in L^{2}\big{(}[0,1)\times[0,1)\big{)}\) satisfy the Lipschitz condition with Lipschitz constants \(C\), \(L_{1}\) and \(L_{2}\) respectively,
* \(|x(t)|\leq\sigma\), \(|k_{1}(s,t)|\leq\rho_{1}\) and \(|k_{2}(s,t)|\leq\rho_{2}\),

then
\[\|x(t)-x_{m}(t)\|_{2}^{2}=O(h^{2}).\]

Proof.: Let (1) be the given SVIE and \(x_{m}(t)\) be the approximation to its solution using the Walsh function.
Then
\[x(t)-x_{m}(t) = f(t)-f_{m}(t) + \int_{0}^{t}\big{(}k_{1}(s,t)x(s)-k_{1m}(s,t)x_{m}(s)\big{)}ds + \int_{0}^{t}\big{(}k_{2}(s,t)x(s)-k_{2m}(s,t)x_{m}(s)\big{)}dB(s),\]
which implies
\[|x(t)-x_{m}(t)| \leq |f(t)-f_{m}(t)| + \bigg{|}\int_{0}^{t}\big{(}k_{1}(s,t)x(s)-k_{1m}(s,t)x_{m}(s)\big{)}ds\bigg{|} + \bigg{|}\int_{0}^{t}\big{(}k_{2}(s,t)x(s)-k_{2m}(s,t)x_{m}(s)\big{)}dB(s)\bigg{|}.\]
Since \((a+b+c)^{2}\leq 5a^{2}+5b^{2}+5c^{2}\),
\[|x(t)-x_{m}(t)|^{2} \leq 5|f(t)-f_{m}(t)|^{2} + 5\bigg{|}\int_{0}^{t}\big{(}k_{1}(s,t)x(s)-k_{1m}(s,t)x_{m}(s)\big{)}ds\bigg{|}^{2} + 5\bigg{|}\int_{0}^{t}\big{(}k_{2}(s,t)x(s)-k_{2m}(s,t)x_{m}(s)\big{)}dB(s)\bigg{|}^{2},\]
which implies that
\[E\big{(}|x(t)-x_{m}(t)|^{2}\big{)} \leq 5E\big{(}|f(t)-f_{m}(t)|^{2}\big{)}+5I_{1}+5I_{2},\]
where
\[I_{1}=E\bigg{(}\bigg{|}\int_{0}^{t}\big{(}k_{1}(s,t)x(s)-k_{1m}(s,t)x_{m}(s)\big{)}ds\bigg{|}^{2}\bigg{)} \quad\text{and}\quad I_{2}=E\bigg{(}\bigg{|}\int_{0}^{t}\big{(}k_{2}(s,t)x(s)-k_{2m}(s,t)x_{m}(s)\big{)}dB(s)\bigg{|}^{2}\bigg{)}.\]
Now for \(i=1,2\), we have
\[|k_{i}(s,t)x(s)-k_{im}(s,t)x_{m}(s)|\leq |k_{i}(s,t)||x(s)-x_{m}(s)| + |k_{i}(s,t)-k_{im}(s,t)||x(s)| + |k_{i}(s,t)-k_{im}(s,t)||x(s)-x_{m}(s)|.\]
For \(i=1,2\), using \(|k_{i}(s,t)|\leq\rho_{i}\), \(|x(s)|\leq\sigma\) and Theorem 5.2, we get
\[|k_{i}(s,t)x(s)-k_{im}(s,t)x_{m}(s)|\leq\sqrt{2}L_{i}h\sigma+(\rho_{i}+\sqrt{2}L_{i}h)|x(s)-x_{m}(s)|, \tag{12}\]
which gives
\[I_{1} \leq E\bigg{(}\bigg{(}\int_{0}^{t}\big{(}\sqrt{2}L_{1}h\sigma+(\rho_{1}+\sqrt{2}L_{1}h)|x(s)-x_{m}(s)|\big{)}ds\bigg{)}^{2}\bigg{)}.\]
By the Cauchy-Schwarz inequality, for \(t>0\) and \(f\in L^{2}[0,1)\),
\[\bigg{|}\int_{0}^{t}f(s)ds\bigg{|}^{2}\leq t\int_{0}^{t}|f|^{2}ds,\]
which implies
\[I_{1} \leq E\bigg{(}2\int_{0}^{t}\big{(}(\sqrt{2}L_{1}h\sigma)^{2}+(\rho_{1}+\sqrt{2}L_{1}h)^{2}|x(s)-x_{m}(s)|^{2}\big{)}ds\bigg{)}.\]
Therefore,
\[I_{1} \leq 2(\sqrt{2}L_{1}h\sigma)^{2}+2(\rho_{1}+\sqrt{2}L_{1}h)^{2}\int_{0}^{t}E\big{(}|x(s)-x_{m}(s)|^{2}\big{)}ds. \tag{13}\]
Now,
\[I_{2}\leq E\bigg{(}\int_{0}^{t}\big{|}k_{2}(s,t)x(s)-k_{2m}(s,t)x_{m}(s)\big{|}^{2}ds\bigg{)} \leq 2E\bigg{(}\int_{0}^{t}\big{(}(\sqrt{2}L_{2}h\sigma)^{2}+(\rho_{2}+\sqrt{2}L_{2}h)^{2}|x(s)-x_{m}(s)|^{2}\big{)}ds\bigg{)}.\]
Hence,
\[I_{2}\leq 2(\sqrt{2}L_{2}h\sigma)^{2}+2(\rho_{2}+\sqrt{2}L_{2}h)^{2}\int_{0}^{t}E\big{(}|x(s)-x_{m}(s)|^{2}\big{)}ds. \tag{14}\]
Using Theorem 5.1 together with (13) and (14), we get
\[E\big{(}|x(t)-x_{m}(t)|^{2}\big{)} \leq 5C^{2}h^{2} + 5\bigg{(}2(\sqrt{2}L_{1}h\sigma)^{2}+2(\rho_{1}+\sqrt{2}L_{1}h)^{2}\int_{0}^{t}E\big{(}|x(s)-x_{m}(s)|^{2}\big{)}ds\bigg{)} + 5\bigg{(}2(\sqrt{2}L_{2}h\sigma)^{2}+2(\rho_{2}+\sqrt{2}L_{2}h)^{2}\int_{0}^{t}E\big{(}|x(s)-x_{m}(s)|^{2}\big{)}ds\bigg{)},\]
that is,
\[E\big{(}|x(t)-x_{m}(t)|^{2}\big{)} \leq R_{1}+R_{2}\int_{0}^{t}E\big{(}|x(s)-x_{m}(s)|^{2}\big{)}ds, \tag{15}\]
where
\[R_{1}=5\bigg{(}C^{2}h^{2}+2(\sqrt{2}L_{1}h\sigma)^{2}+2(\sqrt{2}L_{2}h\sigma)^{2}\bigg{)} \quad\text{and}\quad R_{2}=5\bigg{(}2(\rho_{1}+\sqrt{2}L_{1}h)^{2}+2(\rho_{2}+\sqrt{2}L_{2}h)^{2}\bigg{)}.\]
By using Gronwall's inequality, we have
\[E\big{(}|x(t)-x_{m}(t)|^{2}\big{)} \leq R_{1}\exp\bigg{(}\int_{0}^{t}R_{2}ds\bigg{)}.
\tag{16}\]
which implies that
\[\|x(t)-x_{m}(t)\|_{2}^{2}=E\big{(}|x(t)-x_{m}(t)|^{2}\big{)}\leq R_{1}e^{R_{2}}=O(h^{2}). \tag{17}\]

## 6 Numerical Examples

In this section, we use the proposed method to solve a variety of SVIEs. The first three examples compare approximate and analytical results to demonstrate the method's convergence. Because an analytical solution is practically impossible to find, the last example illustrates approximate solutions for \(m=32,64,\) and \(128\) to indicate convergence. The computations are carried out using Matlab 2013(a). Define the error \(E\) by \(\|E\|_{\infty}=\underset{1\leq i\leq m}{\max}|X_{i}-Y_{i}|\), where \(X_{i}\), \(Y_{i}\) are the Walsh coefficients of the exact solution and approximate solution respectively. In the following examples, \(n\) denotes the number of iterations of the method, \(\bar{x}_{E}\) the mean of the error \(E\), and \(s_{E}\) the standard deviation of the error \(E\).

**Example 6.1**.: [7] Consider the linear stochastic Volterra integral equation
\[x(t)=\frac{1}{12}+\int_{0}^{t}cos(s)x(s)ds+\int_{0}^{t}sin(s)x(s)dB(s),\quad s,t\in[0,0.5),\]
with the exact solution \(x(t)=\frac{1}{12}e^{\frac{-t}{4}+sin(t)+\frac{sin(2t)}{8}+\int_{0}^{t}sin(s)dB(s)}\), for \(0\leq t<0.5\).

**Example 6.2**.: Consider the linear stochastic Volterra integral equation shown below
\[x(t)=f(t)+\int_{0}^{t}(s+t)x(s)ds+\int_{0}^{t}e^{-3(s+t)}x(s)dB(s)\]
where \(s,t\in[0,1)\), in which \(f(t)=t^{2}+sin(1+t)-cos(1+t)-2sin(t)-\frac{7t^{4}}{12}+\frac{1}{40}B(t)\).

\begin{table} \begin{tabular}{l l l l l} \hline n & \(\bar{x}_{E}\) & \(s_{E}\) & \multicolumn{2}{c}{95\% interval of confidence for error mean.} \\ \cline{3-5} & & & Lower & Upper \\ \hline 30 & 0.00543042339 & 0.00472214581 & 0.00374062521 & 0.00712022157 \\ 50 & 0.00626993437 & 0.00442989552 & 0.00504202998 & 0.00749783876 \\ 75 & 0.00705047567 & 0.00481071208 & 0.00596170903 & 0.00813924231 \\ 100 & 0.00640558992 & 0.00481776079 & 0.00546130880 & 0.00734987103 \\ 125 & 0.00689936851 & 0.00500025280 & 0.00602278555 & 0.00777595148 \\ 150 & 0.00686900260 & 0.00580316061 & 0.00594030348 & 0.00779770171 \\ 200 & 0.00682115439 & 0.00600805207 & 0.00598848085 & 0.00765382792 \\ \hline \end{tabular} \end{table}

Table 1: Mean error, standard deviation of error, and interval of confidence for mean error in Example 6.1 with m=8

Figure 1: Example 6.1’s approximate and exact solutions for m=32 and m=64

\begin{table} \begin{tabular}{l c c c c} \hline n & \(\bar{x}_{E}\) & \(s_{E}\) & \multicolumn{2}{c}{95\% interval of confidence for error mean.} \\ \cline{3-5} & & & Lower & Upper \\ \hline 30 & 0.00637765274 & 0.00360745366 & 0.00508674202 & 0.00766856345 \\ 50 & 0.00720684095 & 0.00586365605 & 0.00558151841 & 0.00883216349 \\ 75 & 0.00649984610 & 0.00488908128 & 0.00539334285 & 0.00760634936 \\ 100 & 0.00625583011 & 0.00474702145 & 0.00532541390 & 0.00718624631 \\ 125 & 0.00675880050 & 0.00523369353 & 0.00584129357 & 0.00767630743 \\ 150 & 0.00650117417 & 0.00478655986 & 0.00573516505 & 0.00726718328 \\ 200 & 0.00627666571 & 0.00451326428 & 0.00565115920 & 0.00690217223 \\ \hline \end{tabular} \end{table}

Table 2: Mean error, standard deviation of error, and interval of confidence for mean error in Example 6.1 with m=32

\begin{table} \begin{tabular}{l c|c|c} \hline \(t\) & \(m=2^{5}\) & \(m=2^{6}\) & \(m=2^{7}\) \\ \hline 0.1 & 0.2588463226 & 0.2786937102 & 0.2764612638 \\ 0.2 & 0.2385970504 & 0.2482827054 & 0.2598350172 \\ 0.3 & 0.2298470282 & 0.2362610436 & 0.2560260944 \\ 0.4 & 0.2432612452 & 0.2547810418 & 0.2706006264 \\ 0.5 &
0.3326364002 & 0.3482867788 & 0.3683207574 \\ 0.6 & 0.3276372986 & 0.3354036892 & 0.3570142536 \\ 0.7 & 0.3811228220 & 0.3973418530 & 0.4218934758 \\ 0.8 & 0.4407132464 & 0.4640863230 & 0.4959036476 \\ 0.9 & 0.5010298772 & 0.5335500110 & 0.5753660980 \\ \hline \end{tabular} \end{table}

Table 3: Numerical results for m=32, m=64, and m=128 with n=50 in Example 6.2

Figure 2: Example 6.1’s error trend for m=32, n=30, and n=100

**Example 6.3**.: [16] Consider the stock model with \(C(t)\) as the risk-less cash bond and \(S(t)\) as the single risky asset:
\[dC(t)=\sin(t)C(t)dt,\quad C_{0}=1,\]
\[S(t)=\frac{1}{10}+\int_{0}^{t}ln(1+s)S(s)ds+\int_{0}^{t}sS(s)dB(s),\]
with the exact solution \(C(t)=e^{1-cos(t)}\) and \(S(t)=\frac{1}{10}e^{(1+t)ln(1+t)-t-\frac{t^{3}}{6}+\int_{0}^{t}sdB(s)}\). We will compare the exact solution of \(S(t)\) with the approximate solution using our method.

\begin{table} \begin{tabular}{l c c c c} \hline \(m\) & \(\bar{x}_{E}\) & \(s_{E}\) & \multicolumn{2}{c}{95\% interval of confidence for error mean.} \\ \cline{4-5} & & & Lower & Upper \\ \hline 4 & 0.00483812406 & 0.00199063228 & 0.00396569099 & 0.00571055712 \\ 8 & 0.00380827206 & 0.00251831518 & 0.00270457176 & 0.00491197235 \\ 16 & 0.00432163487 & 0.00307689213 & 0.00297312744 & 0.00567014230 \\ 32 & 0.00673390644 & 0.00484960640 & 0.00460847272 & 0.00885934015 \\ 64 & 0.00714118035 & 0.00451414584 & 0.00516276871 & 0.00911959199 \\ 128 & 0.00713451215 & 0.00627103849 & 0.00438610835 & 0.00988291594 \\ \hline \end{tabular} \end{table}

Table 4: Mean error, standard deviation of error, and interval of confidence for mean error of Example 6.3 with n=20

Figure 4: Stock model’s approximate and exact solutions for m=32 and m=128 of Example 6.3

Figure 3: Example 6.2’s approximate solution for m=32, m=64 and m=128 with 50 iterations

## 7 Conclusion

Due to the difficulty in determining the exact solution for the majority of SVIEs, numerical techniques are required to address these problems. Historically, several numerical methods have been developed to approximate the solution of SVIEs; this article proposes a further numerical method for approximating SVIE solutions and includes quantitative estimates for specific SVIEs. An error analysis of the methodology has been conducted to validate its dependability. As demonstrated in the preceding examples, the numerical results show that the Walsh function approximation solves linear SVIEs more precisely than existing methods. This concept could be expanded to include nonlinear SVIEs and SVIEs with singular kernels, which can be used to solve numerous physical problems.
2301.03605
Field-Theoretic Analysis of Hadronization Using Soft Drop Jet Mass
One of the greatest challenges in quantum chromodynamics is understanding the hadronization mechanism, which is also crucial for carrying out precision physics with jet substructure. In this Letter, we combine recent advancements in our understanding of the field theory-based nonperturbative structure of the soft drop jet mass with precise perturbative calculations of its multi-differential variants at next-to-next-to-leading logarithmic (NNLL) accuracy. This enables a systematic study of its hadronization power corrections in a completely model-independent way. We calibrate and test hadronization models and their interplay with parton showers by comparing our universality predictions with various event generators for quark and gluon initiated jets in both lepton-lepton and hadron-hadron collisions. We find that hadronization models perform better for quark jets relative to gluon jets. Our results provide the necessary toolkit for precision studies with the soft drop jet mass motivating future analyses using real world collider data. The nontrivial constraints derived in our framework are useful for improving the modeling of hadronization and its interface with parton showers in next generation event generators.
Anna Ferdinand, Kyle Lee, Aditya Pathak
2023-01-09T19:00:01Z
http://arxiv.org/abs/2301.03605v1
# Field-Theoretic Analysis of Hadronization Using Soft Drop Jet Mass

###### Abstract

One of the greatest challenges in quantum chromodynamics is understanding the hadronization mechanism, which is also crucial for carrying out precision physics with jet substructure. In this _Letter_, we combine recent advancements in our understanding of the field theory-based nonperturbative structure of the soft drop jet mass with precise perturbative calculations of its multi-differential variants at next-to-next-to-leading logarithmic (NNLL) accuracy. This enables a systematic study of its hadronization power corrections in a completely model-independent way. We calibrate and test hadronization models and their interplay with parton showers by comparing our universality predictions with various event generators for quark and gluon initiated jets in both lepton-lepton and hadron-hadron collisions. We find that hadronization models perform better for quark jets relative to gluon jets. Our results provide the necessary toolkit for precision studies with the soft drop jet mass motivating future analyses using real world collider data. The nontrivial constraints derived in our framework are useful for improving the modeling of hadronization and its interface with parton showers in next generation event generators.

+ Footnote †: preprint: DESY–22–159

The study of jets and their substructure has become a very active program at high energy particle colliders in the last decade [1; 2]. A key development has been the use of jet grooming techniques [3; 4; 5; 6; 7; 8; 9; 10] that allow for theoretical control by eliminating contamination from the wide-angle soft radiation from the underlying event and pile-up, and by reducing hadronization effects. In particular, the soft drop (SD) grooming [6; 7; 8] has received the most widespread attention, inspiring many theoretical calculations both for jets in vacuum [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] and in medium [39; 40; 41; 42; 43], as well as several experimental analyses [44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. Among various groomed observables, the SD jet mass is by far the most extensively studied within both theoretical [6; 54; 55; 56; 57; 58; 59; 60] and experimental communities [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71], and has been explored for a variety of phenomenological applications, such as quantifying medium modification [61; 39; 68], and precision top quark mass [72; 73; 74] and strong coupling constant [75; 76] measurements. Theoretically, the SD jet mass is the most precisely studied groomed jet observable, with predictions available at next-to-next-to-next-to-leading logarithmic accuracy (N\({}^{3}\)LL) matched to next-to-next-to-leading order (NNLO) predictions for dijets at \(e^{+}e^{-}\) collisions [77], and next-to-next-to-leading-logarithmic (NNLL) accuracy [76] for jets at the LHC. At this level of precision, the hadronization power corrections become comparable in size to the perturbative accuracy, and cannot be accounted for using hadronization models [78; 79; 80] that are tuned to lower precision parton showers. In recent years, there has been significant progress in understanding these hadronization effects in the SD jet mass [81; 82; 83] using a field theory-based formalism [84; 85; 86; 87; 88; 89; 90], which allows for a model-independent description of nonperturbative (NP) power corrections for precision phenomenology.
Furthermore, this formalism imposes powerful constraints on the jet-flavor, kinematics and grooming parameter dependence of these NP corrections. Hence, by comparing these predictions with event generators, we now have a unique opportunity to carry out a nontrivial characterization of these hadronization models and their interplay with parton showers, which is often difficult to interpret and test. In this _Letter_, using the state-of-the-art theoretical advancements in our understanding of the SD jet mass, we achieve a systematic and complete field theory-based study of hadronization effects and test our predictions with multiple event generators.

_Hadronization corrections to groomed jet mass._--Compared to the ungroomed jet mass, the SD jet mass exhibits a much larger region of applicability for perturbation theory. This region is referred to as the soft drop operator expansion (SDOE) region, which is defined below and shown in Fig. 1 between the vertical lines.

Figure 1: An example of fit for nonperturbative parameters in Pythia 8.306 simulation of groomed jet mass. The insets show the distribution of low energy particles as heat maps around the soft drop stopping subjets in the transverse plane.

Here, hadronization effects can be studied using factorization in a systematic expansion. Using soft collinear effective theory (SCET) [91; 92; 93; 94], in Ref. [81] the leading hadronization corrections in the SDOE region were shown to depend on three \(\mathcal{O}(\Lambda_{\rm QCD})\) NP universal constants \(\{\Omega_{1\kappa}^{\mathfrak{o}},\Upsilon_{1,0\kappa}^{\oplus},\Upsilon_{1,1\kappa}^{\oplus}\}\), which solely depend on the parton \(\kappa=q,g\) initiating the jet, and are completely independent of the jet kinematics, such as the jet \(p_{T}\) (or \(E_{J}\)), rapidity \(\eta_{J}\), radius \(R\), and the SD parameters [7], the energy cut \(z_{\rm cut}\) and the angular modulation parameter \(\beta\), such that
\[\begin{split}\frac{1}{\sigma_{\kappa}}\frac{{\rm d}\sigma_{\kappa}}{{\rm d}m_{J}^{2}} =&\ \frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}}{{\rm d}m_{J}^{2}}-Q\Omega_{1\kappa}^{\mathfrak{o}}\frac{{\rm d}}{{\rm d}m_{J}^{2}}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\mathfrak{o}}}{{\rm d}m_{J}^{2}}\\ &+\frac{\Upsilon_{1,0\kappa}^{\oplus}+\beta\Upsilon_{1,1\kappa}^{\oplus}}{Q}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\oplus}}{{\rm d}m_{J}^{2}}+\cdots\,.\end{split} \tag{1}\]
Here \({\rm d}\sigma_{\kappa}\) and \({\rm d}\hat{\sigma}_{\kappa}\), respectively, refer to the hadron and parton level groomed jet mass cross sections for flavor \(\kappa\), with \(Q\) characterizing the hard scale of the jet. The weights \({\rm d}\hat{\sigma}_{\kappa}^{\mathfrak{o},\oplus}\) are perturbatively calculable. We note that, in contrast with analytical hadronization models employed in previous work [6; 60; 75; 89], Eq. (1) is a model-independent statement and includes hadron mass effects. In the SDOE region, the leading hadronization corrections are driven by a two-pronged dipole, which consists of an energetic collinear subjet at the core of the jet and a collinear-soft (c-soft) subjet that is responsible for stopping the grooming algorithm. The corrections represented by the ellipsis '\(\ldots\)' in Eq. (1) involve higher power corrections in \(\Lambda_{\rm QCD}\) and corrections from configurations that distort the two-pronged catchment area.
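In practice, Eq. (1) can be applied to binned spectra. The Python (NumPy) sketch below illustrates the bookkeeping: it takes a partonic jet mass spectrum together with the two perturbative weights and returns the hadron-level prediction. All array inputs, constant values, and the function name are placeholders rather than output of the NNLL calculation.

```python
import numpy as np

def hadron_level(m2, sigma_hat, w_shift, w_bdry, Q, Omega1, Ups10, Ups11, beta):
    # Binned version of Eq. (1): sigma_hat is the normalized partonic
    # spectrum (1/sigma) dsigma/dm_J^2 on the grid m2, and w_shift, w_bdry
    # are the shift and boundary weights defined in Eq. (2) below.
    shift = -Q * Omega1 * np.gradient(w_shift, m2)   # d/dm_J^2 of the shift weight
    boundary = (Ups10 + beta * Ups11) / Q * w_bdry
    return sigma_hat + shift + boundary
```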
(1) can also be seen as a factorization of NP effects at leading-logarithmic accuracy, where the strong ordering of angles ensures the two-pronged geometry. As the jet mass decreases, we enter the soft drop nonperturbative (SDNP) region, where the c-soft mode becomes nonperturbative and correspondingly the nonperturbative effects are of \(\mathcal{O}(1)\). The transition between these two regions is clearly visible in Fig. 1, where the insets show the distribution of low-energy NP particles in the transverse plane of the jet [81]. The statement of NP factorization in Eq. (1) presents us with a singular opportunity to probe hadronization in jets in a rich setting. As can be seen from Eq. (1), the consistency of the formalism requires that the three constants be sufficient to describe data measured from high energy colliders over a wide range of energies. The highly constraining structure given by Eq. (1) (constants being of \(\mathcal{O}(\Lambda_{\rm QCD})\), having a \(\beta\)-proportional coefficient \(\Upsilon_{1,1\kappa}^{\oplus}\), \(z_{\rm cut}\)-independence, etc.) makes this far from a trivial feat and hence useful for calibrating hadronization models. In this work, we demonstrate how the universality structure strongly constrains the NP parameters, allowing them to be accurately determined by considering various combinations of soft drop and kinematic parameters. This, for example, improves the prospects for measuring the strong coupling constant \(\alpha_{s}\) at the LHC. _Calculation of perturbative weights._-- Characterizing the two-pronged configuration of the collinear and the c-soft subjet in the SDOE region requires auxiliary measurements of the groomed jet radius \(R_{g}\) and soft subjet energy fraction \(z_{g}\) [7; 35; 34; 95], which after marginalizing give [82] \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\ominus}}{{\rm d}m_{J}^{2}} \equiv \int{\rm d}r_{g}\,r_{g}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}^{2}\hat{\sigma}_{\kappa}}{{\rm d}m_{J}^{2}{\rm d}r_{g}}\,, \tag{2}\] \[\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}\hat{\sigma}_{\kappa}^{\oplus}}{{\rm d}m_{J}^{2}} \equiv \int\frac{{\rm d}r_{g}{\rm d}z_{g}\,\delta\big{(}z_{g}-z_{\rm cut}r_{g}^{\beta}\big{)}}{r_{g}}\frac{1}{\hat{\sigma}_{\kappa}}\frac{{\rm d}^{3}\hat{\sigma}_{\kappa}}{{\rm d}m_{J}^{2}{\rm d}r_{g}{\rm d}z_{g}}\,.\] As the NP constants in Eq. (1) are independent of the jet kinematics and grooming parameters, all these dependencies are encapsulated by \({\rm d}\hat{\sigma}_{\kappa}^{\ominus,\oplus}\). The appearance of \(r_{g}=R_{g}/R\) in Eq. (2) is analogous to how the jet radius \(R\) appears in hadronization corrections for the ungroomed jet mass in the tail and for the jet \(p_{T}\) [90; 89]: \[m_{J,{\rm no\,sd}}^{2} \!= \!\hat{m}_{J,{\rm no\,sd}}^{2}\!+\!p_{T}R\,\Omega_{1\kappa}\,, \quad p_{T}=\hat{p}_{T}+\frac{1}{R}\Upsilon_{1\kappa}\,, \tag{3}\] where \(\Omega_{1\kappa},\Upsilon_{1\kappa}\sim\Lambda_{\rm QCD}\) are NP parameters and hatted variables are parton level values. In the case of the SD jet mass, the dynamically determined groomed jet radius \(R_{g}\) plays the role of \(R\).
Figure 2: NNLL results for perturbative weights in Eq. (1) of hadronization corrections (shown here for gluon jets). Bands denote perturbative uncertainty and vertical lines the extent of the fit region (see Eq. (7)). The factor of \(1/Q\) is included to illustrate the size of hadronization corrections.
The term in Eq. (1) with \({\rm d}\hat{\sigma}_{\kappa}^{\ominus}\) is analogous to the ungroomed jet mass shift correction in
the tail, but is now described by a _different constant_ \(\Omega_{1\kappa}^{\ominus}\) as \(m_{J}^{2}=\hat{m}_{J}^{2}+p_{T}R_{g}\Omega_{1\kappa}^{\ominus}\). The term in the second line in Eq. (1) with \(\mathrm{d}\hat{\sigma}_{\kappa}^{\oplus}\) is called the boundary correction. This effect is similar to the migration of events across \(p_{T}\)-bins due to hadronization. Near the "boundary" of the c-soft subjet passing/failing soft drop, i.e. when \(z_{g}\approx z_{\mathrm{cut}}r_{g}^{\beta}\), the partonic values \(\hat{z}_{g}\) and \(\hat{r}_{g}\) are modified due to hadronization as \[z_{g}=\hat{z}_{g}+\frac{1}{r_{g}}\frac{\Upsilon_{1,0\kappa}^{\oplus}}{p_{T}R}\,,\qquad r_{g}=\hat{r}_{g}-\frac{\Upsilon_{1,1\kappa}^{\oplus}}{p_{T}R}\,. \tag{4}\] Here, \(\Upsilon_{1,0\kappa}^{\oplus}\) characterizes the shift in the \(p_{T}\) of the c-soft subjet analogous to the jet \(p_{T}\) shift in Eq. (3), and \(\Upsilon_{1,1\kappa}^{\oplus}\) describes the change in the subjet location relative to the collinear subjet. The combination of the two gives rise to the linear structure \(\Upsilon_{1\kappa}^{\oplus}=\Upsilon_{1,0\kappa}^{\oplus}+\beta\Upsilon_{1,1\kappa}^{\oplus}\) as shown in Eq. (1), and constitutes a nontrivial prediction. Finally, it is useful to factor out the parton level groomed jet mass cross section from \(\mathrm{d}\hat{\sigma}^{\ominus,\oplus}\): \[\frac{\mathrm{d}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}}C_{1}^{\kappa}(m_{J}^{2})\equiv\frac{\mathrm{d}\hat{\sigma}_{\kappa}^{\ominus}}{\mathrm{d}m_{J}^{2}}\,,\quad\frac{\mathrm{d}\hat{\sigma}_{\kappa}}{\mathrm{d}m_{J}^{2}}C_{2}^{\kappa}(m_{J}^{2})\equiv\frac{\mathrm{d}\hat{\sigma}_{\kappa}^{\oplus}}{\mathrm{d}m_{J}^{2}}\,. \tag{5}\] This definition is convenient as it will allow us to combine the analytical calculation of the coefficients \(C_{1,2}^{\kappa}(m_{J}^{2})\) with the parton shower jet mass cross section \(\mathrm{d}\hat{\sigma}_{\kappa}\) as discussed below. In Ref. [81], \(C_{1,2}^{\kappa}(m_{J}^{2})\) were computed in the coherent branching framework at LL accuracy. The first big step towards improving the accuracy of these coefficients was achieved in Ref. [82] by recasting them as moments of the doubly differential cross section as in Eq. (2) and computing them at NLL\({}^{\prime}\) accuracy in the SDOE region. In this work, we employ a further improved calculation at NNLL accuracy described in the companion paper, Ref. [83], where the matching of the doubly differential cross section in the ungroomed region is included for a correct treatment of the soft drop cusp location at NNLL. In Fig. 2, we show calculations of \(\mathrm{d}\hat{\sigma}_{\kappa}^{\ominus,\oplus}\) for gluon jets at NNLL accuracy. With \(\mathcal{O}(1\,\mathrm{GeV})\) NP constants and kinematic prefactors as shown in Eq. (1), we see that the leading hadronization corrections can be as large as 10% for small jet masses. _Calibrating hadronization models._-- With state-of-the-art NNLL perturbative results for \(C_{1,2}^{\kappa}(m_{J}^{2})\), we are in a position to carry out a precise calibration of hadronization models. Furthermore, by incorporating the NNLL perturbative uncertainty, we are able to significantly improve upon the analysis of Ref. [82], which relied on LL predictions lacking uncertainty estimates.
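To make the bookkeeping in Eqs. (1) and (5) concrete, the following minimal sketch (ours, not the authors' code) assembles a hadron-level spectrum from a parton-level one; all arrays and toy shapes below are placeholder assumptions, not actual NNLL inputs.

```python
import numpy as np

# Toy illustration of Eq. (1): hadron-level spectrum from a parton-level input
# plus shift and boundary NP corrections. All arrays/values are placeholders.
m2 = np.linspace(50.0, 2000.0, 200)          # jet mass squared grid [GeV^2]
sigma_part = np.exp(-m2 / 500.0) / 500.0     # toy (1/sigma) dsigma/dm^2, parton level
C1 = 0.5 * np.ones_like(m2)                  # shift weight C1(m_J^2) of Eq. (5)
C2 = 0.1 * np.ones_like(m2)                  # boundary weight C2(m_J^2) of Eq. (5)

Q = 800.0                                    # hard scale [GeV]
omega1, ups10, ups11, beta = 0.55, -0.57, 1.06, 1.0   # NP constants [GeV]

shift = -Q * omega1 * np.gradient(sigma_part * C1, m2)     # second term of Eq. (1)
boundary = (ups10 + beta * ups11) / Q * (sigma_part * C2)  # third term of Eq. (1)
sigma_had = sigma_part + shift + boundary
```

The same structure, with the weights computed at NNLL, underlies the fits described below.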
We simulate \(e^{+}e^{-}\to gg\), \(e^{+}e^{-}\to q\bar{q}\), \(pp\to Z+q\) jet and \(pp\to Z+g\) jet processes using Pythia 8.306 [78], Vincia 2.3 [96] and Herwig 7.2.3 [80] parton showers with their default hadronization models. We reconstruct anti-\(k_{T}\) [97] jets with \(R=0.8\) using Fastjet [98], and analyze them using the jet analysis software JETlib written by two of the authors [99]. For \(e^{+}e^{-}\) collisions, we sample both jets in the dijet configuration, while only using the leading jet in \(pp\) collisions. As the NP parameters are explicitly predicted to be independent of the jet kinematics and grooming parameters, we carry out the analysis using a wide range of kinematic and grooming parameter choices. In \(e^{+}e^{-}\) collisions, we analyze events at center of mass energies \(Q=500,750,1000\) GeV, while in \(pp\), we use jets with \(p_{T}\in\{[400,600],[600,800],[800,1000]\}\) GeV and soft drop parameters \(z_{\mathrm{cut}}\in\{0.05,0.1,0.15,0.2\}\) and \(\beta\in\{0,0.5,1,1.5,2\}\). We begin by explicitly defining the SDOE region where our analysis is carried out. We first define a dimensionless variable \(\xi\equiv m_{J}^{2}/Q^{2}\), where \[Q^{(pp)}\equiv p_{T}R\,,\qquad Q^{(ee)}\equiv 2E_{J}\,. \tag{6}\] In terms of \(\xi\), the SDOE region is then defined as \(\xi\in\left[\xi_{\mathrm{SDOE}},\xi_{0}^{\prime}\right]\), where \[\xi_{\mathrm{SDOE}}\equiv\xi_{0}\Big{(}\frac{\rho\Lambda_{\mathrm{QCD}}}{Q\xi_{0}}\Big{)}^{\frac{2+\beta}{1+\beta}}\,,\quad\xi_{0}^{\prime}\equiv\frac{\xi_{0}}{(1+\zeta^{2})^{\frac{2+\beta}{2}}}\,. \tag{7}\] Here \(\xi_{0}\) is the location of the soft drop cusp [76; 83]: \[\xi_{0}^{(pp)}=z_{\mathrm{cut}}\Big{(}\frac{R}{R_{0}}\Big{)}^{\beta}\,,\quad\xi_{0}^{(ee)}=z_{\mathrm{cut}}\bigg{(}\sqrt{2}\frac{\tan\frac{R}{2}}{\sin\frac{R_{0}}{2}}\bigg{)}^{\beta}\,, \tag{8}\] while \(\zeta\) is defined by \[\zeta^{(pp)}\equiv\frac{R}{2\cosh\eta_{J}}\,,\qquad\zeta^{(ee)}\equiv\tan\frac{R}{2}\,, \tag{9}\] such that \(\xi_{0}^{\prime}\) in Eq. (7) is the soft-wide angle transition point of the NNLL calculation. We set \(\Lambda_{\mathrm{QCD}}\to 1\) GeV, the typical scale of the transition from parton showers to hadronization. The parameter \(\rho\) in Eq. (7) determines the onset of the SDOE region, and we set \(\rho=4.5\). In principle, any choice satisfying \(\rho\gg 1\) is acceptable. We explore other choices of \(\rho\) in the _Supplemental Material_.
Figure 3: Weighted cross sections for hadronization corrections normalized to the parton level jet mass spectrum, as defined in Eq. (5), for \(z_{\mathrm{cut}}=0.1\) and \(\beta=1\).
In Fig. 3 we show a comparison of the NNLL computation of \(C_{1,2}^{\kappa}\) with partonic Pythia and Herwig. The parton level results from Vincia are found to be almost identical to Pythia. We find good agreement of the NNLL \(C_{1}^{\kappa}\) with MC for all four processes. The unusually small errors for \(C_{1}^{\kappa}\) result from a cancellation between correlated uncertainties in the two factors in Eq. (5). For \(pp\), the agreement for the boundary term is poor for jet masses close to the cusp due to the initial-state radiation (ISR) contribution. However, as seen in Fig. 2, the NP corrections in the cusp region are relatively suppressed, and NP corrections from ISR are also expected to be smaller as they involve the subleading \(r_{g}^{2}\) moment of the boundary cross section [83]. Consequently, these effects do not significantly impact the analysis below.
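Since the fit window of Eq. (7) is pure arithmetic, it can be packaged into a small helper. The sketch below is our own, using the \(pp\) conventions of Eqs. (6)-(9); note that \(R_{0}=1\) is an assumption on our part, as \(R_{0}\) is not specified in the text above.

```python
import numpy as np

def sdoe_window_pp(pT, R, eta_J, zcut, beta, R0=1.0, Lqcd=1.0, rho=4.5):
    """SDOE fit window [xi_SDOE, xi0'] in xi = m_J^2 / (pT R)^2, pp conventions."""
    Q = pT * R                                    # Eq. (6)
    xi0 = zcut * (R / R0) ** beta                 # Eq. (8), soft drop cusp location
    xi_sdoe = xi0 * (rho * Lqcd / (Q * xi0)) ** ((2 + beta) / (1 + beta))  # Eq. (7)
    zeta = R / (2.0 * np.cosh(eta_J))             # Eq. (9)
    xi0_prime = xi0 / (1 + zeta ** 2) ** ((2 + beta) / 2.0)                # Eq. (7)
    return xi_sdoe, xi0_prime

# e.g. a 500 GeV, R = 0.8 jet at mid-rapidity with z_cut = 0.1, beta = 1:
print(sdoe_window_pp(pT=500.0, R=0.8, eta_J=0.0, zcut=0.1, beta=1.0))
```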
Finally, we perform a least-squares fit for the NP parameters by defining our \(\chi^{2}\) statistic as \[\chi^{2}\equiv\sum_{i}\frac{\big{[}(\vec{\sigma}_{\kappa,\mathrm{had}}^{\mathrm{MC}})_{i}-\big{(}\vec{\sigma}_{\kappa,\mathrm{part+NP}}(\Omega_{1\kappa}^{\ominus},\ldots)\big{)}_{i}\big{]}^{2}}{(\Delta\vec{\sigma})_{i}^{2}}. \tag{10}\] Here, \(\vec{\sigma}_{X}\) is a vector of cross section values for \(n_{\mathrm{bins}}=10\) bins in the fit range and all permutations of \(p_{T}\) (or \(E_{J}\)), \(z_{\mathrm{cut}}\), and \(\beta\) values considered above. We denote the hadron level MC groomed jet mass cross section by \(\vec{\sigma}_{\kappa,\mathrm{had}}^{\mathrm{MC}}\), and define \(\vec{\sigma}_{\kappa,\mathrm{part+NP}}\) by adding the NP constants \(\Omega_{1\kappa}^{\ominus},\Upsilon_{1,0\kappa}^{\oplus},\Upsilon_{1,1\kappa}^{\oplus}\) and the NNLL computation of \(C_{1,2}^{\kappa}\) in Eq. (5) to the parton level MC spectrum \(\mathrm{d}\hat{\sigma}_{\kappa}^{\mathrm{MC}}\) following Eq. (1). The uncertainty in the denominator is defined as \[(\Delta\vec{\sigma})_{i}^{2}\equiv\big{(}0.05(\vec{\sigma}_{\mathrm{part\times C_{1}}})_{i}\big{)}^{2}+\big{(}0.25(\vec{\sigma}_{\mathrm{part\times C_{2}}})_{i}\big{)}^{2}\,, \tag{11}\] where, guided by the size of the perturbative uncertainties in Fig. 2, we have assigned 5% and 25% uncertainty to the weighted cross sections for the shift and boundary corrections, respectively. The NP constants \(\Omega_{1\kappa}^{\ominus},\Upsilon_{1,0\kappa}^{\oplus},\Upsilon_{1,1\kappa}^{\oplus}\) are then varied to minimize this \(\chi^{2}\) statistic. An example of the fit for the mass distribution is shown in Fig. 1. In Tab. 1, we present the fit results for the NP constants with scale variations of \(C_{1,2}^{\kappa}\) for Pythia. As anticipated, the parameters are \(\lesssim 1\) GeV. We also find that the parameter values for quark jets agree between the two quark processes within uncertainties. Even when the NP parameters for quark jets are simultaneously fit in the \(e^{+}e^{-}\) and \(pp\) processes, we find an excellent \(\chi^{2}\) value of 0.840/dof. This is expected, as soft drop isolates the jet from surrounding radiation. To further investigate this, we show correlations between \(\Omega_{1\kappa}^{\ominus}\) and \(\Upsilon_{1,0\kappa}^{\oplus}\) for the four processes in Fig. 4, where each ellipse represents a \(1\sigma\) deviation. To account for perturbative uncertainties, we repeat the fit by varying \(C_{1,2}^{\kappa}\) up and down within the uncertainty band shown in Fig. 3. We observe excellent agreement within uncertainties between the NP parameters for quark jets in \(pp\) and \(e^{+}e^{-}\) collisions in Pythia simulations, and a moderate agreement for Vincia and Herwig. In contrast, while Herwig exhibits similar levels of agreement for gluon jets and quark jets at both colliders, Pythia and Vincia show significant disagreement. This shows that, contrary to the expectation for groomed jets, the hadronization modeling of gluon jets in isolation in \(e^{+}e^{-}\) collisions in Pythia and Vincia differs significantly from that of jets in hadron colliders. Additionally, the differing results between Pythia and Vincia point to the interplay of parton showers with hadronization models. In the _Supplemental Material_ we show correlations in the other two combinations of NP parameters, which show similar behavior, as well as numerical fit results for Herwig and Vincia.
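Because the model of Eq. (1) is linear in the three NP constants, the \(\chi^{2}\) minimization of Eqs. (10)-(11) reduces to a weighted linear least-squares problem. The following sketch is ours; all input vectors are hypothetical and would in practice be stacked over the bins and \((p_{T},z_{\rm cut},\beta)\) permutations described above.

```python
import numpy as np

# Sketch of the chi^2 fit in Eqs. (10)-(11). Inputs (per stacked bin):
#   sigma_had, sigma_part : hadron/parton MC spectra
#   w_shift = -Q d/dm^2(sigma_part*C1)          (coefficient of Omega1)
#   w_b0    = sigma_part*C2/Q                   (coefficient of Ups_{1,0})
#   w_b1    = beta * sigma_part*C2/Q            (coefficient of Ups_{1,1})
#   dsigma  : per-bin uncertainty from Eq. (11)
def fit_np_constants(sigma_had, sigma_part, w_shift, w_b0, w_b1, dsigma):
    y = (sigma_had - sigma_part) / dsigma           # residual the NP terms must explain
    X = np.stack([w_shift, w_b0, w_b1], axis=1) / dsigma[:, None]
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # (Omega1, Ups10, Ups11) minimizing chi^2
    chi2 = np.sum((y - X @ theta) ** 2)
    return theta, chi2 / (len(y) - 3)               # best-fit constants and chi^2/dof
```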
\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Quark Jets & \(\Omega_{1q}^{\ominus}\)(GeV) & \(\Upsilon_{1,0q}^{\oplus}\)(GeV) & \(\Upsilon_{1,1q}^{\oplus}\)(GeV) & \(\chi_{\mathrm{min}}^{2}\)/dof. \\ \hline \hline \(e^{+}e^{-}\!\to\!q\bar{q}\) & \(0.55^{+0.06}_{-0.03}\) & \(-0.57^{+0.19}_{-0.19}\) & \(1.06^{+0.31}_{-0.35}\) & \(0.77^{+0.02}_{-0.00}\) \\ \hline \(pp\!\to\!Z\!+\!q\) & \(0.56^{+0.05}_{-0.14}\) & \(-0.73^{+0.29}_{-0.28}\) & \(0.89^{+0.27}_{-0.25}\) & \(0.65^{+0.01}_{-0.02}\) \\ \hline \hline Gluon Jets & \(\Omega_{1g}^{\ominus}\)(GeV) & \(\Upsilon_{1,0g}^{\oplus}\)(GeV) & \(\Upsilon_{1,1g}^{\oplus}\)(GeV) & \(\chi_{\mathrm{min}}^{2}\)/dof. \\ \hline \hline \(e^{+}e^{-}\!\to\!gg\) & \(1.92^{+0.16}_{-0.32}\) & \(-0.48^{+0.27}_{-0.22}\) & \(0.87^{+0.75}_{-0.25}\) & \(3.13^{+0.105}_{-0.20}\) \\ \hline \(pp\!\to\!Z\!+\!g\) & \(0.93^{+0.06}_{-0.12}\) & \(-0.24^{+0.11}_{-0.01}\) & \(0.89^{+0.20}_{-0.23}\) & \(1.34^{+0.20}_{-0.10}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Fit results for NP constants in Pythia 8.306 for quark and gluon jets in \(e^{+}e^{-}\) and \(pp\) collisions.
Figure 4: Testing jet flavor universality of soft drop NP parameters in Pythia 8.306 (top), Vincia 2.3 (bottom, left) and Herwig 7.2.3 (bottom, right).
Next, we test the grooming parameter independence of these NP constants. We follow the same procedure as Ref. [81] and test this behavior by comparing the fit results for individual \(z_{\rm cut}\) and \(\beta\) values with the global fit. In Fig. 5, we demonstrate the linear \(\beta\)-dependence of the boundary correction by fitting for a single parameter \(\Upsilon^{\oplus}_{1\kappa}(\beta)\) for each value of \(\beta\). Because of degeneracy in the NP parameters, we fix \(\Omega^{\ominus}_{1\kappa}\) to its global-fit value in this case. The error bars take into account the perturbative uncertainty in \(C^{\kappa}_{1,2}\) by re-fitting with minimum and maximum variations. We find that all three simulations perform well in each of the four cases. In Fig. 6, we repeat the same procedure to test the \(z_{\rm cut}\)-independence of the NP parameters. We find here that the three event generators pass the test for both quark and gluon jets in \(e^{+}e^{-}\) collisions, but exhibit a linear trend in \(z_{\rm cut}\) for both flavors in \(pp\) collisions. The larger \(\chi^{2}\) values for gluon jets, as seen in Tab. 1 for Pythia (also true for Herwig and Vincia), suggest that the modeling of hadronization in gluon jets is less consistent with our field theory predictions. Finally, our analysis of the \(e^{+}e^{-}\to q\bar{q}\) process using NNLL predictions of \(C^{\kappa}_{1,2}\) demonstrates significant improvement in the universality behavior of \(z_{\rm cut}\) and \(\beta\), compared to Ref. [81] where LL predictions were used.1 In conclusion, while our universality tests of the NP parameters generally display the expected behaviors in all the cases considered, they also reveal some tension with the hadronization models, pointing to interesting avenues for further improvement and motivating the use of real-world collider data for further analyses. Footnote 1: Note that our numerical results for \(e^{+}e^{-}\to q\bar{q}\) also differ from those in Ref. [81] due to a different prescription for the error in Eq. (11) and newer versions of the MC generators.
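The linearity test of Fig. 5 amounts to fitting one boundary constant per \(\beta\) value and checking that the results fall on a line \(\Upsilon_{1\kappa}^{\oplus}(\beta)=\Upsilon_{1,0\kappa}^{\oplus}+\beta\Upsilon_{1,1\kappa}^{\oplus}\). A minimal version of this check (our sketch, with hypothetical per-\(\beta\) fit values) is:

```python
import numpy as np

# Checking the predicted linearity Upsilon(beta) = Ups10 + beta*Ups11.
betas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
ups_fitted = np.array([-0.55, -0.10, 0.45, 0.95, 1.50])   # hypothetical per-beta fits
ups11, ups10 = np.polyfit(betas, ups_fitted, 1)           # slope, intercept
residual = ups_fitted - (ups10 + ups11 * betas)
print(ups10, ups11, np.max(np.abs(residual)))             # large residuals signal tension
```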
_Conclusions.--_ In this _Letter_, we have presented a systematic framework for analyzing nonperturbative corrections in the soft drop jet mass by bringing together earlier work on nonperturbative factorization and high precision calculations of multi-differential soft drop cross sections. Our analysis with hadronization models successfully demonstrates that the nonperturbative parameters exhibit the universal behaviors predicted by field theory. Our analysis is also directly applicable to precision phenomenology involving the soft drop jet mass. For example, in Ref. [76] our results are used to assess the impact of the NP corrections on the sensitivity and the ultimate precision achievable on \(\alpha_{s}\) at the LHC using the SD jet mass. Findings in Ref. [76] indicate that the hadronization effects in the \(\beta=1\) case, for instance, are 3% (8%) for quark (gluon) jets when the nonperturbative parameters in Eq. (1) are left unconstrained, which are of the same size as the NNLL perturbative uncertainty. We anticipate that with high precision calculations for the soft drop jet mass and the boundary correction (\(C^{\kappa}_{2}\) in Fig. 3), it will be possible to significantly constrain some or all of the NP constants, and hence improve the ultimate precision achievable in the \(\alpha_{s}\) determination at the LHC. In summary, our work thus provides a crucial understanding of the hadronization corrections necessary for precision measurements with the soft drop jet mass, a benchmark tool for improving hadronization modeling in MC event generators, and motivation for analyses with real-world collider data. _Acknowledgements._-- We would like to thank Mrinal Dasgupta and Michael Seymour for helpful discussions. We are grateful to Simon Platzer for many discussions and support with the analysis with Herwig. We thank Holmfridur Hannesdottir, Johannes Michel and Iain Stewart for numerous discussions and feedback on the manuscript. We provide a numerical implementation of the NNLL calculation in C++ building on core classes of SCETlib [100], which will be made available as part of the scetlib:sd module [101]. We thank Johannes Michel for support with the above-mentioned implementation in SCETlib. KL was supported by the LDRD program of LBNL and the U.S. DOE under contract number DE-SC0011090. AP acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF. AP was a member of the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics, which is supported by the UK Science and Technology Facilities Council (STFC) under grant number ST/T001038/1. AF also gratefully acknowledges support from the above-mentioned grant.
2310.15726
Uniqueness of conservative solutions to the modified Camassa-Holm equation via Characteristics
In this paper, for a given conservative solution, we introduce a set of auxiliary variables tailored to this particular solution, and prove that these variables satisfy a particular semilinear system having unique solutions. In turn, we get the uniqueness of the conservative solution in the original variables.
Zhen He, Zhaoyang Yin
2023-10-24T11:01:42Z
http://arxiv.org/abs/2310.15726v1
# Uniqueness of conservative solutions to the modified Camassa-Holm equation via Characteristics ###### Abstract In this paper, for a given conservative solution, we introduce a set of auxiliary variables tailored to this particular solution, and prove that these variables satisfy a particular semilinear system having unique solutions. In turn, we get the uniqueness of the conservative solution in the original variables. _2020 Mathematics Subject Classification_: 35Q30, 35Q84, 76B03, 76D05. _Keywords_: A modified Camassa-Holm equation; Global weak solutions; Uniqueness. ###### Contents * 1 Introduction * 2 Basic definitions and results * 3 Preliminary lemmas * 4 Proof of Theorem 2.3 ## 1 Introduction In this paper, we consider the Cauchy problem of the following modified Camassa-Holm (MOCH) equation [26] \[\left\{\begin{array}{l}\gamma_{t}=\lambda(v_{x}-\gamma-\frac{1}{\lambda}v\gamma)_{x},\quad t>0,\ x\in\mathbb{R},\\ v_{xx}=\gamma_{x}+\frac{\gamma^{2}}{2\lambda},\quad t\geq 0,\ x\in\mathbb{R},\\ \gamma(0,x)=\gamma_{0}(x),\quad x\in\mathbb{R},\end{array}\right. \tag{1.1}\] as it was called by Gorka and Reyes [16]. Let \(G=\partial_{x}^{2}-1\) and \(n=Gv\). The equation (1.1) can be rewritten as \[\left\{\begin{array}{l}\gamma_{t}+G^{-1}n\gamma_{x}=\frac{\gamma^{2}}{2}+\lambda G^{-1}n-\gamma G^{-1}n_{x},\quad t>0,\ x\in\mathbb{R},\\ n=\gamma_{x}+\frac{\gamma^{2}}{2\lambda},\quad t\geq 0,\ x\in\mathbb{R},\\ \gamma(0,x)=\gamma_{0}(x),\quad x\in\mathbb{R}.\end{array}\right. \tag{1.2}\] The equation (1.1) was first studied through the geometric approach in [6, 18]. Conservation laws and the existence and uniqueness of weak solutions to the modified Camassa-Holm equation were presented in [16]. We observe that if we solve (1.2), then \(n\) will formally satisfy the following physical form of the Camassa-Holm equation \[n_{t}=-2vn_{x}-nv_{x}+\lambda v_{x}. \tag{1.3}\] If \(\lambda=0\), it reduces to the well-known Camassa-Holm (CH) equation. As far as we know, the CH equation has many properties, such as: integrability [7, 10, 11], a Hamiltonian structure, and infinitely many conservation laws [7, 15]. The local well-posedness of the CH equation has been proved in Sobolev spaces \(H^{s},s>\frac{3}{2}\) and in Besov spaces \(B^{s}_{p,r}\) with \(s>\max\{\frac{3}{2},1+\frac{1}{p}\}\) or \(s=1+\frac{1}{p},p\in[1,2],r=1\) in [8, 9, 13, 14, 17, 19, 20]. In [12, 25], the authors showed the existence and uniqueness of global weak solutions for the CH equation. In addition, Bressan and Constantin studied the existence of global conservative solutions [3] and global dissipative solutions [4] in \(H^{1}(\mathbb{R})\). Later, Bressan and his collaborators applied a similar technique to a variational wave equation [5] and obtained uniqueness [2]. Utilizing the good structure of the semilinear ODE system proposed in [5], Bressan and Chen proved that, for an open dense set of \(C^{3}\) initial data, the solution is piecewise smooth in the \(t\)-\(x\) plane, while the gradient \(u_{x}\) can blow up along finitely many characteristic curves [1]. Later, Li and Zhang studied a similar property in [21]. Luo, Qiao and Yin studied the local well-posedness in \(B^{s}_{p,r}\), \(s>\max\{\frac{1}{2},\frac{1}{p}\}\) or \(s=\frac{1}{p},1\leq p\leq 2,r=1\), blow-up phenomena, global existence for the periodic MOCH and global conservative solutions [22, 23, 24]. The remainder of the paper is organized as follows. In Section 2 we review basic definitions and state our main uniqueness result Theorem 2.3.
Section 3 establishes the key technical tool (Lemma 3.2), determining a unique characteristic curve through each initial point. In Section 4 we conclude the proof of the main theorem. From this, we can easily deduce that \(\gamma=(\partial_{x}-1)m\) is the unique solution to (1.1). ## 2 Basic definitions and results First, we consider the initial value problem \[\left\{\begin{array}{l}x_{t}=u(t,x)\\ x(0)=\bar{x}.\end{array}\right. \tag{2.1}\] Now, let us consider the transformation \(m=(\partial_{x}+1)G^{-1}\gamma=(\partial_{x}-1)^{-1}\gamma\). Then we have \(\gamma=(\partial_{x}-1)m\), and therefore, equation (1.2) is changed to \[m_{t}+um_{x}=F, \tag{2.2}\] where \[F=mu-P_{4}-P_{4x}+\frac{1}{2}(-m^{2}+P_{3}+P_{3x})+\lambda P_{1x}+\frac{1}{2}(P_{5}+P_{5x}-P_{2}),\] \[P_{1}=G^{-1}m=p*m,\qquad P_{2}=p*m^{2},\qquad P_{3}=p*m_{x}^{2},\] \[u=m-P_{1x}+P_{1}+\frac{1}{2\lambda}(P_{3}+P_{2}-P_{2x}),\qquad P_{4}=p*H,\] \[P_{5}=p*P_{3},\qquad H=um_{x}-um,\qquad p=\frac{1}{2}e^{-|x|}.\] For the initial data, we have \[m(0,x)=\bar{m}(x)=(\partial_{x}-1)^{-1}\bar{\gamma}(x). \tag{2.3}\] For smooth solutions, differentiating (2.2) w.r.t. \(x\) one obtains \[m_{xt}+u_{x}m_{x}+um_{xx}=F_{x}. \tag{2.4}\] Multiplying (2.4) by \(m_{x}\), we obtain \[\frac{1}{2}\frac{d}{dt}m_{x}^{2}+\frac{1}{2}(um_{x}^{2})_{x}=E, \tag{2.5}\] where \[E =(mu)_{x}m_{x}-P_{4x}m_{x}-Hm_{x}-P_{4}m_{x}+\frac{1}{2}(-2mm_{x}^{2}+P_{3x}m_{x}+m_{x}^{3}+P_{3}m_{x})\] \[\quad+\lambda(m+P_{1})m_{x}+\frac{1}{2\lambda}(P_{5x}+P_{5}+P_{3}-P_{2}+m^{2})m_{x}-\frac{1}{2}u_{x}m_{x}^{2}\] \[=(mu)_{x}m_{x}-P_{4x}m_{x}-Hm_{x}-P_{4}m_{x}+\frac{1}{2}(P_{3x}m_{x}+P_{3}m_{x})+\lambda(m+P_{1})m_{x}\] \[\quad+\frac{1}{2\lambda}(P_{5x}+P_{5}+P_{3}-P_{2}+m^{2})m_{x} \tag{2.6}\] \[\quad-\frac{1}{2}(mm_{x}^{2}-P_{1}m_{x}^{2}+m_{x}^{2}P_{1x}-\frac{1}{2\lambda}(P_{3}+P_{2}-P_{2x})m_{x}^{2}).\] We denote \(m_{x}^{2}\) by \(w\); then \[\frac{d}{dt}w+(uw)_{x}=2E. \tag{2.7}\] Because of (2.7), the characteristic curve \(t\to x(t)\) satisfies the additional equation \[\frac{d}{dt}\int_{-\infty}^{x(t)}m_{x}^{2}(t,x)dx=\int_{-\infty}^{x(t)}2E(t,x)dx. \tag{2.8}\] We introduce new coordinates \((t,\beta)\), where \(\beta\) is implicitly defined by \[x(t,\beta)+\int_{-\infty}^{x(t,\beta)}m_{x}^{2}(t,\xi)d\xi=\beta. \tag{2.9}\] At a time \(t\) where the measure \(\mu_{(t)}\) is not absolutely continuous w.r.t. the Lebesgue measure, we define \(x(t,\beta)\) to be the unique point \(x\) such that \[x(t,\beta)+\mu_{(t)}(]-\infty,x[)\leq\beta\leq x(t,\beta)+\mu_{(t)}(]-\infty,x]). \tag{2.10}\] We shall then prove the Lipschitz continuity of \(x\) and \(m\) as functions of the variables \((t,\beta)\). Before providing our main results in this paper, let us first introduce some definitions of the global conservative solutions for (1.1) and (2.2). **Definition 2.1**.: _Let \(\bar{\gamma}\in L^{2}(\mathbb{R})\)._
_We say that \(\gamma(t,x)\) is a locally conservative solution to the Cauchy problem (1.1) if \(\gamma\) satisfies the following equality:_ \[\int_{\mathbb{R}^{+}}\int_{\mathbb{R}}\big{(}\gamma\psi_{t}+(\gamma\partial_{x}G^{-1}\gamma+\frac{1}{2\lambda}\gamma G^{-1}\gamma^{2})\psi_{x}\big{)}(t,x)dxdt\] \[=-\int_{\mathbb{R}^{+}}\int_{\mathbb{R}}\big{(}(\frac{\gamma^{2}}{2}+\lambda\partial_{x}G^{-1}\gamma+\frac{1}{2}G^{-1}\gamma^{2})\psi\big{)}(t,x)dxdt-\int_{\mathbb{R}}\bar{\gamma}(x)\psi(0,x)dx,\] _for every test function \(\psi\in C_{c}^{\infty}(\mathbb{R}^{2})\)._ **Definition 2.2**.: _A solution \(m=m(t,x)\) is conservative if \(w=m_{x}^{2}\) provides a distributional solution to the balance law (2.5), namely_ \[\int_{0}^{\infty}\int[m_{x}^{2}\phi_{t}+um_{x}^{2}\phi_{x}+2E\phi]dxdt+\int m_{0,x}^{2}\phi(0,x)dx=0 \tag{2.11}\] _for every test function \(\phi\in C_{c}^{1}(\mathbb{R}^{2})\)._ Our main theorem is stated as follows. **Theorem 2.3**.: _For any initial data \(\bar{m}\in H^{1}(\mathbb{R})\), the Cauchy problem (2.2) has a unique conservative solution._ **Corollary 2.4**.: _For any initial data \(\gamma_{0}\in L^{2}(\mathbb{R})\), the Cauchy problem (1.1) has a unique conservative solution._ ## 3 Preliminary lemmas **Lemma 3.1**.: _Let \(m=m(t,x)\) be a conservative solution of equation (2.2). Then, for every \(t\geq 0\), the maps \(\beta\to x(t,\beta)\) and \(\beta\to m(t,\beta)\triangleq m(t,x(t,\beta))\) implicitly defined by (2.10) are Lipschitz continuous with constant 1. The map \(t\to x(t,\beta)\) is also Lipschitz continuous with a constant depending only on \(\|m_{0}\|_{H^{1}}\)._ Proof.: Fix any time \(t\geq 0\). Then the map \[x\rightarrow\beta(t,x)\triangleq x+\int_{-\infty}^{x}m_{x}^{2}(t,y)dy \tag{3.1}\] is right continuous and strictly increasing. Hence it has a well-defined, continuous, nondecreasing inverse \(\beta\to x(t,\beta)\). If \(\beta_{1}<\beta_{2}\), by (2.10) we obtain \[x(t,\beta_{2})-x(t,\beta_{1})+\mu_{(t)}(]-\infty,x(\beta_{2})[)-\mu_{(t)}(]-\infty,x(\beta_{1})])\leq\beta_{2}-\beta_{1}, \tag{3.2}\] which implies \[x(t,\beta_{2})-x(t,\beta_{1})\leq\beta_{2}-\beta_{1}, \tag{3.3}\] showing that the map \(\beta\to x(t,\beta)\) is a contraction. According to (3.2), \[|m(t,x(t,\beta_{2}))-m(t,x(t,\beta_{1}))| \leq\int_{x(t,\beta_{1})}^{x(t,\beta_{2})}|m_{x}|dx\leq\int_{x(t,\beta_{1})}^{x(t,\beta_{2})}\frac{1}{2}(1+m_{x}^{2})dx\] \[=\frac{1}{2}(x(t,\beta_{2})-x(t,\beta_{1}))+\int_{x(t,\beta_{1})}^{x(t,\beta_{2})}\frac{1}{2}m_{x}^{2}dx\] \[\leq\frac{1}{2}(x(t,\beta_{2})-x(t,\beta_{1}))+\frac{1}{2}\mu_{(t)}(]x(t,\beta_{1}),x(t,\beta_{2})[) \tag{3.4}\] \[\leq\frac{1}{2}(\beta_{2}-\beta_{1}).\] Next we prove the Lipschitz continuity of the map \(t\to x(t,\beta)\). By Sobolev's inequality, we have \[\|m\|_{L^{\infty}}\leq\|m\|_{H^{1}}.
\tag{3.5}\] Using Young's inequality, we get \[\|P_{1}\|_{L^{\infty}} \leq\|e^{-|\cdot|}\|_{L^{1}}\|m\|_{L^{\infty}}\leq C\|m\|_{H^{1}},\quad\|P_{1x}\|_{L^{\infty}}\leq\|sgn(\cdot)e^{-|\cdot|}\|_{L^{1}}\|m\|_{L^{\infty}}\leq C\|m\|_{H^{1}}, \tag{3.6}\] \[\|P_{2}\|_{L^{\infty}} \leq\|e^{-|\cdot|}\|_{L^{\infty}}\|m^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}},\quad\|P_{2x}\|_{L^{\infty}}\leq\|sgn(\cdot)e^{-|\cdot|}\|_{L^{\infty}}\|m^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}}, \tag{3.7}\] \[\|P_{3}\|_{L^{\infty}} \leq\|e^{-|\cdot|}\|_{L^{\infty}}\|m_{x}^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}},\quad\|P_{3x}\|_{L^{\infty}}\leq\|sgn(\cdot)e^{-|\cdot|}\|_{L^{\infty}}\|m_{x}^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}}. \tag{3.8}\] From (3.6)-(3.8), we can deduce that \[\|u\|_{L^{\infty}}\leq C_{\lambda}\|m\|_{H^{1}}\triangleq C_{\infty}. \tag{3.9}\] Similarly, \[\|P_{5}\|_{L^{\infty}}\leq\|e^{-|\cdot|}\|_{L^{1}}\|P_{3}\|_{L^{\infty}}\leq C\|m\|_{H^{1}},\quad\|P_{5x}\|_{L^{\infty}}\leq\|sgn(\cdot)e^{-|\cdot|}\|_{L^{1}}\|P_{3}\|_{L^{\infty}}\leq C\|m\|_{H^{1}}. \tag{3.10}\] To estimate \(\|P_{4}\|_{L^{\infty}}\), we can see \[\|P_{4}\|_{L^{\infty}}=\|p*H\|_{L^{\infty}}\leq\|e^{-|\cdot|}\|_{L^{\infty}}\|um_{x}\|_{L^{1}}+\|e^{-|\cdot|}\|_{L^{1}}\|um\|_{L^{\infty}}. \tag{3.11}\] Then the main difficulty that confronts us is the estimation of the term \(\|um_{x}\|_{L^{1}}\). Firstly, by Young's inequality, \[\|m_{x}P_{3}\|_{L^{1}} \leq C\|(1+\frac{m_{x}^{2}}{2})G^{-1}m_{x}^{2}\|_{L^{1}}\leq C(\|G^{-1}m_{x}^{2}\|_{L^{1}}+\|\frac{m_{x}^{2}}{2}G^{-1}m_{x}^{2}\|_{L^{1}})\] \[\leq C\|e^{-|\cdot|}\|_{L^{\infty}}\|m_{x}^{2}\|_{L^{1}}+\|\frac{m_{x}^{2}}{2}\|_{L^{1}}\|G^{-1}m_{x}^{2}\|_{L^{\infty}}\] \[\leq C(\|m_{x}^{2}\|_{L^{1}}+\|m_{x}^{2}\|_{L^{1}}\|e^{-|\cdot|}\|_{L^{\infty}}\|m_{x}^{2}\|_{L^{1}}) \tag{3.12}\] \[\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2}).\] Similarly we have \[\|m_{x}P_{2}\|_{L^{1}},\|m_{x}P_{2x}\|_{L^{1}},\|m_{x}P_{1}\|_{L^{1}},\|m_{x}P_{1x}\|_{L^{1}}\leq C\|m\|_{H^{1}}^{2}. \tag{3.13}\] Then one obtains \[\|um_{x}\|_{L^{1}}\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2}). \tag{3.14}\] Using the same method, we can deduce that \[\|P_{4}\|_{L^{\infty}}\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2}),\quad\|P_{4x}\|_{L^{\infty}}\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2}). \tag{3.15}\] Next, by Young's inequality, we have \[\|P_{1}\|_{L^{1}},\ \|P_{2}\|_{L^{1}},\ \|P_{3}\|_{L^{1}},\ \|P_{3x}\|_{L^{1}},\ \|P_{5}\|_{L^{1}},\ \|P_{5x}\|_{L^{1}}\leq C\|m\|_{H^{1}}, \tag{3.16}\] and \[\|P_{2}\|_{L^{2}}\leq C\|e^{-|\cdot|}\|_{L^{2}}\|m^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}}, \tag{3.17}\] \[\|P_{2x}\|_{L^{2}}\leq C\|sgn(\cdot)e^{-|\cdot|}\|_{L^{2}}\|m^{2}\|_{L^{1}}\leq C\|m\|_{H^{1}}, \tag{3.18}\] \[\|P_{3}\|_{L^{2}}\leq C\|m\|_{H^{1}}. \tag{3.19}\] From (3.17)-(3.19), one obtains \[\|u\|_{L^{2}}\leq C\|m\|_{H^{1}}. \tag{3.20}\] Similarly, \[\|P_{4}\|_{L^{1}},\|P_{4x}\|_{L^{1}}\leq C\|H\|_{L^{1}}\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2}). \tag{3.21}\] By (3.16)-(3.21), we can deduce that there exists a constant \(C\) depending on \(\lambda\) such that \[\|F\|_{L^{1}}\leq C(\|m\|_{H^{1}}+\|m\|_{H^{1}}^{2})\triangleq C_{S}.
\tag{3.22}\] Assuming that \(t>\tau\) and \(y=x(\tau,\beta)\), we obtain \[\mu_{(t)}(]-\infty,y-C_{\infty}(t-\tau)[) =\mu_{(\tau)}((-\infty,y))-\mu_{(\tau)}((-\infty,y))+\mu_{(\tau)}((-\infty,x(t,\beta)))\] \[\quad+\mu_{(t)}((-\infty,y))-\mu_{(t)}((-\infty,y))-\mu_{(t)}((y-C_{\infty}(t-\tau),y))\] \[\leq\mu_{(\tau)}((-\infty,y))+\mu_{(\tau)}((-\infty,x(t,\beta)))-\mu_{(\tau)}(]-\infty,y[)\] \[\quad+\mu_{(t)}((x(t,\beta),y))-\mu_{(t)}((y-C_{\infty}(t-\tau),y)) \tag{3.23}\] \[\leq\mu_{(\tau)}((-\infty,y))+\int_{\tau}^{t}\|F\|_{L^{1}}dt^{\prime}\leq\mu_{(\tau)}((-\infty,y))+C_{S}(t-\tau).\] Let \(y^{-}(t)\triangleq y-(C_{\infty}+C_{S})(t-\tau)\); we get \[y^{-}(t)+\mu_{(t)}(]-\infty,y^{-}(t)])\leq y-(C_{\infty}+C_{S})(t-\tau)+\mu_{(\tau)}(]-\infty,y[)+C_{S}(t-\tau)\] \[\leq y+\mu_{(\tau)}(]-\infty,y[)\leq\beta, \tag{3.24}\] which implies \[x(t,\beta)\geq y^{-}(t)=y-(C_{\infty}+C_{S})(t-\tau).\] Likewise, if we define \(y^{+}(t)\triangleq y+(C_{\infty}+C_{S})(t-\tau)\), we can deduce that \[x(t,\beta)\leq y^{+}(t)=y+(C_{\infty}+C_{S})(t-\tau).\] **Lemma 3.2**.: _Let \(m(t,x)\in H^{1}(\mathbb{R})\) be a conservative solution of the Cauchy problem (2.2). Then, for any \(\bar{y}\in\mathbb{R}\), there exists a unique Lipschitz continuous map \(t\mapsto x(t)\) which satisfies both (2.8) and (2.1). Moreover, for any \(0\leq t\leq\tau\) one has_ \[m(t,x(t))-m(\tau,x(\tau))=-\int_{\tau}^{t}F(s,x(s))ds. \tag{3.25}\] Proof.: 1. Using the adapted coordinates \((t,\beta)\) as in (2.8), we assume that \(x(t)\) is the characteristic starting at \(\bar{y}\), in the form \(t\mapsto x(t)=x(t,\beta(t))\), where \(\beta(t)\) is a map to be determined. Summing (2.8) and (2.1) and integrating w.r.t. time, we obtain (3.26) \[x(t)+\int_{-\infty}^{x(t)}m_{x}^{2}(t,x)dx=\bar{y}+\int_{-\infty}^{\bar{y}}m_{0,x}^{2}dx+\int_{0}^{t}\Big{(}u(s,x(s))+\int_{-\infty}^{x(s)}Edx\Big{)}ds.\] Introducing the function (3.27) \[G(t,\beta)\triangleq\int_{-\infty}^{x(t,\beta)}(u_{x}+E)dx\] and the constant (3.28) \[\bar{\beta}=\bar{y}+\int_{-\infty}^{\bar{y}}m_{0,x}^{2}dx,\] we can rewrite the equation (3.26) in the form (3.29) \[\beta(t)=\bar{\beta}+\int_{0}^{t}G(s,\beta(s))ds.\] 2. Next, we claim that for each fixed \(t\geq 0\), the function \(\beta\mapsto G(t,\beta)\) defined at (3.27) is uniformly bounded and absolutely continuous. Moreover, (3.30) \[G_{\beta}=[u_{x}+E]x_{\beta}=\frac{u_{x}+E}{1+m_{x}^{2}}\in[-C,C]\] for some constant \(C\) depending only on the norm of \(m\) and \(\lambda\).
To prove the claim, we notice that (3.31) \[\|G_{\beta}\|_{L^{\infty}}=\|\frac{u_{x}+E}{1+m_{x}^{2}}\|_{L^{\infty}}\leq\|\frac{u_{x}}{1+m_{x}^{2}}\|_{L^{\infty}}+\|\frac{E}{1+m_{x}^{2}}\|_{L^{\infty}}.\] By (3.6)-(3.8), we can deduce that (3.32) \[\|\frac{u_{x}}{1+m_{x}^{2}}\|_{L^{\infty}}\leq C.\] Using the fact that (3.33) \[\frac{m_{x}}{1+m_{x}^{2}},\frac{m_{x}^{2}}{1+m_{x}^{2}}\leq 1,\] it is not hard to check that for any \(i=1,2,3,4,5\), the following inequalities hold (3.34) \[\|\frac{m_{x}P_{i}}{1+m_{x}^{2}}\|_{L^{\infty}},\|\frac{m_{x}P_{ix}}{1+m_{x}^{2}}\|_{L^{\infty}}\leq C,\] and by (3.6)-(3.8) and (3.13), we can deduce that \[\|\frac{mu_{x}m_{x}}{1+m_{x}^{2}}\|_{L^{\infty}} \leq C\|m\|_{L^{\infty}}\|\frac{m_{x}^{2}-mm_{x}+\frac{1}{2\lambda}(P_{2x}+P_{3}-m^{2}-P_{2})m_{x}}{1+m_{x}^{2}}\|_{L^{\infty}}\] \[\leq C\|m\|_{L^{\infty}}(\|\frac{m_{x}^{2}}{1+m_{x}^{2}}\|_{L^{\infty}}+\|m\|_{L^{\infty}}\|\frac{m_{x}}{1+m_{x}^{2}}\|_{L^{\infty}}\] \[\quad+\frac{1}{2\lambda}\|P_{2x}+P_{3}-m^{2}-P_{2}\|_{L^{\infty}}\|\frac{m_{x}}{1+m_{x}^{2}}\|_{L^{\infty}})\] (3.35) \[\leq C.\] Moreover, one can derive that \[\|\frac{m_{x}(P_{5x}+P_{5xx})}{1+m_{x}^{2}}\|_{L^{\infty}} =\|\frac{m_{x}(\partial_{x}^{2}+\partial_{x})G^{-1}G^{-1}m_{x}^{2}}{1+m_{x}^{2}}\|_{L^{\infty}}\] \[\leq C\|G^{-1}m_{x}^{2}+G^{-1}G^{-1}m_{x}^{2}+\partial_{x}G^{-1}G^{-1}m_{x}^{2}\|_{L^{\infty}}\] (3.36) \[\leq C\|m_{x}^{2}\|_{L^{1}}\leq C,\] and \[\|\frac{Hm_{x}}{1+m_{x}^{2}}\|_{L^{\infty}} \leq\|\frac{um_{x}^{2}}{1+m_{x}^{2}}\|_{L^{\infty}}+\|\frac{umm_{x}}{1+m_{x}^{2}}\|_{L^{\infty}}\] (3.37) \[\leq C(\|u\|_{L^{\infty}}+\|u\|_{L^{\infty}}\|m\|_{L^{\infty}}).\] Combining (3.34)-(3.37), we conclude that (3.38) \[\|\frac{E}{1+m_{x}^{2}}\|_{L^{\infty}}\leq C,\] which completes the proof of our claim. Hence the function \(G\) in (3.27) is uniformly Lipschitz continuous w.r.t. \(\beta\). 3. Thanks to the Lipschitz continuity of the function \(G\), the existence of a unique solution to the integral equation (3.29) can be proved by a standard fixed point argument. Namely, consider the Banach space of all continuous functions \(\beta:\mathbb{R}_{+}\mapsto\mathbb{R}\) with weighted norm \[\|\beta\|_{*}\triangleq\sup_{t\geq 0}e^{-2Ct}|\beta(t)|.\] On this space, we claim that the Picard map (3.39) \[\mathscr{P}\beta(t)\triangleq\bar{\beta}+\int_{0}^{t}G(\tau,\beta(\tau))d\tau\] is a strict contraction. Indeed, assume \(\|\beta-\tilde{\beta}\|_{*}=\delta>0\). This implies (3.40) \[|\beta(\tau)-\tilde{\beta}(\tau)|\leq\delta e^{2C\tau}.\] Hence \[|\mathscr{P}\beta(t)-\mathscr{P}\tilde{\beta}(t)| =|\int_{0}^{t}(G(\tau,\beta(\tau))-G(\tau,\tilde{\beta}(\tau)))d\tau|\] (3.41) \[\leq C\int_{0}^{t}|\beta(\tau)-\tilde{\beta}(\tau)|d\tau\leq\int_{0}^{t}C\delta e^{2C\tau}d\tau\leq\frac{\delta}{2}e^{2Ct}.\] Then we conclude \(\|\mathscr{P}\beta-\mathscr{P}\tilde{\beta}\|_{*}\leq\frac{\delta}{2}\). The contraction mapping principle guarantees that (3.29) has a unique solution on \([0,T]\) for every \(T>0\); thanks to the arbitrariness of \(T\), we infer that the integral equation (3.29) has a unique solution on \(\mathbb{R}_{+}\). 4. By the previous construction, the map \(t\mapsto x(t)\triangleq x(t,\beta(t))\) defined by (2.10) provides the unique solution to (3.29). Due to the Lipschitz continuity of \(\beta(t)\) and \(x(t)=x(t,\beta(t))\), \(\beta(t)\) and \(x(t)\) are differentiable almost everywhere, so we only have to consider the times where \(x(t)\) is differentiable. It suffices to show that (2.1) holds at almost every time.
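The contraction argument above is constructive: iterating the Picard map (3.39) converges to the unique solution of (3.29). A toy numerical illustration of this scheme follows (our own sketch; the Lipschitz \(G\) used below is a placeholder, not the \(G\) of (3.27)).

```python
import numpy as np

# Picard iteration for beta(t) = beta0 + int_0^t G(s, beta(s)) ds, as in (3.39).
def picard(G, beta0, T=1.0, n=400, iters=50):
    t = np.linspace(0.0, T, n)
    beta = np.full(n, beta0)                        # initial guess beta(t) = beta0
    for _ in range(iters):
        integrand = G(t, beta)
        # cumulative trapezoid rule for int_0^t G(s, beta(s)) ds
        beta = beta0 + np.concatenate(([0.0],
              np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return t, beta

# A placeholder G that is Lipschitz in its second argument:
t, beta = picard(lambda t, b: np.sin(b) + t, beta0=0.5)
```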
Assume, on the contrary, that \(\dot{x}(\tau)\neq u(\tau,x(\tau))\). Without loss of generality, let (3.42) \[\dot{x}(\tau)=u(\tau,x(\tau))+2\epsilon_{0}\] for some \(\epsilon_{0}>0\). The case \(\epsilon_{0}<0\) is entirely similar. To derive a contradiction, we observe that for all \(t\in(\tau,\tau+\delta)\), with \(\delta>0\) small enough, one has (3.43) \[x^{+}(t)\triangleq x(\tau)+(t-\tau)[u(\tau,x(\tau))+\epsilon_{0}]<x(t).\] We also observe that, by an approximation argument, (2.11) remains valid for any Lipschitz continuous test function \(\phi\in H^{1}(\mathbb{R})\) with compact support. For any \(\epsilon>0\) sufficiently small, we define the following functions (3.44) \[\varrho^{\varepsilon}(s,x)=\left\{\begin{array}{ccc}0,&&x\leq-\varepsilon^{-1},\\ x+\varepsilon^{-1},&&-\varepsilon^{-1}\leq x\leq 1-\varepsilon^{-1},\\ 1,&&1-\varepsilon^{-1}\leq x\leq x^{+}(s),\\ 1-\varepsilon^{-1}(x-x^{+}(s)),&&x^{+}(s)\leq x\leq x^{+}(s)+\varepsilon,\\ 0,&&x\geq x^{+}(s)+\varepsilon,\end{array}\right.\] (3.45) \[\chi^{\varepsilon}(s)=\left\{\begin{array}{ccc}0,&&s\leq\tau-\varepsilon,\\ \varepsilon^{-1}(s-\tau+\varepsilon),&&\tau-\varepsilon\leq s\leq\tau,\\ 1,&&\tau\leq s\leq t,\\ 1-\varepsilon^{-1}(s-t),&&t\leq s\leq t+\varepsilon,\\ 0,&&s\geq t+\varepsilon.\end{array}\right.\] Define \[\phi^{\varepsilon}(s,x)=\min\{\varrho^{\varepsilon}(s,x),\chi^{\varepsilon}(s)\}.\] Using \(\phi^{\varepsilon}\) as a test function in (2.11), we obtain \[\int_{0}^{\infty}\int[m_{x}^{2}\phi_{t}^{\varepsilon}+um_{x}^{2}\phi_{x}^{\varepsilon}+2E\phi^{\varepsilon}]dxdt+\int m_{0,x}^{2}\phi^{\varepsilon}(0,x)dx=0. \tag{3.46}\] If \(t\) is sufficiently close to \(\tau\), we have \[\lim_{\varepsilon\to 0}\int_{\tau}^{t}\int_{x^{+}(s)-\varepsilon}^{x^{+}(s)+\varepsilon}[m_{x}^{2}\phi_{t}^{\varepsilon}+(um_{x}^{2})\phi_{x}^{\varepsilon}+2E\phi^{\varepsilon}]dxds\geq 0.\] Indeed, using the fact that \(u(s,x)<u(\tau,x(\tau))+\epsilon_{0}\) for \(s\) close to \(\tau\) and that \(\phi_{x}^{\varepsilon}\leq 0\), for any \(s\in[\tau+\varepsilon,t-\varepsilon]\) we infer that (3.47) \[0=\phi_{t}^{\varepsilon}+[u(\tau,x(\tau))+\epsilon_{0}]\phi_{x}^{\varepsilon}\leq\phi_{t}^{\varepsilon}+u(s,x)\phi_{x}^{\varepsilon}.\] Noticing that the family of measures \(\mu_{(t)}\) depends continuously on \(t\) in the topology of weak convergence, taking the limit of (3.46) as \(\varepsilon\to 0\), we obtain \[0= \int_{-\infty}^{x(\tau)}m_{x}^{2}(\tau,y)dy-\int_{-\infty}^{x^{+}(t)}m_{x}^{2}(t,x)dx+\int_{\tau}^{t}\int_{-\infty}^{x^{+}(s)}2E(s,x)dxds\] \[+\lim_{\varepsilon\to 0}\int_{\tau}^{t}\int_{x^{+}(s)-\varepsilon}^{x^{+}(s)+\varepsilon}m_{x}^{2}(\phi_{t}^{\varepsilon}+u\phi_{x}^{\varepsilon})(s,x)dxds \tag{3.48}\] \[\geq \int_{-\infty}^{x(\tau)}m_{x}^{2}(\tau,x)dx-\int_{-\infty}^{x^{+}(t)}m_{x}^{2}(t,x)dx+\int_{\tau}^{t}\int_{-\infty}^{x^{+}(s)}2E(s,x)dxds.\] In turn, (3.48) implies (3.49) \[\int_{-\infty}^{x^{+}(t)}m_{x}^{2}(t,x)dx \geq\int_{-\infty}^{x(\tau)}m_{x}^{2}(\tau,x)dx+\int_{\tau}^{t}\int_{-\infty}^{x^{+}(s)}2E(s,x)dxds\] (3.50) \[=\int_{-\infty}^{x(\tau)}m_{x}^{2}(\tau,x)dx+\int_{\tau}^{t}\int_{-\infty}^{x(s)}2E(s,x)dxds+o_{1}(t-\tau).\] Notice that the last term is a higher order infinitesimal, satisfying \(\frac{o_{1}(t-\tau)}{t-\tau}\to 0\) as \(t\to\tau\). Indeed, (3.51) \[|o_{1}(t-\tau)|=\Big{|}\int_{\tau}^{t}\int_{x^{+}(s)}^{x(s)}2E(s,x)dxds\Big{|}.\] For simplicity, we only show the estimate of the term \((mu)_{x}m_{x}\) in \(E\).
Firstly, we have \[\int_{\tau}^{t}\int_{x^{+}(s)}^{x(s)}(um_{x}^{2})dxds \leq\int_{\tau}^{t}\|u\|_{L^{\infty}}\int_{x^{+}(s)}^{x(s)}m_{x}^{2}dxds\] (3.52) \[\leq\|u\|_{H^{1}}\int_{\tau}^{t}\|m_{x}^{2}\|_{L^{1}}(x(s)-x^{+}(s))ds\leq C(t-\tau)^{2}.\] Similarly, \[\int_{\tau}^{t}\int_{x^{+}(s)}^{x(s)}(-mu_{x}m_{x})dxds =\int_{\tau}^{t}\int_{x^{+}(s)}^{x(s)}-m[m_{x}-m+\frac{1}{2\lambda}(G^{-1}m_{x}^{2}+G^{-1}m^{2}-\partial_{x}G^{-1}m^{2})]m_{x}dxds\] \[\leq\|m\|_{H^{1}}\int_{\tau}^{t}\|m_{x}^{2}\|_{L^{1}}(x(s)-x^{+}(s))ds\] \[\quad+(\|m\|_{H^{1}}^{2}+\|m\|_{H^{1}}^{3})\int_{\tau}^{t}\|m_{x}\|_{L^{2}}(x(s)-x^{+}(s))^{\frac{1}{2}}ds\] (3.53) \[\leq C(t-\tau)^{\frac{3}{2}}+C(t-\tau)^{2}.\] Thus we obtain that (3.54) \[|o_{1}(t-\tau)|\leq C((t-\tau)^{\frac{3}{2}}+(t-\tau)^{2}).\] For \(t\) sufficiently close to \(\tau\), we have \[\beta(t)= \beta(\tau)+(t-\tau)[u(\tau,x(\tau))+\int_{-\infty}^{x(\tau)}Fdx]+o_{2}(t-\tau)\] \[= x(t)+\int_{-\infty}^{x(t)}m_{x}^{2}dx=x(t)+\mu_{(t)}(-\infty,x(t))\] \[> x(\tau)+(t-\tau)[u(\tau,x(\tau))+\epsilon_{0}]+\mu_{(t)}(-\infty,x^{+}(t))\] (3.55) \[\geq x(\tau)+(t-\tau)[u(\tau,x(\tau))+\epsilon_{0}]+\mu_{(\tau)}(-\infty,x(\tau))+\int_{\tau}^{t}\int_{-\infty}^{x(s)}Fdxds+o_{1}(t-\tau).\] We can deduce that \[\beta(\tau)+(t-\tau)[u(\tau,x(\tau))+\int_{-\infty}^{x(\tau)}Fdx]+o_{2}(t-\tau)\] (3.56) \[\geq [x(\tau)+\int_{-\infty}^{x(\tau)}m_{x}^{2}dx]+(t-\tau)[u(\tau,x(\tau))+\epsilon_{0}]+\int_{\tau}^{t}\int_{-\infty}^{x(s)}Fdxds+o_{1}(t-\tau).\] Subtracting common terms, dividing both sides by \(t-\tau\) and letting \(t\to\tau\), we achieve a contradiction. Therefore, (2.1) must hold. 5. We now prove (3.25). For every test function \(\phi\in C_{c}^{\infty}(\mathbb{R}^{2})\), one has (3.57) \[\int_{0}^{\infty}\int[m\phi_{t}-um_{x}\phi+F\phi]dxdt+\int m_{0}\phi(0,x)dx=0.\] Given any \(\psi\in C_{c}^{\infty}\), let \(\phi=\psi_{x}\). Since the map \(x\mapsto m(t,x)\) is absolutely continuous, we can integrate by parts w.r.t. \(x\) and obtain (3.58) \[\int_{0}^{\infty}\int[m_{x}\psi_{t}+um_{x}\psi_{x}+F\psi_{x}]dxdt+\int\bar{m}\psi dx=0.\] For any \(\epsilon>0\) sufficiently small, we define the following function \[\varrho^{\varepsilon}(s,x)=\left\{\begin{array}{ccc}0,&&x\leq-\varepsilon^{-1},\\ x+\varepsilon^{-1},&&-\varepsilon^{-1}\leq x\leq 1-\varepsilon^{-1},\\ 1,&&1-\varepsilon^{-1}\leq x\leq x(s),\\ 1-\varepsilon^{-1}(x-x(s)),&&x(s)\leq x\leq x(s)+\varepsilon,\\ 0,&&x\geq x(s)+\varepsilon,\end{array}\right.\] and \[\psi^{\varepsilon}(s,x)=\min\{\varrho^{\varepsilon}(s,x),\chi^{\varepsilon}(s)\}.\] We now use the test function \(\phi=\psi^{\epsilon}\) in (3.58) and let \(\epsilon\to 0\).
(3.59) \[\int_{-\infty}^{x(t)}m_{x}(t,x)dx=\int_{-\infty}^{x(\tau)}m_{x}(\tau,x)dx-\int_{\tau}^{t}F(s,x(s))ds+\lim_{\epsilon\to 0}\int_{\tau-\epsilon}^{t+\epsilon}\int_{x(s)}^{x(s)+\epsilon}m_{x}(\psi_{t}^{\epsilon}+u\psi_{x}^{\epsilon})dxds.\] Hence, it remains to show that \[\lim_{\varepsilon\to 0}\int_{\tau-\varepsilon}^{t+\varepsilon}\int_{x(s)}^{x(s)+\varepsilon}m_{x}(\psi_{t}^{\varepsilon}+u\psi_{x}^{\varepsilon})dyds \tag{3.60}\] \[=\lim_{\varepsilon\to 0}\Big{(}\int_{\tau-\varepsilon}^{\tau}+\int_{\tau}^{t}+\int_{t}^{t+\varepsilon}\Big{)}\int_{x(s)}^{x(s)+\varepsilon}m_{x}(\psi_{t}^{\varepsilon}+u\psi_{x}^{\varepsilon})dyds=0.\] Taking advantage of Cauchy's inequality and \(m_{x}\in L^{2}\), one has \[|\int_{\tau}^{t}\int_{x(s)}^{x(s)+\varepsilon}m_{x}[\psi_{t}^{\varepsilon}+u\psi_{x}^{\varepsilon}]dxds|\] \[\leq\int_{\tau}^{t}\Big{(}\int_{x(s)}^{x(s)+\varepsilon}|m_{x}|^{2}dy\Big{)}^{\frac{1}{2}}\Big{(}\int_{x(s)}^{x(s)+\varepsilon}[\psi_{t}^{\varepsilon}+u\psi_{x}^{\varepsilon}]^{2}dx\Big{)}^{\frac{1}{2}}ds.\] For each \(\epsilon>0\), consider the function \[\eta_{\epsilon}(s)\triangleq(\sup_{x\in\mathbb{R}}\int_{x}^{x+\epsilon}m_{x}^{2}(s,y)dy)^{\frac{1}{2}}. \tag{3.61}\] Observe that all functions \(\eta_{\epsilon}\) are uniformly bounded. By the dominated convergence theorem, \[\lim_{\epsilon\to 0}\int_{\tau}^{t}\Big{(}\int_{x(s)}^{x(s)+\epsilon}m_{x}^{2}(s,y)dy\Big{)}^{\frac{1}{2}}ds\leq\lim_{\epsilon\to 0}\int_{\tau}^{t}\eta_{\epsilon}(s)ds=0. \tag{3.62}\] For \(s\in[\tau,t]\), the test function satisfies \[\psi_{x}^{\epsilon}(s,y)=-\epsilon^{-1}, \tag{3.63}\] and \[\psi_{t}^{\epsilon}(s,y)+u(s,x(s))\psi_{x}^{\epsilon}(s,y)=0,\qquad x(s)<y<x(s)+\epsilon. \tag{3.64}\] This implies \[\int_{x(s)}^{x(s)+\epsilon}|\psi_{t}^{\epsilon}(s,y)+u(s,y)\psi_{x}^{\epsilon}(s,y)|^{2}dy=\epsilon^{-2}\int_{x(s)}^{x(s)+\epsilon}|u(s,y)-u(s,x(s))|^{2}dy\] \[\leq\epsilon^{-1}(\max_{x(s)\leq y\leq x(s)+\epsilon}|u(s,y)-u(s,x(s))|)^{2}\leq\epsilon^{-1}(\int_{x(s)}^{x(s)+\epsilon}|u_{x}(s,y)|dy)^{2}\leq\epsilon^{-1}(\epsilon^{\frac{1}{2}}\|u_{x}(s)\|_{L^{2}})^{2} \tag{3.65}\] \[\leq C\|m\|_{H^{1}}^{2}.\] Combining (3.62) and (3.65), we estimate the remaining integral \[(\int_{\tau-\epsilon}^{\tau}+\int_{t}^{t+\epsilon})\int_{x(s)}^{x(s)+\epsilon}m_{x}(\psi_{t}^{\epsilon}+u\psi_{x}^{\epsilon})dxds\] \[\leq(\int_{\tau-\epsilon}^{\tau}+\int_{t}^{t+\epsilon})(\int_{x(s)}^{x(s)+\epsilon}m_{x}^{2}dy)^{\frac{1}{2}}(\int_{x(s)}^{x(s)+\epsilon}(\psi_{t}^{\epsilon}+u\psi_{x}^{\epsilon})^{2}dy)^{\frac{1}{2}}ds \tag{3.66}\] \[\leq 2\epsilon\|m\|_{H^{1}}(\int_{x(s)}^{x(s)+\epsilon}\epsilon^{-2}Cdy)^{\frac{1}{2}}\leq C\epsilon^{\frac{1}{2}}\to 0\] as \(\epsilon\to 0\). The above analysis shows that (3.67) \[\lim_{\epsilon\to 0}\int_{\tau-\epsilon}^{t+\epsilon}\int_{x(s)}^{x(s)+\epsilon}m_{x}(\psi_{t}^{\epsilon}+u\psi_{x}^{\epsilon})dxds=0.\] Therefore we arrive at (3.25). 6. Using the uniqueness of \(\beta\), we can prove the uniqueness of \(x(t)\). **Lemma 3.3**.: _Let \(m=m(t,x)\) be a conservative solution of (2.2)._
_Then the map \((t,\beta)\mapsto m(t,\beta)\triangleq m(t,x(t,\beta))\) is Lipschitz continuous, with a constant depending only on the norm \(\|m_{0}\|_{H^{1}}\)._ Proof.: Combining (3.4) and (3.25), with \(\beta(\tau)=\bar{\beta}\), we obtain \[|m(t,x(t,\bar{\beta}))-m(\tau,x(\tau,\bar{\beta}))| \leq|m(t,x(t,\bar{\beta}))-m(t,x(t,\beta(t)))|+|m(t,x(t,\beta(t)))-m(\tau,x(\tau,\bar{\beta}))| \tag{3.68}\] \[\leq\frac{1}{2}|\beta(t)-\bar{\beta}|+(t-\tau)\|F\|_{L^{\infty}}\leq(t-\tau)(\|G\|_{L^{\infty}}+\|F\|_{L^{\infty}}).\] ## 4 Proof of Theorem 2.3 We now construct a good characteristic, determine how the gradient \(m_{x}\) of a conservative solution varies along it, and then complete the proof of uniqueness. Proof.: **Step 1.** Lemmas 3.1-3.3 ensure that the maps \((t,\beta)\mapsto(x,m)(t,\beta)\) and \(\beta\mapsto F(t,\beta)\) are Lipschitz continuous. Thanks to Rademacher's theorem, the partial derivatives \(x_{t},x_{\beta},m_{t},m_{\beta}\) and \(F_{\beta}\) exist almost everywhere. Moreover, \(x(t,\beta)\) is the unique solution to (2.1), and the following holds. **(GC)** For \(a.e.\) \(t\geq 0\), the point \((t,\beta(t,\bar{\beta}))\) is a Lebesgue point for the partial derivatives \(x_{t},x_{\beta},m_{t},m_{\beta}\) and \(F_{\beta}\). Moreover, \(x_{\beta}(t,\beta)>0\) for \(a.e.\) \(t\geq 0\). If **(GC)** holds, then \(t\to x(t,\bar{\beta})\) is called a good characteristic. **Step 2.** We now derive an O.D.E. describing how the quantities \(m_{\beta}\) and \(x_{\beta}\) vary along a good characteristic. Suppose that \(t,\tau\notin\mathcal{N}\) and that \(t\mapsto x(t,\beta(t;\tau,\bar{\beta}))\) is a good characteristic; we then have \[x(t,\beta(t;\tau,\bar{\beta}))=x(\tau,\bar{\beta})+\int_{\tau}^{t}u(s,x(s,\beta(s;\tau,\bar{\beta})))ds,\] where \(\mathcal{N}\) denotes a null set with meas\((\mathcal{N})=0\) such that for every \(t\notin\mathcal{N}\) the measure \(\mu_{(t)}\) is absolutely continuous and has density \(m_{x}^{2}(t,\cdot)\). Differentiating the above equation with respect to \(\bar{\beta}\), we deduce that \[x_{\beta}\frac{\partial}{\partial\bar{\beta}}\beta(t;\tau,\bar{\beta})=x_{\beta}(\tau,\bar{\beta})+\int_{\tau}^{t}u_{\beta}(s,\beta(s;\tau,\bar{\beta}))\frac{\partial}{\partial\bar{\beta}}\beta(s;\tau,\bar{\beta})ds. \tag{4.1}\] Likewise, we have \[m_{\beta}\frac{\partial}{\partial\bar{\beta}}\beta(t;\tau,\bar{\beta})=m_{\beta}(\tau,\bar{\beta})+\int_{\tau}^{t}F_{\beta}(s,\beta(s;\tau,\bar{\beta}))\frac{\partial}{\partial\bar{\beta}}\beta(s;\tau,\bar{\beta})ds. \tag{4.2}\] From (4.1)-(4.2), we end up with \[\begin{cases}\dfrac{d}{dt}x_{\beta}+G_{\beta}x_{\beta}=u_{\beta},\\ \dfrac{d}{dt}m_{\beta}+G_{\beta}m_{\beta}=F_{\beta}.\end{cases} \tag{4.3}\] **Step 3.** We now return to the original coordinates \((t,x)\) and derive an evolution equation for the partial derivative \(m_{x}\) along a good characteristic curve. Fix a point \((t,x)\) with \(t\notin\mathcal{N}\). Suppose that \(\bar{x}\) is a Lebesgue point for the map \(x\to m_{x}(t,x)\), that \(\bar{\beta}\) satisfies \(\bar{x}=x(t,\bar{\beta})\), and that \(t\to\beta(t;\tau,\bar{\beta})\) is a good characteristic, so that **(GC)** holds. We observe that \[m_{x}^{2}(t,\bar{x})=\frac{1}{x_{\beta}(t,\bar{\beta})}-1\geq 0, \tag{4.4}\] which in particular implies that \(x_{\beta}(t,\bar{\beta})>0\).
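To see where (4.4) comes from, one can differentiate (2.9) with respect to \(\beta\) (a short formal computation we add for the reader's convenience, valid at points where \(m_{x}(t,\cdot)\) exists): \[x_{\beta}\big{(}1+m_{x}^{2}\big{)}=1\quad\Longrightarrow\quad x_{\beta}=\frac{1}{1+m_{x}^{2}}\in(0,1],\qquad m_{\beta}=m_{x}x_{\beta}=\frac{m_{x}}{1+m_{x}^{2}},\] so that \(m_{x}=m_{\beta}/x_{\beta}\) wherever \(x_{\beta}>0\), which is exactly the identity used below.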
Hence, the partial derivatives \(m_{x}\) and \(u_{x}\) can be calculated as shown below \[m_{x}(t,x(t,\beta(t;\tau,\bar{\beta})))=\frac{m_{\beta}(t,\beta(t;\tau,\bar{\beta}))}{x_{\beta}(t,\beta(t;\tau,\bar{\beta}))},\qquad u_{x}(t,x(t,\beta(t;\tau,\bar{\beta})))=\frac{u_{\beta}(t,\beta(t;\tau,\bar{\beta}))}{x_{\beta}(t,\beta(t;\tau,\bar{\beta}))}.\] Applying (4.3) to describe the evolution of \(m_{\beta}\) and \(x_{\beta}\), we infer that the map \(t\to m_{x}(t,x(t,\beta(t,\bar{\beta})))\) is absolutely continuous. It follows that \[\begin{split}\dfrac{d}{dt}m_{x}(t,x(t,\beta(t,\bar{\beta})))&=\dfrac{d(\frac{m_{\beta}}{x_{\beta}})}{dt}=\dfrac{x_{\beta}\frac{d}{dt}m_{\beta}-m_{\beta}\frac{d}{dt}x_{\beta}}{{x_{\beta}}^{2}}=\dfrac{x_{\beta}(F_{\beta}-m_{\beta}G_{\beta})-m_{\beta}(u_{\beta}-G_{\beta}x_{\beta})}{{x_{\beta}}^{2}}\\ &=\dfrac{F_{\beta}x_{\beta}-u_{\beta}m_{\beta}}{x_{\beta}^{2}}=\dfrac{F_{\beta}}{x_{\beta}}-u_{x}m_{x}.\end{split}\] Hence, we conclude that as long as \(x_{\beta}\neq 0\), the map \(t\to m_{x}\) is absolutely continuous. **Step 4.** Consider the function \[v\triangleq\begin{cases}2\arctan m_{x}&if\quad 0<x_{\beta}\leq 1,\\ \pi&if\quad x_{\beta}=0.\end{cases} \tag{4.5}\] Observe that this implies \[x_{\beta}=\frac{1}{1+m_{x}^{2}}=\cos^{2}\frac{v}{2},\quad\frac{m_{x}}{1+m_{x}^{2}}=\frac{1}{2}\sin v,\quad\frac{m_{x}^{2}}{1+m_{x}^{2}}=\sin^{2}\frac{v}{2}, \tag{4.6}\] from which it follows that \[\begin{cases}\dfrac{d}{dt}\beta(t,\bar{\beta})=G,\\ \dfrac{d}{dt}x(t,\beta(t,\bar{\beta}))=u(t,\beta(t,\bar{\beta})),\\ \dfrac{d}{dt}m(t,\beta(t,\bar{\beta}))=F(t,\beta(t,\bar{\beta})),\\ \dfrac{d}{dt}v(t,\beta(t,\bar{\beta}))=2\cos^{2}\frac{v}{2}P-N\sin v-\sin^{2}\frac{v}{2},\end{cases} \tag{4.7}\] with \(N(t,\beta)=\big{(}-m-P_{1}-P_{1x}+\frac{1}{2\lambda}(-P_{2}+P_{3x}+P_{2x}-m^{2})\big{)}(t,\beta)\) and \(P=F+mN+\lambda u+\frac{m^{2}}{2}\). The functions \(P_{i}\) admit representations in terms of the variable \(\beta\), namely \[\left\{\begin{array}{l}P_{1}=-\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}m\cos^{2}\frac{v}{2}d\beta^{\prime},\\ P_{1x}=-\frac{1}{2}(\int_{\beta}^{+\infty}-\int_{-\infty}^{\beta})e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}m\cos^{2}\frac{v}{2}d\beta^{\prime},\\ P_{2}=-\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}m^{2}\cos^{2}\frac{v}{2}d\beta^{\prime},\\ P_{2x}=-\frac{1}{2}(\int_{\beta}^{+\infty}-\int_{-\infty}^{\beta})e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}m^{2}\cos^{2}\frac{v}{2}d\beta^{\prime},\\ P_{3}=-\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}\sin^{2}\frac{v}{2}d\beta^{\prime},\\ P_{3x}=-\frac{1}{2}(\int_{\beta}^{+\infty}-\int_{-\infty}^{\beta})e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}\sin^{2}\frac{v}{2}d\beta^{\prime},\\ P_{4}=-\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}\big{(}\frac{u}{2}\sin v-mu\cos^{2}\frac{v}{2}\big{)}d\beta^{\prime},\\ P_{4x}=-\frac{1}{2}(\int_{\beta}^{+\infty}-\int_{-\infty}^{\beta})e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}\big{(}\frac{u}{2}\sin v-mu\cos^{2}\frac{v}{2}\big{)}d\beta^{\prime},\\ P_{5}=-\frac{1}{2}\int_{-\infty}^{+\infty}e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}P_{3}\cos^{2}\frac{v}{2}d\beta^{\prime},\\ P_{5x}=-\frac{1}{2}(\int_{\beta}^{+\infty}-\int_{-\infty}^{
\beta})e^{-|\int_{\beta^{\prime}}^{\beta}\cos^{2}\frac{v}{2}ds|}P_{3}\cos^{2}\frac{v}{2}d\beta^{\prime}.\end{array}\right. \tag{4.8}\] For any \(\bar{\beta}\in\mathbb{R}\), we impose the following initial conditions: \[\begin{cases}\beta(0,\bar{\beta})=\bar{\beta},\\ x(0,\bar{\beta})+\int_{-\infty}^{x(0,\bar{\beta})}m_{0,x}^{2}(y)dy=\bar{\beta},\\ m(0,\bar{\beta})=m_{0}(x(0,\bar{\beta})),\\ v(0,\bar{\beta})=2\arctan m_{0,x}(x(0,\bar{\beta})).\end{cases} \tag{4.9}\] Since all coefficients are Lipschitz continuous, using the previous steps again, the system (4.7)-(4.9) has a unique global solution. **Step 5.** Let \(m\) and \(\tilde{m}\) be two conservative weak solutions of (2.2) with the same initial data \(\bar{m}\in H^{1}(\mathbb{R})\). For \(a.e.\ t\geq 0\), the corresponding Lipschitz continuous maps \(\beta\mapsto x(t,\beta),\beta\mapsto\tilde{x}(t,\beta)\) are strictly increasing. Hence they have continuous inverses, say \(x\mapsto\beta^{-1}(t,x),x\mapsto\tilde{\beta}^{-1}(t,x)\). By the uniqueness of solutions to the system (4.7)-(4.9), we deduce that \[x(t,\beta)=\tilde{x}(t,\beta),\quad m(t,x(t,\beta))=\tilde{m}(t,\tilde{x}(t,\beta)).\] Moreover, for \(a.e.\ t\geq 0\), we have \[m(t,x)=m(t,x(t,\beta))=\tilde{m}(t,\tilde{x}(t,\beta))=\tilde{m}(t,x).\] This finishes the proof of Theorem 2.3. **Acknowledgments.** This work was partially supported by the National Natural Science Foundation of China (No. 12171493). **Data Availability.** The data that support the findings of this study are available within the article and from the corresponding author upon reasonable request.
2303.02362
Four dimensional hypersurfaces with proper mean curvature vector field in pseudo-Riemannian space forms
In this paper, we study four dimensional hypersurface M^4_r with proper mean curvature vector field (i.e. \Delta\vec{H} is proportional to \vec{H}) in pseudo-Riemannian space form N^5_s(c), and show that it has constant mean curvature, and give the range of this constant. As an application, we get that biharmonic hypersurfaces in N^5_s(c) are minimal in some specific cases, which partially confirms B.-Y. Chen's conjecture.
Chao Yang, Jiancheng Liu, Li Du
2023-03-04T09:15:51Z
http://arxiv.org/abs/2303.02362v1
# Four dimensional hypersurfaces with proper mean curvature vector field in pseudo-Riemannian space forms ###### Abstract In this paper, we study four dimensional hypersurfaces \(M_{r}^{4}\) with proper mean curvature vector field (i.e. \(\Delta\vec{H}\) is proportional to \(\vec{H}\)) in pseudo-Riemannian space forms \(N_{s}^{5}(c)\), and show that they have constant mean curvature, and give the range of this constant. As an application, we get that biharmonic hypersurfaces in \(N_{s}^{5}(c)\) are minimal in some specific cases, which partially confirms B.-Y. Chen's conjecture. keywords: proper mean curvature vector field, pseudo-Riemannian space form, four dimensional hypersurfaces, mean curvature Msc: 53C50 ## 1 Introduction Let \(N_{s}^{n+1}(c)\) be a \((n+1)\)-dimensional pseudo-Riemannian space form with index \(0\leq s\leq n+1\) and constant sectional curvature \(c\). In particular, the pseudo-Riemannian space form \(N_{s}^{n+1}(0)\) is isometric to the \((n+1)\)-dimensional pseudo-Euclidean space \(\mathbb{E}_{s}^{n+1}\) with index \(s\). Let \(x:M_{r}^{n}\to N_{s}^{n+1}(c)\) be an isometric immersion of a pseudo-Riemannian hypersurface \(M_{r}^{n}\) into \(N_{s}^{n+1}(c)\). Denote by \(\vec{H}\) and \(\Delta\) the mean curvature vector field and the Laplace operator of \(M_{r}^{n}\). The hypersurface \(M_{r}^{n}\) is said to have proper mean curvature vector field if it satisfies the equation \[\Delta\vec{H}=\lambda\vec{H},\] for some real constant \(\lambda\). In particular, when \(\lambda=nc\), the hypersurface \(M_{r}^{n}\) is biharmonic. In 1988, B.-Y. Chen initiated the study of hypersurfaces \(M_{r}^{n}\) with proper mean curvature vector field in \(\mathbb{E}_{s}^{n+1}\) in [4], and proved that when \(n=2,s=0\), the surface \(M^{2}\) is minimal, or an open part of a circular cylinder. Then, A. Ferrandez and P. Lucus [10] classified such non-minimal surfaces for \(n=2,s=1\). For \(n=3\), a classification has not been obtained, but it was proved that the hypersurface \(M_{r}^{3}\) of \(\mathbb{E}_{s}^{4}\) (\(s=0,1,2\)) has constant mean curvature (\(s=0\) by F. Defever in [6] and T. Hasanis et al. in [18]; \(s=1,2\) by A. Arvanitoyeorgos et al. in [1, 2, 3]). Naturally, there is a conjecture in [3]: _any hypersurface having proper mean curvature vector field in pseudo-Euclidean space \(\mathbb{E}_{s}^{n+1}\) has constant mean curvature,_ which is closely related to the well-known B.-Y. Chen's conjecture about biharmonic hypersurfaces in [5]: _Any biharmonic hypersurface in \(\mathbb{E}_{s}^{n+1}\) is minimal._ In [11], Y. Fu, M.-C. Hong and X. Zhan illustrated the significance of solving Chen's conjecture for \(n=4\), by analogy with the famous Bernstein problem. In parallel with Chen's conjecture, it is thus necessary to settle the conjecture about hypersurfaces with proper mean curvature vector field for \(n=4\). In 2021, Y. Fu and X. Zhan gave an affirmative answer to this conjecture for \(n=4,s=0\) in [12]. However, when \(s>0\), the related research is difficult to carry out, since the shape operator of the hypersurface \(M_{r}^{4}\) is not necessarily diagonalizable, and the principal curvatures may not all be real. In 2022, under the assumption that the hypersurface has constant scalar curvature and a diagonalizable shape operator, L. Du and J. Ren obtained the same conclusion for \(n=4,s>0\) in [8]. Some papers ([7, 9, 14, 15, 16]) have also studied the conjecture for \(n>4\), but under rather strong additional restrictions.
In this paper, we overcome the difficulties caused by the non-diagonalizable shape operator and the imaginary principal curvatures, and show in Section 3 that a hypersurface \(M_{r}^{4}\) with proper mean curvature vector field in a pseudo-Riemannian space form \(N_{s}^{5}(c)\) has constant mean curvature (cf. Theorem 3.1). Thus, the above conjecture about hypersurfaces with proper mean curvature vector field is true for \(n=4\). Once we know the mean curvature of the hypersurface \(M_{r}^{4}\) is a constant, we continue in Section 4 to estimate that constant. When \(M_{r}^{4}\) has at most two distinct principal curvatures, we determine the value of the mean curvature \(H\) (cf. Theorems 4.1, 4.6 and 4.8), and find that this value depends on \(\varepsilon c\), \(\varepsilon\lambda\) and the multiplicities of the principal curvatures, where \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle\) and \(\vec{\xi}\) denotes a unit normal vector field to \(M_{r}^{4}\). Unfortunately, when the number of distinct principal curvatures of \(M_{r}^{4}\) is larger than 2, we do not obtain the value of the mean curvature. However, we give a range for the value of \(H\) when the principal curvatures of \(M_{r}^{4}\) are all real (cf. Theorem 4.12). As an application of these results, we can partially answer B.-Y. Chen's conjecture; the details are as follows: * A biharmonic hypersurface \(M_{r}^{4}\) in \(N_{s}^{5}(c)\) with two distinct principal curvatures, both imaginary, is minimal (cf. Corollary 4.11); * A biharmonic hypersurface \(M_{r}^{4}\) in \(N_{s}^{5}(c)\) satisfying \(c\varepsilon\leq 0\) and without imaginary principal curvatures is minimal (cf. Corollary 4.13). ## 2 Preliminaries ### The formulas of hypersurface \(M_{r}^{n}\) in \(N_{s}^{n+1}(c)\) Let \(N_{s}^{n+1}(c)\) be a pseudo-Riemannian space form with index \(s\) and constant sectional curvature \(c\). A non-zero vector \(X\) in \(N_{s}^{n+1}(c)\) is called _time-like_, _space-like_ or _light-like_, according to whether \(\langle X,X\rangle\) is negative, positive or zero. Let \(M_{r}^{n}\) be a nondegenerate hypersurface in \(N_{s}^{n+1}(c)\), and let \(\vec{\xi}\) denote a unit normal vector field to \(M_{r}^{n}\), so that \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Let \(\nabla\) and \(\widetilde{\nabla}\) denote the Levi-Civita connections of \(M_{r}^{n}\) and \(N_{s}^{n+1}(c)\), respectively. For any vector fields \(X,Y\) tangent to \(M_{r}^{n}\), the Gauss formula is given by \[\widetilde{\nabla}_{X}Y=\nabla_{X}Y+h(X,Y)\vec{\xi},\] where \(h\) is the scalar-valued second fundamental form. Denote by \(A\), \(\vec{H}\), and \(H\) the shape operator of \(M_{r}^{n}\) associated to \(\vec{\xi}\), the mean curvature vector field and the mean curvature; then \(\vec{H}=H\vec{\xi}\) and \(H=\frac{1}{n}\varepsilon\,\mathrm{tr}A\). For any vector fields \(X,Y,Z\) tangent to \(M_{r}^{n}\), the Codazzi and Gauss equations are given by \[\langle(\nabla_{X}A)Y,Z\rangle=\langle(\nabla_{Y}A)X,Z\rangle, \tag{2.1}\] and \[R(X,Y)Z=c(\langle Y,Z\rangle X-\langle X,Z\rangle Y)+\varepsilon\langle A(Y),Z\rangle A(X)-\varepsilon\langle A(X),Z\rangle A(Y),\] where \(R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z\). 
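As a quick worked instance of the Gauss equation (this illustration is our addition, under the simplifying assumption that \(A\) is diagonalized by an orthonormal frame, as in Form (I) below): if \(A(e_{i})=\lambda_{i}e_{i}\) with \(\langle e_{i},e_{j}\rangle=\varepsilon_{i}\delta_{ij}\), then for \(i\neq j\), \[\langle R(e_{i},e_{j})e_{j},e_{i}\rangle=c\varepsilon_{i}\varepsilon_{j}+\varepsilon\langle A(e_{j}),e_{j}\rangle\langle A(e_{i}),e_{i}\rangle=(c+\varepsilon\lambda_{i}\lambda_{j})\varepsilon_{i}\varepsilon_{j},\] since the cross term \(\varepsilon\langle A(e_{i}),e_{j}\rangle\langle A(e_{j}),e_{i}\rangle\) vanishes. Identities of this type underlie the curvature relations such as (3.6) and (3.19) derived in Section 3. 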
### The equivalent equations of \(\Delta\vec{H}=\lambda\vec{H}\) The equation \(\Delta\vec{H}=\lambda\vec{H}\) can be rewritten as \[-\Delta\vec{H}-\mbox{trace}\,\tilde{R}(\mbox{d}x,\vec{H})\mbox{d}x=(\lambda-nc)\vec{H},\] which is equivalent to \[\tau_{2}(x)=(\lambda-nc)\tau(x),\] where \(\tau(x)\) and \(\tau_{2}(x)\) are the tension and bitension fields of \(x\), respectively, and \(\tilde{R}\) is the curvature tensor of \(N_{s}^{n+1}(c)\). Thus, according to [7], the hypersurface \(M_{r}^{n}\) satisfies \(\Delta\vec{H}=\lambda\vec{H}\) if and only if \[A(\nabla H)=-\frac{n}{2}\varepsilon H(\nabla H), \tag{2.2}\] and \[\Delta H+\varepsilon H\mbox{tr}A^{2}=\lambda H, \tag{2.3}\] where the Laplace operator \(\Delta\) acting on a scalar-valued function \(f\) is given by \[\Delta f=-\sum_{i=1}^{n}\varepsilon_{i}(e_{i}e_{i}-\nabla_{e_{i}}e_{i})f,\] where \(\{e_{i}\}_{i=1}^{n}\) is a local orthonormal frame with \(\langle e_{i},e_{i}\rangle=\varepsilon_{i}=\pm 1\). In particular, for \(n=4\) the equation (2.2) reads \(A(\nabla H)=-2\varepsilon H\nabla H\); hence, wherever \(\nabla H\neq 0\), it is an eigenvector of \(A\) with real eigenvalue \(-2\varepsilon H\). This fact is used repeatedly in Section 3. ### The shape operator of hypersurface \(M_{r}^{n}\) The tangent space \(T_{p}M_{r}^{n}\) at \(p\in M_{r}^{n}\) can be expressed as a direct sum of subspaces \(V_{k}\), \(1\leq k\leq m\), that are mutually orthogonal and invariant under the shape operator \(A\). According to [17, Exercise 18, pp. 260-261], there exists an integer \(t\), with \(0\leq t\leq m\), such that \(A|_{V_{i}}\) (the restriction of \(A\) to \(V_{i}\)), \(1\leq i\leq t\), has the form \[A_{i}=\left(\begin{array}{ccccc}\lambda_{i}&&&&\\ 1&\lambda_{i}&&&\\ &1&\ddots&&\\ &&\ddots&\lambda_{i}&\\ &&&1&\lambda_{i}\end{array}\right),\] with respect to a basis \(\mathfrak{B}_{i}=\{u_{i_{1}},u_{i_{2}},\cdots,u_{i_{\alpha_{i}}}\}\) of \(V_{i}\), and \(A|_{V_{j}}\), \(t+1\leq j\leq m\), has the form \[\overline{A}_{j}=\left(\begin{array}{cccccccc}\gamma_{j}&\tau_{j}&&&&&&\\ -\tau_{j}&\gamma_{j}&&&&&&\\ 1&0&\gamma_{j}&\tau_{j}&&&&\\ 0&1&-\tau_{j}&\gamma_{j}&&&&\\ &&\ddots&\ddots&\ddots&\ddots&&\\ &&&&1&0&\gamma_{j}&\tau_{j}\\ &&&&0&1&-\tau_{j}&\gamma_{j}\end{array}\right),\quad\tau_{j}\neq 0,\] with respect to a basis \(\overline{\mathfrak{B}}_{j}=\{u_{\bar{j}_{1}},u_{\tilde{j}_{1}},u_{\bar{j}_{2}},u_{\tilde{j}_{2}},\cdots,u_{\bar{j}_{\beta_{j}}},u_{\tilde{j}_{\beta_{j}}}\}\) of \(V_{j}\). The inner products of the basis elements in \(\mathfrak{B}_{i}\), \(1\leq i\leq t\), and \(\overline{\mathfrak{B}}_{j}\), \(t+1\leq j\leq m\), are all zero except \[\langle u_{i_{a}},u_{i_{b}}\rangle=\varepsilon_{i}=\pm 1,\ \ a+b=\alpha_{i}+1,\ \ 1\leq i\leq t,\] and \[\langle u_{\bar{j}_{c}},u_{\bar{j}_{d}}\rangle=1=-\langle u_{\tilde{j}_{c}},u_{\tilde{j}_{d}}\rangle,\ \ c+d=\beta_{j}+1,\ \ t+1\leq j\leq m.\] Certainly, the sum of the dimensions of the \(V_{k}\), \(1\leq k\leq m\), equals \(n\), i.e. \[\sum_{i=1}^{t}\alpha_{i}+2\sum_{j=t+1}^{m}\beta_{j}=n.\] Collecting all vectors in \(\mathfrak{B}_{1},\cdots,\mathfrak{B}_{t},\overline{\mathfrak{B}}_{t+1},\cdots,\overline{\mathfrak{B}}_{m}\) in order, we get a basis \(\mathfrak{B}=\{u_{i_{1}},u_{i_{2}},\cdots,u_{i_{\alpha_{i}}},u_{\bar{j}_{1}},u_{\tilde{j}_{1}},u_{\bar{j}_{2}},u_{\tilde{j}_{2}},\cdots,u_{\bar{j}_{\beta_{j}}},u_{\tilde{j}_{\beta_{j}}}\,|\,1\leq i\leq t,\ t+1\leq j\leq m\}\) of \(T_{p}M_{r}^{n}\). 
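Before listing the possible forms for \(n=4\), it may help to spell out what these inner products mean in the smallest nontrivial case (this worked example is our addition). For a real block of size \(\alpha_{i}=2\), the condition \(a+b=\alpha_{i}+1=3\) leaves only \[\langle u_{i_{1}},u_{i_{2}}\rangle=\varepsilon_{i},\qquad\langle u_{i_{1}},u_{i_{1}}\rangle=\langle u_{i_{2}},u_{i_{2}}\rangle=0,\] so \(u_{i_{1}},u_{i_{2}}\) form a light-like pair. This is precisely the origin of the anti-diagonal blocks \(\left(\begin{array}{cc}0&\varepsilon_{i}\\ \varepsilon_{i}&0\end{array}\right)\) in the metric matrices \(G\) of Forms (II), (III), (IV) and (VI) below. 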
With respect to this basis \(\mathfrak{B}\), the shape operator \(A\) of the hypersurface \(M_{r}^{n}\) in \(N_{s}^{n+1}(c)\) can be expressed as an almost diagonal matrix \[A=\mbox{diag}\{A_{1},\cdots,A_{t},\overline{A}_{t+1},\cdots,\overline{A}_{m}\}.\] Observing the forms of \(A_{i}\) and \(\overline{A}_{j}\), with \(1\leq i\leq t\), \(t+1\leq j\leq m\), we find that \(A_{i}\) has only the single eigenvalue \(\lambda_{i}\), and \(\overline{A}_{j}\) has the eigenvalues \(\gamma_{j}+\tau_{j}\sqrt{-1}\) and \(\gamma_{j}-\tau_{j}\sqrt{-1}\). So the shape operator \(A\) has real eigenvalues \[\lambda_{1},\lambda_{2},\cdots,\lambda_{t},\] and imaginary eigenvalues \[\gamma_{t+1}+\tau_{t+1}\sqrt{-1},\gamma_{t+1}-\tau_{t+1}\sqrt{-1},\cdots,\gamma_{m}+\tau_{m}\sqrt{-1},\gamma_{m}-\tau_{m}\sqrt{-1}.\] According to the above description of the shape operator, we conclude that for a four dimensional hypersurface \(M_{r}^{4}\), if there exist real principal curvatures, then the shape operator \(A\) and the corresponding metric matrix \(G\) have the following possible forms:

**Form (I)**: \(t=m=4\), \(\alpha_{i}=1\), \(i=1,\cdots,4\); \(\mathfrak{B}=\{u_{1_{1}},u_{2_{1}},u_{3_{1}},u_{4_{1}}\}\); \[G=\mathrm{diag}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}),\qquad A=\mathrm{diag}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}).\]

**Form (II)**: \(t=m=3\), \(\alpha_{1}=2\), \(\alpha_{2}=\alpha_{3}=1\); \(\mathfrak{B}=\{u_{1_{1}},u_{1_{2}},u_{2_{1}},u_{3_{1}}\}\); \[G=\left(\begin{array}{cccc}0&\varepsilon_{1}&0&0\\ \varepsilon_{1}&0&0&0\\ 0&0&\varepsilon_{2}&0\\ 0&0&0&\varepsilon_{3}\end{array}\right),\qquad A=\left(\begin{array}{cccc}\lambda_{1}&0&0&0\\ 1&\lambda_{1}&0&0\\ 0&0&\lambda_{2}&0\\ 0&0&0&\lambda_{3}\end{array}\right).\]

**Form (III)**: \(t=m=2\), \(\alpha_{1}=\alpha_{2}=2\); \(\mathfrak{B}=\{u_{1_{1}},u_{1_{2}},u_{2_{1}},u_{2_{2}}\}\); \[G=\left(\begin{array}{cccc}0&\varepsilon_{1}&0&0\\ \varepsilon_{1}&0&0&0\\ 0&0&0&\varepsilon_{2}\\ 0&0&\varepsilon_{2}&0\end{array}\right),\qquad A=\left(\begin{array}{cccc}\lambda_{1}&0&0&0\\ 1&\lambda_{1}&0&0\\ 0&0&\lambda_{2}&0\\ 0&0&1&\lambda_{2}\end{array}\right).\]

**Form (IV)**: \(t=m=2\), \(\alpha_{1}=3\), \(\alpha_{2}=1\); \(\mathfrak{B}=\{u_{1_{1}},u_{1_{2}},u_{1_{3}},u_{2_{1}}\}\); \[G=\left(\begin{array}{cccc}0&0&\varepsilon_{1}&0\\ 0&\varepsilon_{1}&0&0\\ \varepsilon_{1}&0&0&0\\ 0&0&0&\varepsilon_{2}\end{array}\right),\qquad A=\left(\begin{array}{cccc}\lambda_{1}&0&0&0\\ 1&\lambda_{1}&0&0\\ 0&1&\lambda_{1}&0\\ 0&0&0&\lambda_{2}\end{array}\right).\]

**Form (V)**: \(m=3\), \(t=2\), \(\alpha_{1}=\alpha_{2}=1\), \(\beta_{3}=1\); \(\mathfrak{B}=\{u_{1_{1}},u_{2_{1}},u_{\bar{3}_{1}},u_{\tilde{3}_{1}}\}\); \[G=\mathrm{diag}(\varepsilon_{1},\varepsilon_{2},1,-1),\qquad A=\left(\begin{array}{cccc}\lambda_{1}&0&0&0\\ 0&\lambda_{2}&0&0\\ 0&0&\gamma_{3}&\tau_{3}\\ 0&0&-\tau_{3}&\gamma_{3}\end{array}\right).\]

**Form (VI)**: \(m=2\), \(t=1\), \(\alpha_{1}=2\), \(\beta_{2}=1\); \(\mathfrak{B}=\{u_{1_{1}},u_{1_{2}},u_{\bar{2}_{1}},u_{\tilde{2}_{1}}\}\); \[G=\left(\begin{array}{cccc}0&\varepsilon_{1}&0&0\\ \varepsilon_{1}&0&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right),\qquad A=\left(\begin{array}{cccc}\lambda_{1}&0&0&0\\ 1&\lambda_{1}&0&0\\ 0&0&\gamma_{2}&\tau_{2}\\ 0&0&-\tau_{2}&\gamma_{2}\end{array}\right).\]

And if the principal curvatures of \(M_{r}^{4}\) are all imaginary, then \(A\) and \(G\) have the possible forms:

**Form (VII)**: \(m=1\), \(t=0\), \(\beta_{1}=2\); \(\mathfrak{B}=\{u_{\bar{1}_{1}},u_{\tilde{1}_{1}},u_{\bar{1}_{2}},u_{\tilde{1}_{2}}\}\); \[G=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&-1&0&0\end{array}\right),\qquad A=\left(\begin{array}{cccc}\gamma_{1}&\tau_{1}&0&0\\ -\tau_{1}&\gamma_{1}&0&0\\ 1&0&\gamma_{1}&\tau_{1}\\ 0&1&-\tau_{1}&\gamma_{1}\end{array}\right).\]

**Form (VIII)**: \(m=2\), \(t=0\), \(\beta_{1}=\beta_{2}=1\); \(\mathfrak{B}=\{u_{\bar{1}_{1}},u_{\tilde{1}_{1}},u_{\bar{2}_{1}},u_{\tilde{2}_{1}}\}\); \[G=\mathrm{diag}(1,-1,1,-1),\qquad A=\left(\begin{array}{cccc}\gamma_{1}&\tau_{1}&0&0\\ -\tau_{1}&\gamma_{1}&0&0\\ 0&0&\gamma_{2}&\tau_{2}\\ 0&0&-\tau_{2}&\gamma_{2}\end{array}\right).\]
From these forms of \(A\) we can compute \(\mathrm{tr}A\) and \(\mathrm{tr}A^{2}\); combining this with \(\mathrm{tr}A=4\varepsilon H\), we find \[\begin{cases}\mathrm{tr}A=\sum_{i=1}^{t}\alpha_{i}\lambda_{i}+2\sum_{j=t+1}^{m}\beta_{j}\gamma_{j}=4\varepsilon H,\\ \mathrm{tr}A^{2}=\sum_{i=1}^{t}\alpha_{i}\lambda_{i}^{2}+2\sum_{j=t+1}^{m}\beta_{j}(\gamma_{j}^{2}-\tau_{j}^{2}).\end{cases} \tag{2.4}\] ## 3 The result that \(M_{r}^{4}\) has constant mean curvature **Theorem 3.1**: _Let \(N_{s}^{5}(c)\) be a \(5\)-dimensional pseudo-Riemannian space form with constant sectional curvature \(c\), and let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Then \(M_{r}^{4}\) has constant mean curvature._ To prove Theorem 3.1 we argue by contradiction: assuming that \(H\) is not a constant, we deduce a contradiction. When the principal curvatures of \(M_{r}^{4}\) are all imaginary, (2.2) implies that the real number \(-2\varepsilon H\) is an eigenvalue of \(A\) (since \(\nabla H\neq 0\) somewhere), a contradiction. When \(M_{r}^{4}\) has real principal curvatures, there are six possible forms of \(A\) (see Section 2). These different forms of \(A\) generate different expressions for the connection coefficients, and since the contradictions are derived from exactly these expressions, we cannot treat the six forms uniformly. ### The shape operator has the form (I) **Proposition 3.2**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M_{r}^{4}\) has the form (I). Then \(M_{r}^{4}\) has constant mean curvature._ When \(M_{r}^{4}\) has at most three distinct principal curvatures, the proof has been provided by L. Du et al. in [7]. So we suppose the principal curvatures \(\lambda_{1},\lambda_{2},\lambda_{3}\) and \(\lambda_{4}\) of \(M_{r}^{4}\) are mutually distinct. We first collect some equations and lemmas, under the assumption that the mean curvature \(H\) of \(M_{r}^{4}\) is not a constant and \(\nabla H\) is in the direction of \(u_{1}\). #### 3.1.1 Some equations and Lemmas Denote \(u_{i}=u_{i_{1}}\), with \(1\leq i\leq 4\), and let \(\nabla_{u_{i}}u_{j}=\Gamma_{ij}^{k}u_{k}\), \(i,j=1,2,3,4\). From the compatibility of the connection \(\nabla\) with the metric we get \[\Gamma_{ki}^{j}=-\varepsilon_{i}\varepsilon_{j}\Gamma_{kj}^{i}, \tag{3.1}\] with \(i,j,k=1,2,3,4\). Assume that \(H\) is not a constant and \(\nabla H\) is in the direction of \(u_{1}\); then (2.2) implies \(\lambda_{1}=-2\varepsilon H\), and we have \[u_{1}(H)\neq 0,\ u_{2}(H)=u_{3}(H)=u_{4}(H)=0. \tag{3.2}\] By (3.2), we obtain from \((\nabla_{u_{i}}u_{j}-\nabla_{u_{j}}u_{i})(H)=[u_{i},u_{j}](H)\) that \[\Gamma_{ij}^{1}=\Gamma_{ji}^{1},\quad i,j=2,3,4. \tag{3.3}\]
Combining the equations (3.1), (3.2) and (3.3), we deduce from the Codazzi equation that \[\Gamma^{1}_{1i}=0,\ \ \Gamma^{1}_{ij}=0,\ \text{with}\ i\neq j,\ i,j=2,3,4, \tag{3.4}\] and \[\begin{cases}u_{i}(\lambda_{j})=(\lambda_{i}-\lambda_{j})\Gamma^{j}_{ji},\ i\neq j,\\ (\lambda_{i}-\lambda_{j})\Gamma^{j}_{ki}=(\lambda_{k}-\lambda_{j})\Gamma^{j}_{ik},\ i,k\neq j.\end{cases} \tag{3.5}\] Using the Gauss equation for \(\langle R(u_{1},u_{i})u_{1},u_{i}\rangle\), \(i=2,3,4\), combining (3.1), (3.3) and (3.4), we have \[u_{1}(\Gamma^{i}_{i1})=-(\Gamma^{i}_{i1})^{2}+(2H\lambda_{i}-c)\varepsilon_{1},\ i=2,3,4. \tag{3.6}\] Considering the expressions (3.1) and (3.2), the equation (2.3) can be written as \[u_{1}u_{1}(H)+\sum_{i=2}^{4}\Gamma^{i}_{i1}u_{1}(H)-\varepsilon\varepsilon_{1}H\mathrm{tr}A^{2}+\varepsilon_{1}\lambda H=0. \tag{3.7}\] In the following, we study \(f_{k}:=\sum_{i=2}^{4}(\Gamma^{i}_{i1})^{k}\), with \(k=1,2,\cdots,5\), and obtain the following Lemmas 3.3 and 3.4. Applying these two lemmas, we then prove \(u_{i}(\Gamma^{j}_{j1})=u_{i}(\lambda_{j})=0\), with \(i,j=2,3,4\), i.e. Lemma 3.5, which will play an important role in the proof of Proposition 3.2. **Lemma 3.3**: _We have_ \[\begin{split}f_{2}=&-u_{1}(f_{1})+12H^{2}\varepsilon\varepsilon_{1}-3c\varepsilon_{1};\\ f_{3}=&\frac{1}{2}u_{1}^{(2)}(f_{1})-(4\varepsilon H^{2}+c)\varepsilon_{1}f_{1}-24\varepsilon\varepsilon_{1}Hu_{1}(H);\\ f_{4}=&-\frac{1}{6}u_{1}^{(3)}(f_{1})+\frac{4}{3}(4\varepsilon H^{2}+c)\varepsilon_{1}u_{1}(f_{1})+\frac{20}{3}\varepsilon\varepsilon_{1}Hu_{1}(H)f_{1}+8\varepsilon\varepsilon_{1}(u_{1}(H))^{2}\\ &+16\varepsilon\varepsilon_{1}Hu_{1}^{(2)}(H)-32H^{4}+2\varepsilon H^{2}\lambda-12\varepsilon H^{2}c+3c^{2};\\ f_{5}=&\frac{1}{24}u_{1}^{(4)}(f_{1})-\frac{5}{6}(4\varepsilon H^{2}+c)\varepsilon_{1}u_{1}^{(2)}(f_{1})-\frac{25}{3}\varepsilon\varepsilon_{1}Hu_{1}(H)u_{1}(f_{1})-[16H^{4}\\ &-\frac{13}{3}Hu_{1}^{(2)}(H)-\frac{1}{3}\varepsilon\varepsilon_{1}(u_{1}(H))^{2}+8\varepsilon H^{2}c+c^{2}]f_{1}-8\varepsilon\varepsilon_{1}Hu_{1}^{(3)}(H)\\ &-\frac{20}{3}\varepsilon\varepsilon_{1}u_{1}(H)u_{1}^{(2)}(H)+[16(8+\frac{2}{3}\varepsilon\varepsilon_{1})H^{3}+38\varepsilon Hc-\frac{5}{3}H\varepsilon\lambda]u_{1}(H),\end{split} \tag{3.8}\] _where \(u_{1}^{(k)}(f_{1})\) and \(u_{1}^{(k)}(H)\), \(k=2,3,4\), denote the \(k\)-th order derivatives of \(f_{1}\) and \(H\) along \(u_{1}\), respectively._ **Proof** Since \(\sum_{i=1}^{4}\lambda_{i}=\mathrm{tr}A=4\varepsilon H\) and \(\lambda_{1}=-2\varepsilon H\), we know \[\sum_{i=2}^{4}\lambda_{i}=6\varepsilon H.\] Taking the sum over \(j\) from \(2\) to \(4\) in the first equation of (3.5) with \(i=1\), we find \[\sum_{i=2}^{4}\lambda_{i}\Gamma_{i1}^{i}=-2\varepsilon Hf_{1}-6\varepsilon u_{1}(H). 
\tag{3.9}\] Multiply \(\lambda_{j}\) on both sides of the first equation in (3.5) with \(i=1\), and then we know \[\sum_{i=2}^{4}\lambda_{i}^{2}\Gamma_{i1}^{i}=-2\varepsilon H\sum_{i=2}^{4} \lambda_{i}\Gamma_{i1}^{i}-\frac{1}{2}u_{1}(\mathrm{tr}A^{2})+4Hu_{1}(H).\] Differentiate \(\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k}\) along \(u_{1}\), combining (3.5) and (3.6), we get \[(k+1)\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k+1}= -u_{1}(\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k})+2k \varepsilon_{1}H\sum_{i=2}^{4}\lambda_{i}^{2}(\Gamma_{i1}^{i})^{k-1}\] \[-kc\varepsilon_{1}\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k- 1}-2\varepsilon Hf_{k+1},\ \ k=1,2,\] which together with (3.9) and the above equation gives the expressions of \(\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k}\), with \(k=2,3\). Take the sum of \(i\) from \(2\) to \(4\) in (3.6), we can express \(f_{2}\) as \[f_{2}=-u_{1}(f_{1})+12H^{2}\varepsilon\varepsilon_{1}-3c\varepsilon_{1}.\] Multiplying \((\Gamma_{i1}^{i})^{k}\) on both sides of (3.6), and then taking sum for \(i\), we obtain \[f_{k+2}=-\frac{1}{k+1}u_{1}(f_{k+1})+2\varepsilon_{1}H\sum_{i=2}^{4}\lambda_{ i}(\Gamma_{i1}^{i})^{k}-c\varepsilon_{1}f_{k},\ k=1,2,3.\] As the expressions of \(\sum_{i=2}^{4}\lambda_{i}(\Gamma_{i1}^{i})^{k}\), with \(k=1,2,3\), we conclude from the above equation that (3.8) holds. \(\square\) **Lemma 3.4**: _For \(i=2,3,4\), we have_ \[u_{i}(f_{1})=0.\] **Proof** With the notions \(f_{k}\), \(k=1,2,\cdots,5\), we find the equations \[\begin{cases}f_{1}^{4}-6f_{1}^{2}f_{2}+3f_{2}^{2}+8f_{1}f_{3}-6f_{4}=0,\\ f_{1}^{5}-5f_{1}^{3}f_{2}+5f_{1}^{2}f_{3}+5f_{2}f_{3}-6f_{5}=0\end{cases} \tag{3.10}\] (cf. [11]) also hold. Substitute the expressions of \(f_{k}\), \(k=1,2,\cdots,5\) in Lemma 3.3 into (3.10), we have \[\begin{split} F_{1}:=u_{1}^{(3)}(f_{1})+4f_{1}u_{1}^{(2)}(f_{1}) +3(u_{1}(f_{1}))^{2}+[6f_{1}^{2}-104\varepsilon\varepsilon_{1}H^{2}+10 \varepsilon_{1}c]u_{1}(f_{1})\\ +f_{1}^{4}-104\varepsilon\varepsilon_{1}H^{2}f_{1}^{2}+10 \varepsilon_{1}cf_{1}^{2}-232\varepsilon\varepsilon_{1}Hu_{1}(H)f_{1}-48 \varepsilon\varepsilon_{1}(u_{1}(H))^{2}\\ -96\varepsilon\varepsilon_{1}Hu_{1}^{(2)}(H)+624H^{4}-144 \varepsilon H^{2}c-12\varepsilon H^{2}\lambda+9c^{2}=0,\end{split}\] and \[\begin{split} F_{2}:=&-u_{1}^{(4)}(f_{1})+(-10u_{1} (f_{1})+10f_{1}^{2}+200\varepsilon\varepsilon_{1}H^{2}-10c\varepsilon_{1})u_{ 1}^{(2)}(f_{1})+4f_{1}^{5}\\ &+[20f_{1}^{3}+20(4\varepsilon H^{2}+c)\varepsilon_{1}f_{1}+680 \varepsilon\varepsilon_{1}Hu_{1}(H)]u_{1}(f_{1})-320\varepsilon\varepsilon_{ 1}H^{2}f_{1}^{3}\\ &+40\varepsilon_{1}f_{1}^{3}c-480\varepsilon\varepsilon_{1}Hu_{ 1}(H)f_{1}^{2}+104Hu_{1}^{(2)}(H)f_{1}+192\varepsilon\varepsilon_{1}Hu_{1}^{ (3)}(H)\\ &-192\varepsilon H^{2}cf_{1}-1344H^{4}f_{1}+8\varepsilon\varepsilon _{1}(u_{1}(H))^{2}f_{1}+160\varepsilon\varepsilon_{1}u_{1}(H)u_{1}^{(2)}(H)\\ &+36c^{2}f_{1}-128(69+2\varepsilon\varepsilon_{1})H^{3}u_{1}(H) +(528c+40\lambda)\varepsilon Hu_{1}(H)=0.\end{split}\] Subsequently, we will eliminate \(u_{1}^{(k)}(f_{1})\), \(k=1,2,3,4\) from the equations \(F_{1}=0\) and \(F_{2}=0\). Let \(F_{3}=\frac{1}{4}(u_{1}(F_{1})+F_{2})-f_{1}F_{1}\), then \(F_{3}=0\), i.e. 
\[\begin{split}& 12\varepsilon\varepsilon_{1}H^{2}u_{1}^{(2)}(f_{1}) +[36\varepsilon\varepsilon_{1}H^{2}f_{1}+30\varepsilon\varepsilon_{1}Hu_{1} (H)]u_{1}(f_{1})+12\varepsilon\varepsilon_{1}H^{2}f_{1}^{3}\\ &+30\varepsilon\varepsilon_{1}Hu_{1}(H)f_{1}^{2}+(19\varepsilon \varepsilon_{1}+13)Hu_{1}^{(2)}(H)f_{1}-4\varepsilon\varepsilon_{1}(u_{1}(H) )^{2}f_{1}\\ &+6(8c+\lambda)\varepsilon H^{2}f_{1}+12\varepsilon\varepsilon_{1 }Hu_{1}^{(3)}(H)-4\varepsilon\varepsilon_{1}u_{1}(H)u_{1}^{(2)}(H)\\ &-480H^{4}f_{1}-(792+32\varepsilon\varepsilon_{1})H^{3}u_{1}(H) +(30c+2\lambda)\varepsilon Hu_{1}(H)=0.\end{split} \tag{3.11}\] Take \(F_{4}=2H(u_{1}(F_{3})-12\varepsilon\varepsilon_{1}H^{2}F_{1}+f_{1}F_{3})-9u_{ 1}(H)F_{3}\), then \(F_{4}=0\), i.e. \[a_{1}u_{1}(f_{1})+a_{1}f_{1}^{2}+a_{2}f_{1}+a_{3}=0, \tag{3.12}\] where \[a_{1}= (294\varepsilon\varepsilon_{1}+78)Hu_{1}^{(2)}(H)-654\varepsilon \varepsilon_{1}(u_{1}(H))^{2}+4608H^{4}+36(\lambda-12c)\varepsilon H^{2},\] \[a_{2}= (186\varepsilon\varepsilon_{1}+78)H^{2}u_{1}^{(3)}(H)-(471 \varepsilon\varepsilon_{1}+273)Hu_{1}(H)u_{1}^{(2)}(H)\] \[+108\varepsilon\varepsilon_{1}(u_{1}(H))^{3}+[(13392-192 \varepsilon\varepsilon_{1})H^{2}-6(90c+13\lambda)\varepsilon]H^{2}u_{1}(H),\] \[a_{3}= 72\varepsilon\varepsilon_{1}H^{2}u_{1}^{(4)}(H)-276\varepsilon \varepsilon_{1}Hu_{1}(H)u_{1}^{(3)}(H)+108\varepsilon\varepsilon_{1}(u_{1}(H) )^{2}u_{1}^{(2)}(H)\] \[-24\varepsilon\varepsilon_{1}H(u_{1}^{(2)}(H))^{2}+[(2160-192 \varepsilon\varepsilon_{1})H^{2}+(180c+12\lambda)\varepsilon]H^{2}u_{1}^{(2) }(H)\] \[+[72(147+4\varepsilon\varepsilon_{1})H^{2}-(630c+42\lambda) \varepsilon]H(u_{1}(H))^{2}-44928\varepsilon\varepsilon_{1}H^{7}\] \[+(10368c+864\lambda)\varepsilon_{1}H^{5}-648\varepsilon \varepsilon_{1}H^{3}c^{2}.\] By acting on (3.12) with \(u_{1}\), and then combining (3.11), we have \[(b_{1}f_{1}+b_{2})u_{1}(f_{1})+b_{1}f_{1}^{3}+b_{3}f_{1}^{2}+b_{4}f_{1}+b_{5}=0,\] where \[b_{1}= -768H^{4}\varepsilon\varepsilon_{1}+6(12c-6\lambda)H^{2} \varepsilon_{1}-(13\varepsilon\varepsilon_{1}+49)Hu_{1}^{(2)}(H)+109(u_{1}(H) )^{2},\] \[b_{2}= (8304\varepsilon\varepsilon_{1}-64)H^{4}u_{1}(H)+(52\varepsilon \varepsilon_{1}+160)u_{1}^{(3)}(H)H^{2}+363(u_{1}(H))^{3}\] \[-(104\varepsilon\varepsilon_{1}+642)Hu_{1}(H)u_{1}^{(2)}(H)-(25 2c+20\lambda)H^{2}u_{1}(H)\varepsilon_{1},\] \[b_{3}= 3840H^{4}u_{1}(H)\varepsilon\varepsilon_{1}+(26\varepsilon \varepsilon_{1}+98)u_{1}^{(3)}(H)H^{2}+6\varepsilon_{1}(\lambda-12c)H^{2}u_{1 }(H)\] \[+327(u_{1}(H))^{3}-(13\varepsilon\varepsilon_{1}+485)Hu_{1}(H)u_ {1}^{(2)}(H),\] \[b_{4}= 12(13\varepsilon\varepsilon_{1}+31)u_{1}^{(4)}(H)H^{3}-2(715 \varepsilon\varepsilon_{1}+1021)H^{2}(u_{1}^{(2)}(H))^{2}+368640H^{8}\] \[+96(372\varepsilon\varepsilon_{1}-43)H^{5}u_{1}^{(2)}(H)+(923 \varepsilon\varepsilon_{1}+1973)H(u_{1}(H))^{2}u_{1}^{(2)}(H)\] \[-18(13\varepsilon\varepsilon_{1}+11)u_{1}^{(3)}(H)H^{2}u_{1}(H)+6 (464c+61\lambda)\varepsilon_{1}H^{2}(u_{1}(H))^{2}\] \[+36(96c^{2}-\lambda^{2}+4c\lambda)H^{4}+96(603\varepsilon \varepsilon_{1}-16)H^{4}(u_{1}(H))^{2}-436(u_{1}(H))^{4}\] \[-576\varepsilon(124c+3\lambda)H^{6}+12(26c\varepsilon-172c \varepsilon_{1}-13\lambda\varepsilon-47\lambda\varepsilon_{1})H^{3}u_{1}^{(2) }(H),\] \[b_{5}= -12(144c^{2}+\lambda^{2}+3c\lambda)H^{3}u_{1}(H)-12(13\varepsilon \varepsilon_{1}+103)u_{1}^{(3)}(H)H^{2}u_{1}^{(2)}(H)\] \[+144u_{1}^{(5)}(H)H^{3}-2(195c\varepsilon+13\lambda\varepsilon+1 779c\varepsilon_{1}+97\lambda\varepsilon_{1})H^{2}u_{1}^{(2)}(H)\varepsilon u _{1}(H)\] 
\[-8(2853\varepsilon\varepsilon_{1}+220)H^{3}(u_{1}(H))^{3}+8(12739 \varepsilon\varepsilon_{1}+1435)H^{4}u_{1}^{(2)}(H)u_{1}(H)\] \[-96(51\varepsilon\varepsilon_{1}+4)u_{1}^{(3)}(H)H^{5}+972u_{1}^{ (3)}(H)H(u_{1}(H))^{2}-436(u_{1}(H))^{3}u_{1}^{(2)}(H)\] \[+16(1476\varepsilon c-144c\varepsilon_{1}+741\varepsilon\lambda+1 2\lambda\varepsilon_{1})H^{5}u_{1}(H)-264u_{1}^{(4)}(H)H^{2}u_{1}(H)\] \[+\varepsilon_{1}(2010c+134\lambda)H(u_{1}(H))^{3}+(24576\varepsilon \varepsilon_{1}-20736)H^{7}u_{1}(H)\] \[+(52\varepsilon\varepsilon_{1}+580)Hu_{1}(H)(u_{1}^{(2)}(H))^{2}+24 \varepsilon_{1}(51c-2\lambda)u_{1}^{(3)}(H)H^{3}.\] which together with (3.12) deduce that \[c_{1}f_{1}+c_{2}=0,\] where \(c_{1}\) and \(c_{2}\) are polynomials about \(H\) and \(u_{1}^{(k)}(H)\), \(k=1,2,\cdots,5\). From the expression \([u_{i},u_{1}]=\nabla_{u_{i}}u_{1}-\nabla_{u_{1}}u_{i}\), as well as the relations (3.2) and \(\Gamma_{i1}^{1}=\Gamma_{1i}^{1}=0\) (see eq. (3.1) and (3.4)), we have \[u_{i}u_{1}^{(k)}(H)=0,\ k=0,1,2,\cdots,5. \tag{3.13}\] Differentiate \(c_{1}f_{1}+c_{2}=0\) along \(u_{i}\), with \(i=2,3,4\), using (3.13), we have \[c_{1}u_{i}(f_{1})=0.\] Suppose \(u_{i}(f_{1})\neq 0\), for some \(i\in\{2,3,4\}\), we can conclude that \(c_{1}=c_{2}=0\). And then, we can eliminate \(u_{1}^{(k)}(H)\), with \(k=1,2,\cdots,5\) and obtain a polynomial equation about \(H\), which implies \(H\) is a constant, a contradiction. So, \(u_{i}(f_{1})=0\), \(i=2,3,4\). \(\Box\) **Lemma 3.5** _We have \(u_{i}(\Gamma_{j1}^{j})=u_{i}(\lambda_{j})=0\), with \(i,j=2,3,4\)._ **Proof** Since \(u_{i}(f_{1})=0\), with \(i=2,3,4\) (see Lemma 3.4), it follows \[u_{i}u_{1}^{(k)}(f_{1})=0,\ i=2,3,4,\ k=0,1,2,3. \tag{3.14}\] Differentiate the expressions of \(f_{k}\), \(k=2,3\) in Lemma 3.3, combining (3.13) and (3.14), we find \[u_{i}(f_{k})=0,\ i=2,3,4,\ k=1,2,3.\] Note that \(f_{k}=\sum_{i=2}^{4}(\Gamma_{i1}^{i})^{k}\), the above equation tells us that \[\begin{cases}u_{i}(\Gamma_{21}^{2})+u_{i}(\Gamma_{31}^{3})+u_{i}(\Gamma_{41}^ {4})=0,\\ \Gamma_{21}^{2}u_{i}(\Gamma_{21}^{2})+\Gamma_{31}^{3}u_{i}(\Gamma_{31}^{3})+ \Gamma_{41}^{4}u_{i}(\Gamma_{41}^{4})=0,\\ (\Gamma_{21}^{2})^{2}u_{i}(\Gamma_{21}^{2})+(\Gamma_{31}^{3})^{2}u_{i}(\Gamma _{31}^{3})+(\Gamma_{41}^{4})^{2}u_{i}(\Gamma_{41}^{4})=0,\end{cases}\] with \(i=2,3,4\). Observe (3.6), we know \(\Gamma_{21}^{2}\), \(\Gamma_{31}^{3}\) and \(\Gamma_{41}^{4}\) are distinct. So, the coefficient determinant of the above system \[\left|\begin{array}{ccc}1&1&1\\ \Gamma_{21}^{2}&\Gamma_{31}^{3}&\Gamma_{41}^{4}\\ (\Gamma_{21}^{2})^{2}&(\Gamma_{31}^{3})^{2}&(\Gamma_{41}^{4})^{2}\end{array} \right|=(\Gamma_{41}^{4}-\Gamma_{31}^{3})(\Gamma_{31}^{3}-\Gamma_{21}^{2})( \Gamma_{41}^{4}-\Gamma_{21}^{2})\neq 0.\] So, the above system has only zero solution, i.e. \(u_{i}(\Gamma^{j}_{j1})=0\), \(i,j=2,3,4\). And then, we have \[u_{i}u_{1}(\Gamma^{j}_{j1})=0,\ i,j=2,3,4.\] Differentiate (3.6) along \(u_{j}\), combining \(u_{j}(\Gamma^{i}_{i1})=0\) and \(u_{j}u_{1}(\Gamma^{i}_{i1})=0\), we get \(u_{j}(\lambda_{i})=0\), \(i,j=2,3,4\). \(\Box\) #### 3.1.2 The proof of Proposition 3.2 Assume that \(H\) is not a constant, then there exists a neighbourhood \(U_{p}\) of \(p\) such that \(H\neq 0\) and \(\nabla H\neq 0\). The equation (2.2) implies \(\nabla H\) is an eigenvector of \(A\), with corresponding eigenvalue \(-2\varepsilon H\). Without loss of generality, we suppose \(\nabla H\) is in the direction of \(u_{1}\) and \(\lambda_{1}=-2\varepsilon H\), then the equations and Lemmas in subsection 3.1.1 hold. 
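_Remark._ The identities (3.10) invoked in the proof of Lemma 3.4 deserve a word of justification (this derivation is our addition; the conclusion is exactly the one cited from [11]). The \(f_{k}\) are the power sums of the three quantities \(\Gamma^{2}_{21},\Gamma^{3}_{31},\Gamma^{4}_{41}\), so their fourth and fifth elementary symmetric functions \(e_{4},e_{5}\) vanish identically. Newton's identities express \[24e_{4}=f_{1}^{4}-6f_{1}^{2}f_{2}+3f_{2}^{2}+8f_{1}f_{3}-6f_{4},\qquad 30(f_{1}e_{4}-e_{5})=f_{1}^{5}-5f_{1}^{3}f_{2}+5f_{1}^{2}f_{3}+5f_{2}f_{3}-6f_{5},\] and setting \(e_{4}=e_{5}=0\) yields precisely the two equations of (3.10). 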
We will use these equations and Lemma 3.5 to deduce contradictions. The second equation in (3.5) yields \[\begin{split}\varepsilon_{4}(\lambda_{2}-\lambda_{4})\Gamma^{4}_{32}=&\varepsilon_{4}(\lambda_{3}-\lambda_{4})\Gamma^{4}_{23}=\varepsilon_{3}(\lambda_{4}-\lambda_{3})\Gamma^{3}_{24}\\ =&\varepsilon_{3}(\lambda_{2}-\lambda_{3})\Gamma^{3}_{42}=\varepsilon_{2}(\lambda_{3}-\lambda_{2})\Gamma^{2}_{43}=\varepsilon_{2}(\lambda_{4}-\lambda_{2})\Gamma^{2}_{34}.\end{split} \tag{3.15}\] Together with (3.1), (3.15) gives \[(\lambda_{2}-\lambda_{3})(\lambda_{3}-\lambda_{4})(\lambda_{4}-\lambda_{2})(\Gamma^{4}_{23}\Gamma^{4}_{32}+\Gamma^{3}_{24}\Gamma^{3}_{42}+\Gamma^{2}_{34}\Gamma^{2}_{43})=0,\] i.e. \[\Gamma^{4}_{23}\Gamma^{4}_{32}+\Gamma^{3}_{24}\Gamma^{3}_{42}+\Gamma^{2}_{34}\Gamma^{2}_{43}=0. \tag{3.16}\] It follows from the equation (3.5) and Lemma 3.5 that \[\Gamma^{i}_{ij}=0,\ i,j=2,3,4. \tag{3.17}\] Applying the Gauss equation for \(\langle R(u_{i},u_{j})u_{k},u_{1}\rangle\) and \(\langle R(u_{i},u_{j})u_{i},u_{j}\rangle\), with \(i,j,k=2,3,4\) distinct, and combining (3.1), (3.4) and (3.17), we have \[\begin{split}\varepsilon_{4}(\Gamma^{2}_{21}-\Gamma^{4}_{41})\Gamma^{4}_{32}=&\varepsilon_{4}(\Gamma^{3}_{31}-\Gamma^{4}_{41})\Gamma^{4}_{23}=\varepsilon_{3}(\Gamma^{4}_{41}-\Gamma^{3}_{31})\Gamma^{3}_{24}\\ =&\varepsilon_{3}(\Gamma^{2}_{21}-\Gamma^{3}_{31})\Gamma^{3}_{42}=\varepsilon_{2}(\Gamma^{3}_{31}-\Gamma^{2}_{21})\Gamma^{2}_{43}=\varepsilon_{2}(\Gamma^{4}_{41}-\Gamma^{2}_{21})\Gamma^{2}_{34},\end{split} \tag{3.18}\] and \[\begin{cases}\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}\Gamma^{2}_{21}\Gamma^{3}_{31}-2\varepsilon_{4}\Gamma^{4}_{23}\Gamma^{4}_{32}=-c\varepsilon_{2}\varepsilon_{3}-\lambda_{2}\lambda_{3}\varepsilon\varepsilon_{2}\varepsilon_{3},\\ \varepsilon_{1}\varepsilon_{2}\varepsilon_{4}\Gamma^{2}_{21}\Gamma^{4}_{41}-2\varepsilon_{3}\Gamma^{3}_{24}\Gamma^{3}_{42}=-c\varepsilon_{2}\varepsilon_{4}-\lambda_{2}\lambda_{4}\varepsilon\varepsilon_{2}\varepsilon_{4},\\ \varepsilon_{1}\varepsilon_{3}\varepsilon_{4}\Gamma^{3}_{31}\Gamma^{4}_{41}-2\varepsilon_{2}\Gamma^{2}_{34}\Gamma^{2}_{43}=-c\varepsilon_{3}\varepsilon_{4}-\lambda_{3}\lambda_{4}\varepsilon\varepsilon_{3}\varepsilon_{4},\end{cases} \tag{3.19}\] which together with (3.16) implies that \[\varepsilon_{1}(\Gamma_{21}^{2}\Gamma_{31}^{3}+\Gamma_{21}^{2}\Gamma_{41}^{4}+\Gamma_{31}^{3}\Gamma_{41}^{4})+\varepsilon(\lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{4}+\lambda_{2}\lambda_{4})+3c=0. \tag{3.20}\] In the following, we treat the cases \(\Gamma_{23}^{4}\neq 0\) at some point and \(\Gamma_{23}^{4}=0\) at every point of \(U_{p}\), respectively. _Case 1: \(\Gamma_{23}^{4}\neq 0\) at some point in \(U_{p}\)._ Suppose \(\Gamma_{23}^{4}\neq 0\) at \(q\in U_{p}\); then there exists a neighbourhood \(U_{q}\subset U_{p}\) such that \(\Gamma_{23}^{4}\neq 0\) on \(U_{q}\). We work on \(U_{q}\) in the following discussion. The equations (3.15) and (3.18) give \[\frac{\Gamma_{41}^{4}-\Gamma_{31}^{3}}{\lambda_{4}-\lambda_{3}}=\frac{\Gamma_{41}^{4}-\Gamma_{21}^{2}}{\lambda_{4}-\lambda_{2}}=\frac{\Gamma_{31}^{3}-\Gamma_{21}^{2}}{\lambda_{3}-\lambda_{2}}.\] As \(u_{i}(\Gamma_{j1}^{j})=u_{i}(\lambda_{j})=0\), \(i,j=2,3,4\), we conclude from the above equation that there exist two smooth functions \(\mu\) and \(\nu\), with \(u_{i}(\mu)=u_{i}(\nu)=0\), \(i=2,3,4\), such that \[\Gamma_{i1}^{i}=\mu\lambda_{i}+\nu,\ i=2,3,4. \tag{3.21}\] Taking the sum over \(i\) in (3.21), we get \[f_{1}=6\varepsilon H\mu+3\nu. \tag{3.22}\]
Differentiating (3.21) along \(u_{1}\) and using (3.5) and (3.6), we get, for \(i=2,3,4\), \[(u_{1}(\mu)-2\varepsilon H\mu^{2}+\mu\nu-2H\varepsilon_{1})\lambda_{i}+u_{1}(\nu)+\nu^{2}-2\varepsilon H\mu\nu+c\varepsilon_{1}=0,\] which yields \[\begin{cases}u_{1}(\mu)=2\varepsilon H\mu^{2}-\mu\nu+2H\varepsilon_{1},\\ u_{1}(\nu)=-\nu^{2}+2\varepsilon H\mu\nu-c\varepsilon_{1}.\end{cases} \tag{3.23}\] Substituting (3.21) into (3.20), we obtain \[(\varepsilon_{1}\mu^{2}+\varepsilon)(\lambda_{2}\lambda_{3}+\lambda_{2}\lambda_{4}+\lambda_{3}\lambda_{4})+2\varepsilon_{1}\mu\nu(\lambda_{2}+\lambda_{3}+\lambda_{4})+3\varepsilon_{1}\nu^{2}+3c=0,\] which can be rewritten as \[(\varepsilon_{1}\mu^{2}+\varepsilon)\text{tr}A^{2}=(\varepsilon_{1}\mu^{2}+\varepsilon)40H^{2}+24\varepsilon\varepsilon_{1}H\mu\nu+6\varepsilon_{1}\nu^{2}+6c. \tag{3.24}\] Putting (3.21) and (3.22) into (3.9) and combining (3.24), we have \[u_{1}(H)=\frac{-26}{3}\varepsilon H^{2}\mu-\frac{4\varepsilon_{1}H\mu^{2}\nu+\varepsilon\varepsilon_{1}\mu\nu^{2}+c\varepsilon\mu}{\varepsilon_{1}\mu^{2}+\varepsilon}-2H\nu. \tag{3.25}\] Acting on (3.25) by \(u_{1}\) and applying (3.23), we get the expression of \(u_{1}u_{1}(H)\). Then, substituting the expressions of \(u_{1}(H)\) and \(u_{1}u_{1}(H)\) into (3.7) and combining (3.24), it gives \[\begin{split}&(36\mu^{3}\varepsilon+18\mu\varepsilon_{1})\nu^{3}+(300H\mu^{4}+156\mu^{2}H\varepsilon\varepsilon_{1}-72H)\nu^{2}+(816\mu^{5}H^{2}\varepsilon\\ &+624\mu^{3}H^{2}\varepsilon_{1}+36\mu^{3}c\varepsilon\varepsilon_{1}-192\varepsilon H^{2}\mu+18c\mu)\nu+940\mu^{4}H^{3}\varepsilon\varepsilon_{1}\\ &+728\mu^{6}H^{3}+174\mu^{4}Hc\varepsilon_{1}+9\mu^{4}H\lambda\varepsilon_{1}-304\mu^{2}H^{3}+120\mu^{2}Hc\varepsilon\\ &+18\mu^{2}H\lambda\varepsilon-516H^{3}\varepsilon\varepsilon_{1}-54Hc\varepsilon_{1}+9\varepsilon_{1}\lambda H=0.\end{split} \tag{3.26}\] By differentiating (3.26) along \(u_{1}\), using (3.23) and (3.25), we derive \[d_{1}\nu^{4}+d_{2}\nu^{3}+d_{3}\nu^{2}+d_{4}\nu+d_{5}=0, \tag{3.27}\] where \[\begin{split}d_{1}=&-516\mu^{5}\varepsilon\varepsilon_{1}-444\mu^{3},\\ d_{2}=&-4800\mu^{6}H\varepsilon_{1}-4416\mu^{4}H\varepsilon+420\mu^{2}H\varepsilon_{1}+324H\varepsilon,\\ d_{3}=&-15872\mu^{7}H^{2}\varepsilon\varepsilon_{1}-17668\mu^{5}H^{2}-726\mu^{5}c\varepsilon-9\mu^{5}\lambda\varepsilon+1864\mu^{3}H^{2}\varepsilon\varepsilon_{1}\\ &-618\mu^{3}c\varepsilon_{1}-18\mu^{3}\lambda\varepsilon_{1}+3660\mu H^{2}+36c\varepsilon\mu-9\mu\lambda\varepsilon,\\ d_{4}=&-21824\mu^{8}H^{3}\varepsilon_{1}-31432\mu^{6}H^{3}\varepsilon-3684\mu^{6}Hc-90\mu^{6}H\lambda+5320\mu^{4}H^{3}\varepsilon_{1}\\ &-3588\mu^{4}Hc\varepsilon\varepsilon_{1}-198\mu^{4}H\lambda\varepsilon\varepsilon_{1}+17640\mu^{2}H^{3}\varepsilon+384\mu^{2}Hc-126\mu^{2}H\lambda\\ &+2712H^{3}\varepsilon_{1}+288Hc\varepsilon\varepsilon_{1}-18H\lambda\varepsilon\varepsilon_{1},\\ d_{5}=&6024\mu^{5}H^{4}\varepsilon\varepsilon_{1}-10192\mu^{9}H^{4}\varepsilon\varepsilon_{1}-18376\mu^{7}H^{4}-3116\mu^{7}H^{2}c\varepsilon-6\mu^{7}H^{2}\lambda\varepsilon\\ &-3544\mu^{5}H^{2}c\varepsilon_{1}-18\mu^{5}H^{2}\lambda\varepsilon_{1}-210\mu^{5}c^{2}\varepsilon\varepsilon_{1}-9\mu^{5}c\lambda\varepsilon\varepsilon_{1}+26408\mu^{3}H^{4}\\ &+2260\mu^{3}H^{2}c\varepsilon-18\mu^{3}H^{2}\lambda\varepsilon+12200\mu H^{4}\varepsilon\varepsilon_{1}-174\mu^{3}c^{2}-18\mu^{3}c\lambda\\ &+2688\mu H^{2}c\varepsilon_{1}-6\mu H^{2}\lambda\varepsilon_{1}+36\mu c^{2}\varepsilon\varepsilon_{1}-9\mu c\lambda\varepsilon\varepsilon_{1}.\end{split}\] 
Eliminate \(\nu\) from (3.26) and (3.27), we can get a polynomial equation about \(H\) and \(\mu\) \[\sum_{k=29}^{33}\sum_{l=k-28}^{7}r_{k,l}H^{2l}\mu^{2k}+\sum_{k=4}^{28}\sum_{l=0 }^{7}r_{k,l}H^{2l}\mu^{2k}+\sum_{k=0}^{3}\sum_{l=4-k}^{7}r_{k,l}H^{2l}\mu^{2k}=0, \tag{3.28}\] where \(r_{0,l},r_{1,l},\cdots,r_{33,l}\) are all real constants. Applying \(u_{1}\) to both sides of (3.27), we obtain a polynomial equation about \(H,\mu\) and \(\nu\). Then, eliminating \(\nu\) from this equation and (3.26), we derive another polynomial equation about \(H\) and \(\mu\) \[(\varepsilon\varepsilon_{1}+\mu^{2})^{10}(\varepsilon\varepsilon_{1}+2\mu^{2 })^{6}(g_{1}(H,\mu))^{2}g_{2}(H,\mu)=0,\] i.e. \[(\varepsilon\varepsilon_{1}+\mu^{2})(\varepsilon\varepsilon_{1}+2\mu^{2})g_{1}(H, \mu)g_{2}(H,\mu)=0, \tag{3.29}\] where \[g_{1}(H,\mu)= 6160\varepsilon_{1}H^{2}\mu^{14}+16(17136c-62986\varepsilon H^{2 }+1935\lambda)\mu^{12}+69984\varepsilon H^{2}\] \[+8\varepsilon_{1}(37935\varepsilon c-512645H^{2}+8424\varepsilon \lambda)\mu^{10}+48(987\lambda-28138\varepsilon H^{2}\] \[+1884c)\mu^{8}+\varepsilon_{1}(8334\varepsilon c+2136896H^{2}+10 044\varepsilon\lambda)\mu^{6}+(6570c\] \[+312180\varepsilon H^{2}-1467\lambda)\mu^{4}+\varepsilon_{1}(21 06\varepsilon c-14292H^{2}-567\varepsilon\lambda)\mu^{2},\] and \[g_{2}(H,\mu)= \sum_{l=4}^{6}s_{17,l}H^{2l}\mu^{34}+\sum_{l=3}^{6}s_{16,l}H^{2l} \mu^{32}+\sum_{l=2}^{6}s_{15,l}H^{2l}\mu^{30}+\sum_{l=1}^{6}s_{14,l}H^{2l}\mu^ {28}\] \[+\sum_{k=2}^{13}\sum_{l=0}^{6}s_{k,l}H^{2l}\mu^{2k}+\sum_{l=1}^{6 }s_{1,l}H^{2l}\mu^{2}+\sum_{l=2}^{6}s_{0,l}H^{2l},\] with \(s_{0,l},s_{1,l},\cdots,s_{17,l}\) are all real constants. We can eliminate \(\mu\) from (3.28) and (3.29), and give a polynomial equation of degree 696 for \(H\), which implies \(H\) is a constant in \(U_{q}\). Thus, \(\nabla H=0\) at \(q\), a contradiction. _Case 2: \(\Gamma^{4}_{23}\equiv 0\) on \(U_{p}\)._ In this case, (3.19) can be simplified into \[\begin{cases}\varepsilon_{1}\Gamma^{2}_{21}\Gamma^{3}_{31}=-c-\varepsilon \lambda_{2}\lambda_{3},\\ \varepsilon_{1}\Gamma^{2}_{21}\Gamma^{4}_{41}=-c-\varepsilon\lambda_{2} \lambda_{4},\\ \varepsilon_{1}\Gamma^{3}_{31}\Gamma^{4}_{41}=-c-\varepsilon\lambda_{3} \lambda_{4}.\end{cases} \tag{3.30}\] As \(\lambda_{2},\lambda_{3}\) and \(\lambda_{4}\) are distinct, we have \(c+\varepsilon\lambda_{2}\lambda_{3}\neq 0\), or \(c+\varepsilon\lambda_{2}\lambda_{4}\neq 0\), or \(c+\varepsilon\lambda_{3}\lambda_{4}\neq 0\). So, the above equations implies there are at least two non-zero terms in \(\{\Gamma^{2}_{21},\Gamma^{3}_{31},\Gamma^{4}_{41}\}\). Without loss of generality, we suppose \(\Gamma^{2}_{21},\Gamma^{3}_{31}\neq 0\) at any point in \(U_{p}\), and discuss separately the subcases \(\Gamma^{4}_{41}\equiv 0\) on \(U_{p}\) and \(\Gamma^{4}_{41}\neq 0\) at some point. _Subcase (1): \(\Gamma^{4}_{41}\equiv 0\) on \(U_{p}\)._ It follows from (3.30) that \(\lambda_{4}=0\). If \(c\neq 0\), then \(c+\lambda_{2}\lambda_{4}\neq 0\), which implies \(\Gamma^{4}_{41}\neq 0\), a contradiction. In the following, we suppose \(c=0\). Act on both sides of \(\lambda_{2}+\lambda_{3}=6\varepsilon H\) by \(u_{1}\), we get \[6\varepsilon u_{1}(H)=(-2\varepsilon H-\lambda_{2})\Gamma^{2}_{21}+(-2 \varepsilon H-\lambda_{3})\Gamma^{3}_{31}. 
\tag{3.31}\] Differentiate (3.31) along \(u_{1}\), using (3.5) and (3.6), we obtain the expression of \(u_{1}u_{1}(H)\), and put it into (3.7), combining (3.30) and (3.31), we have \[\begin{split} 2(-2\varepsilon H-\lambda_{2})(\Gamma_{21}^{2})^{2}& +2(-2\varepsilon H-\lambda_{3})(\Gamma_{31}^{3})^{2}-34\varepsilon_{1}H \lambda_{2}\lambda_{3}\\ &+468\varepsilon_{1}H^{3}-9\varepsilon\varepsilon_{1}\lambda H= 0.\end{split} \tag{3.32}\] By applying (3.5), (3.6) and (3.31), differentiate (3.32) along \(u_{1}\), and combining (3.30) and (3.32), we derive \[K_{1}\Gamma_{21}^{2}+K_{2}\Gamma_{31}^{3}=0, \tag{3.33}\] where \[\begin{split} K_{1}=&-680H\lambda_{2}\lambda_{3} \varepsilon\varepsilon_{1}+6552\varepsilon H^{3}-96H^{2}\lambda_{2}\varepsilon +264H\lambda_{2}\lambda_{3}\varepsilon\\ &+40\lambda_{2}^{2}\lambda_{3}\varepsilon-1404H^{2}\lambda_{2}+4 08H^{2}\lambda_{3}-48H\lambda_{2}^{2}+80H\lambda_{2}\lambda_{3}\\ &+34\lambda_{3}\lambda_{2}^{2}-4\lambda_{2}\lambda_{3}^{2}+9 \lambda_{2}\lambda\varepsilon-162H\lambda;\\ K_{2}=&-680H\lambda_{2}\lambda_{3}\varepsilon\varepsilon _{1}+6552\varepsilon H^{3}-96H^{2}\lambda_{3}\varepsilon+264H\lambda_{2} \lambda_{3}\varepsilon\\ &+40\lambda_{3}^{2}\lambda_{2}\varepsilon-1404H^{2}\lambda_{3}+4 08H^{2}\lambda_{2}-48H\lambda_{3}^{2}+80H\lambda_{2}\lambda_{3}\\ &+34\lambda_{2}\lambda_{3}^{2}-4\lambda_{3}\lambda_{2}^{2}+9 \lambda_{3}\lambda\varepsilon-162H\lambda.\end{split}\] Since (3.30), we can eliminate \(\Gamma_{21}^{2}\) and \(\Gamma_{31}^{3}\) from (3.32) and (3.33), and give \[\begin{split}&-32(\lambda_{2}^{6}\lambda_{3}^{3}+\lambda_{2}^{ 3}\lambda_{3}^{6})+c_{1}(\lambda_{2}^{5}\lambda_{3}^{4}+\lambda_{2}^{4} \lambda_{3}^{5})+c_{2}(\lambda_{2}^{5}\lambda_{3}^{3}+\lambda_{2}^{3}\lambda_ {3}^{5})+c_{3}\lambda_{2}^{4}\lambda_{3}^{4}\\ &+c_{4}(\lambda_{2}^{5}\lambda_{3}^{2}+\lambda_{2}^{2}\lambda_{3} ^{5})+c_{5}(\lambda_{2}^{4}\lambda_{3}^{3}+\lambda_{2}^{3}\lambda_{3}^{4})-921 6\varepsilon H^{3}(\lambda_{2}^{5}\lambda_{3}+\lambda_{2}\lambda_{3}^{5})\\ &+c_{6}(\lambda_{2}^{4}\lambda_{3}^{2}+\lambda_{2}^{2}\lambda_{3} ^{4})+c_{7}\lambda_{2}^{3}\lambda_{3}^{3}+c_{8}(\lambda_{2}^{4}\lambda_{3}+ \lambda_{2}\lambda_{3}^{4})+c_{9}(\lambda_{2}^{3}\lambda_{3}^{2}+\lambda_{2}^{ 2}\lambda_{3}^{3})\\ &+c_{10}(\lambda_{2}^{3}\lambda_{3}+\lambda_{2}\lambda_{3}^{3})+ c_{11}\lambda_{2}^{2}\lambda_{3}^{2}+c_{12}(\lambda_{2}^{3}+\lambda_{3}^{3})+c_{13}( \lambda_{2}^{2}\lambda_{3}+\lambda_{2}\lambda_{3}^{2})\\ &+c_{14}(\lambda_{2}^{2}+\lambda_{3}^{2})+c_{15}\lambda_{2} \lambda_{3}+c_{16}(\lambda_{2}+\lambda_{3})+c_{17}=0,\end{split} \tag{3.34}\] where \(c_{i}\), \(1\leq i\leq 17\) are polynomials about \(H\). Act on (3.34) by \(u_{1}\), we get \[L_{1}\Gamma_{21}^{2}+L_{2}\Gamma_{31}^{3}=0, \tag{3.35}\] where \(L_{1}\) and \(L_{2}\) are polynomials about \(H,\lambda_{2}\) and \(\lambda_{3}\). It follows from (3.33) and (3.35) that \[K_{1}L_{2}-K_{2}L_{1}=0.\] Take into account \(\lambda_{2}+\lambda_{3}=6\varepsilon H\), we can eliminate \(\lambda_{2}\) and \(\lambda_{3}\) from (3.34) and the above equation, and derive a polynomial equation of degree \(78\) for \(H\), which implies \(H\) is a constant, a contradiction. _Subcase (2): \(\Gamma^{4}_{41}\neq 0\) at some point in \(U_{p}\)._ Suppose \(\Gamma^{4}_{41}\neq 0\) at \(q\in U_{p}\), then there exists a neighbourhood \(U_{q}\subset U_{p}\) such that \(\Gamma^{4}_{41}\neq 0\) on \(U_{q}\). For this subcase, we work on \(U_{q}\). 
Let \[\mu=\lambda_{2}\lambda_{3}+\lambda_{2}\lambda_{4}+\lambda_{3}\lambda_{4}\] and \[\nu=\lambda_{2}\lambda_{3}\lambda_{4}.\] By use of (3.5), differentiate both sides of \(\lambda_{2}+\lambda_{3}+\lambda_{4}=6\varepsilon H\), \(\mu=\lambda_{2}\lambda_{3}+\lambda_{2}\lambda_{4}+\lambda_{3}\lambda_{4}\) and \(\nu=\lambda_{2}\lambda_{3}\lambda_{4}\) along \(u_{1}\), and then multiply these results by \((c+\varepsilon\lambda_{3}\lambda_{4})\Gamma^{2}_{21}\), combining (3.30), we obtain \[\begin{cases}6\varepsilon u_{1}(H)=Q(-12\varepsilon Hc^{2}-10H\mu c+3 \varepsilon\nu c-48H^{2}\nu+2\mu\nu),\\ u_{1}(\mu)=Q[3\nu^{2}-2(12H^{2}+\mu)c^{2}-2(\mu^{2}+6\varepsilon H^{2}\mu-3H \nu)c-10\varepsilon H\mu\nu],\\ u_{1}(\nu)=Q[(-2\varepsilon H\mu-3\nu)c^{2}+(-24\varepsilon H^{2}\nu-2 \varepsilon\mu\nu)c-12\varepsilon H\nu^{2}],\end{cases} \tag{3.36}\] where \(Q=\frac{-\varepsilon_{1}}{(c+\varepsilon\lambda_{3}\lambda_{4})\Gamma^{2}_{21}}\). As (3.30) and (3.36), (3.7) can be rewritten as \[\begin{split}&(30\varepsilon Hc^{2}-44Hc^{2}-6\varepsilon\nu c -2c\nu)\mu^{2}+(504H^{3}c^{2}-36Hc^{3}\varepsilon\\ &-9Hc^{2}\lambda\varepsilon-12c\nu H^{2}-15\nu c^{2}\varepsilon-78H \nu^{2})\mu+3024H^{4}\nu c\\ &+504H^{3}\varepsilon c^{3}-108H^{2}\nu\varepsilon c^{2}-54H^{2} \nu\lambda\varepsilon c+1080H^{3}\nu^{2}\\ &-72H\nu^{2}\varepsilon c-9H\nu^{2}\lambda\varepsilon-18Hc^{4}-9H \lambda c^{3}+9\nu^{3}\varepsilon-9\nu c^{3}=0.\end{split} \tag{3.37}\] Act on (3.37) by \(u_{1}\), applying (3.36), it gives \[f_{1}\mu^{3}+f_{2}\mu^{2}+f_{3}\mu+f_{4}=0. \tag{3.38}\] where \[\begin{split} f_{1}=&(828\varepsilon-256)Hc^{3}+(18 0\varepsilon+80)\nu c^{2};\\ f_{2}=& 4(522\varepsilon+747)H\nu^{2}c-1728(6 \varepsilon+5)H^{3}c^{3}-288(11\varepsilon-46)H^{2}\nu c^{2}-156\nu^{3}\\ &+4(531\varepsilon-162)Hc^{4}+18(5\varepsilon+6)H\lambda c^{3}+6 (10\varepsilon+87)\nu c^{3}-18\nu c^{2}\lambda\varepsilon;\\ f_{3}=&-72576H^{5}c^{3}-(22752\varepsilon+8640)H^{3}c^ {4}+1296H^{3}c^{3}\lambda\varepsilon-283392H^{4}\nu c^{2}\\ &+2916H^{2}\nu c^{2}\lambda\varepsilon-3312H^{3}\nu^{2}c+(17928 \varepsilon+3888)H^{2}\nu c^{3}+90H\nu^{2}c\lambda\varepsilon\\ &+(6552\varepsilon+648)H\nu^{2}c^{2}+1152Hc^{5}+306Hc^{4}\lambda +26136H^{2}\nu^{3}\\ &-774\nu^{3}c\varepsilon-18\nu^{3}\lambda\varepsilon+414\nu c^{4 }-45\nu c^{3}\lambda-216\nu^{3}c;\end{split}\] \[f_{4}= 7776H^{4}\nu\lambda\varepsilon c^{2}+11664H^{3}\nu^{2}\lambda \varepsilon c-311040H^{4}\nu^{3}-236736H^{4}\nu\varepsilon c^{3}\] \[+17280H^{3}\nu^{2}\varepsilon c^{2}+108H\lambda\varepsilon c^{5}+166 32H^{2}\nu^{3}\varepsilon c+1728H^{2}\nu^{3}\lambda\varepsilon-27\nu\lambda \varepsilon c^{4}\] \[-54H\nu^{2}\lambda c^{2}+2376H^{2}\nu\lambda c^{3}-1109376H^{5} \nu^{2}c+12096H^{2}\nu c^{4}-27\nu^{3}\lambda c\] \[-435456H^{6}\nu c^{2}+1296H^{3}\lambda c^{4}+2268H\nu^{2}c^{3}-725 76H^{5}\varepsilon c^{4}+216H\varepsilon c^{6}\] \[+108\nu\varepsilon c^{5}-3348H\nu^{4}\varepsilon-12960H^{3}c^{5}- 972\nu^{3}c^{2}.\] By differentiating (3.38) along \(u_{1}\), using (3.36), we have another polynomial equation about \(H\), \(\mu\) and \(\nu\), denoted by \(g(H,\mu,\nu)=0\). Eliminating \(\mu\) from (3.37), (3.38) and \(g(H,\mu,\nu)=0\), we get \[(24336\varepsilon+18928)\nu^{15}+\sum_{i=0}^{14}h_{i}\nu^{i}=0, \tag{3.39}\] and \[32(32436060H^{2}\varepsilon+50162468H^{2}+2184345c\varepsilon+1894023c)\nu^{19 }+\sum_{i=0}^{18}l_{i}\nu^{i}=0, \tag{3.40}\] where \(h_{i}\), with \(0\leq i\leq 14\) and \(l_{j}\), with \(0\leq j\leq 18\) are polynomials about \(H\). 
When \(\varepsilon=1\), (3.39) and (3.40) can be rewritten as \[\begin{cases}(2cH-\nu)(7cH+4\nu)^{2}(c^{3}-6cH\nu-2\nu^{2})^{3}h_{1}(H,\nu)=0, \\ (2cH-\nu)(7cH+4\nu)^{3}(c^{3}-6cH\nu-2\nu^{2})^{3}h_{2}(H,\nu)=0,\end{cases}\] where \[\begin{cases}h_{1}(H,\nu)=338\nu^{6}+\sum_{i=0}^{5}t_{i}\nu^{i},\\ h_{2}(H,\nu)=(254898c+5162408H^{2})\nu^{9}+\sum_{i=0}^{8}s_{i}\nu^{i}.\end{cases}\] Here, \(t_{i}\) and \(t_{j}\) are polynomials about \(H\), with \(0\leq i\leq 5\) and \(0\leq j\leq 8\). If \[(2cH-\nu)(7cH+4\nu)(c^{3}-6cH\nu-2\nu^{2})\neq 0\] at some point \(o\) in \(U_{q}\), we can eliminate \(\nu\) from \(h_{1}(H,\nu)=0\) and \(h_{2}(H,\nu)=0\), and get a polynomial equation of degree \(94\) for \(H\), which implies \(\nabla H=0\) at \(o\), a contradiction. If \[(2cH-\nu)(7cH+4\nu)(c^{3}-6cH\nu-2\nu^{2})\equiv 0\] on \(U_{q}\), then acting on it by \(u_{1}\), by use of (3.36), we have \[\begin{split}& 54c^{6}H\nu-336c^{7}H^{2}+3048c^{5}H^{3}\nu+12 096c^{4}H^{5}\nu+2316c^{4}H^{2}\nu^{2}\\ &+147c^{5}\nu^{2}+27936c^{3}H^{4}\nu^{2}-1080c^{3}H\nu^{3}-1344c^{ 2}H^{3}\nu^{3}-510c^{2}\nu^{4}\\ &-10416cH^{2}\nu^{4}-2304H\nu^{5}+(98c^{4}\nu^{2}-292c^{6}H^{2}+1 008c^{5}H^{4}\\ &+130c^{5}H\nu+4344c^{4}H^{3}\nu+200c^{3}H^{2}\nu^{2}-1532c^{2}H \nu^{3}-340c\nu^{4})\mu=0.\end{split} \tag{3.41}\] Furthermore, we claim that \(7cH+4\nu\equiv 0\) on \(U_{q}\). Because if \(7cH+4\nu\neq 0\) (i.e. \((2cH-\nu)(c^{3}-6cH\nu-2\nu^{2})=0\)) at some point in \(U_{q}\), then as (3.41), we have \[(c+\lambda_{2}\lambda_{3})(c+\lambda_{2}\lambda_{4})(c+\lambda_{3}\lambda_{4}) =\nu^{2}+6cH\nu+c^{2}\mu+c^{3}=0, \tag{3.42}\] a contradiction. We can easily derive a polynomial equation about \(H\) from \(7cH+4\nu=0\) and (3.41). Then, \(H\) is a constant, a contradiction. When \(\varepsilon=-1\), (3.39) and (3.40) reduces to \[\begin{cases}(37cH-2\nu)^{2}L_{1}(H,\nu)=0,\\ (37cH-2\nu)^{3}L_{2}(H,\nu)=0,\end{cases}\] where \[\begin{cases}L_{1}(H,\nu)=-1352\nu^{13}+\sum_{i=0}^{12}r_{i}\nu^{i},\\ L_{2}(H,\nu)=(254898c+5162408H^{2})\nu^{16}+\sum_{i=0}^{15}w_{i}\nu^{i}.\end{cases}\] Here, \(r_{i}\) and \(w_{j}\) are all polynomials about \(H\), with \(0\leq i\leq 12\) and \(0\leq j\leq 8\). If \(37cH-2\nu\neq 0\) at some point \(o\) in \(U_{q}\), we can eliminate \(\nu\) from \(L_{1}(H,\nu)=0\) and \(L_{2}(H,\nu)=0\), and get a polynomial equation of degree \(292\) for \(H\), which implies \(\nabla H=0\) at \(o\), a contradiction. If \(37cH-2\nu=0\) at any point in \(U_{q}\), then acting on it by \(u_{1}\), combining (3.36), we have \[-1488c\nu H^{2}-346H\mu c^{2}-444Hc^{3}+144H\nu^{2}+98\nu\mu c+147\nu c^{2}=0,\] which together with \(37cH-2\nu=0\) and (3.37) implies that \(H\) is a constant in \(U_{q}\), a contradiction. \(\square\) ### The shape operator has form (Ii) **Proposition 3.6**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M_{r}^{4}\) has the form (Ii), then \(M_{r}^{4}\) has constant mean curvature._ **Proof** We suppose \(\lambda_{1},\lambda_{2}\) and \(\lambda_{3}\) are distinct with each other. When there are at most two distinct values in \(\{\lambda_{1},\lambda_{2},\lambda_{3}\}\), we omit the proof, since it is similar but much easier. Denote \(u_{2}=u_{2_{1}}\) and \(u_{3}=u_{3_{1}}\). 
Let \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},2,3\), then compatibility condition implies that \[\Gamma^{1_{2}}_{D1_{1}}=\Gamma^{1_{1}}_{D1_{2}}=\Gamma^{2}_{D2}=\Gamma^{3}_{D3 }=0, \tag{3.43}\] and \[\Gamma^{1_{1}}_{D1_{1}}=-\Gamma^{1_{2}}_{D1_{2}},\Gamma^{3}_{D2}=-\varepsilon_ {2}\varepsilon_{3}\Gamma^{2}_{D3},\Gamma^{1_{1}}_{Di}=-\varepsilon_{1} \varepsilon_{i}\Gamma^{i}_{D1_{2}},\Gamma^{1_{2}}_{Di}=-\varepsilon_{1} \varepsilon_{i}\Gamma^{i}_{D1_{1}}, \tag{3.44}\] with \(i=2,3\) and \(D=1_{1},1_{2},2,3\). Express \(\nabla H\) as \[\nabla H=\varepsilon_{1}u_{1_{2}}(H)u_{1_{1}}+\varepsilon_{1}u_{1_{1}}(H)u_{ 1_{2}}+\varepsilon_{2}u_{2}(H)u_{2}+\varepsilon_{3}u_{3}(H)u_{3}. \tag{3.45}\] Assume that \(H\) is not a constant, it follows from (2.2) that \(\nabla H\) is an eigenvector of \(A\) with corresponding eigenvalue \(-2\varepsilon H\), is light-like or not. _Case 1: \(\nabla H\) is light-like._ In this case, \(\nabla H\) is in the direction \(u_{1_{2}}\) and \(\lambda_{1}=-2\varepsilon H\). As (3.45), we know \[u_{1_{1}}(H)\neq 0,\ u_{1_{2}}(H)=u_{2}(H)=u_{3}(H)=0. \tag{3.46}\] According to symmetry of the connection \(\nabla\), we have \[\Gamma^{1_{1}}_{BC}=\Gamma^{1_{1}}_{CB},\quad B,C=1_{2},2,3. \tag{3.47}\] Investigate the equation (2.1), for \((X,Y,Z)=(u_{1_{1}},u_{i},u_{1_{2}}),(u_{2},u_{3},u_{1_{2}})\), \((u_{1_{2}},u_{i},u_{i})\), with \(i=2,3\), by use of (3.46) and (3.47), we obtain that \[\Gamma^{1_{1}}_{1_{1}2}=\Gamma^{1_{1}}_{1_{1}3}=\Gamma^{1_{1}}_{23}=0, \tag{3.48}\] and \[u_{1_{2}}(\lambda_{i})=(-2\varepsilon H-\lambda_{i})\Gamma^{i}_{i1_{2}},\ i=2,3. \tag{3.49}\] Calculate \(\langle R(u_{1_{2}},u_{i})u_{1_{2}},u_{i}\rangle\), \(i=2,3\) by Gauss equation, combining (3.43), (3.44), (3.47) and (3.48), we get \[u_{1_{2}}(\Gamma^{i}_{i1_{2}})=\Gamma^{1_{2}}_{1_{2}1_{2}}\Gamma^{i}_{i1_{2}} -(\Gamma^{i}_{i1_{2}})^{2},\ i=2,3. \tag{3.50}\] As \(\lambda_{1}=-2\varepsilon H\), the equation \(2\lambda_{1}+\lambda_{2}+\lambda_{3}=4\varepsilon H\) (cf. eq. (2.4)) reduces to \(\lambda_{2}+\lambda_{3}=8\varepsilon H.\) Differentiate this equation along \(u_{1_{2}}\), applying (3.46) and (3.49), we find \[(-2\varepsilon H-\lambda_{2})\Gamma^{2}_{21_{2}}+(-2\varepsilon H-\lambda_{3}) \Gamma^{3}_{31_{2}}=0.\] By use of (3.46), (3.49) and (3.50), act \(u_{1_{2}}\) on the above equation, we have \[(-2\varepsilon H-\lambda_{2})(\Gamma^{2}_{21_{2}})^{2}+(-2\varepsilon H- \lambda_{3})(\Gamma^{3}_{31_{2}})^{2}=0.\] Because of \(\lambda_{2},\lambda_{3}\neq-2\varepsilon H\) and \(\lambda_{2}+\lambda_{3}=8\varepsilon H\), we conclude from the above two equations that \[\Gamma^{2}_{21_{2}}=\Gamma^{3}_{31_{2}} \tag{3.51}\] and \[-12\varepsilon H\Gamma^{2}_{21_{2}}=0,\] which together with \(H\neq 0\) implies \(\Gamma^{2}_{21_{2}}=0\). Combining (3.43), (3.44), (3.47), (3.48) and (3.51), compute \(\langle R(u_{1_{1}},u_{i})u_{1_{2}},u_{i}\rangle\) by using Gauss equation, it gives \[2H\lambda_{i}=c,\ i=2,3,\] which yields that \(\lambda_{2}=\lambda_{3}\), a contradiction. _Case 2: \(\nabla H\) is not light-like._ Observe the form \(({\rm I\!I})\), we know \(\nabla H\) is in the direction \(u_{2}\) or \(u_{3}\) in this case. Without loss of generality, we suppose \(\nabla H\) is in the direction \(u_{2}\), then \(\lambda_{2}=-2\varepsilon H\), which together with \(2\lambda_{1}+\lambda_{2}+\lambda_{3}=4\varepsilon H\) tells us that \(\lambda_{3}=6\varepsilon H-2\lambda_{1}\). 
The equation (3.45) and the symmetry of connection \(\nabla\) give that \[u_{2}(H)\neq 0,\ u_{B}(H)=0,\ B=1_{1},1_{2},3. \tag{3.52}\] and \[\Gamma^{2}_{BC}=\Gamma^{2}_{CB},\quad B,C=1_{1},1_{2},3. \tag{3.53}\] By use of (3.43), (3.44), (3.52) and (3.53), we deduce from (2.1) that \[u_{1_{2}}(\lambda_{1})=\Gamma^{1_{1}}_{1_{2}3}=\Gamma^{1_{1}}_{1_{2}3}=\Gamma ^{2}_{31_{1}}=\Gamma^{2}_{31_{2}}=\Gamma^{3}_{31_{2}}=\Gamma^{2}_{2B}=0, \tag{3.54}\] \[u_{1_{2}}(\lambda_{3})=(\lambda_{1}-\lambda_{3})\Gamma^{3}_{31_{2}},\ \ \Gamma^{1_{1}}_{1_{1}3}=\Gamma^{1_{2}}_{1_{2}3}=\frac{u_{3}(\lambda_{1})}{ \lambda_{3}-\lambda_{1}}, \tag{3.55}\] \[\Gamma^{1_{1}}_{1_{1}2}=\Gamma^{1_{2}}_{1_{2}2}=\frac{u_{2}(\lambda_{1})}{-2 \varepsilon H-\lambda_{1}},\ \ \Gamma^{3}_{32}=\frac{u_{2}(\lambda_{3})}{-2\varepsilon H-\lambda_{3}}. \tag{3.56}\] Let \(e_{1}=\frac{u_{1_{1}}-u_{1_{2}}}{\sqrt{2}}\) and \(e_{2}=\frac{u_{1_{1}}+u_{1_{2}}}{\sqrt{2}}\), then \(\mathfrak{E}=\{e_{1},e_{2},u_{2},u_{3}\}\) is an orthonormal basis of \(T_{x}M^{4}_{r}\). We easily find \[\nabla_{e_{2}}e_{2}(H)-\nabla_{e_{1}}e_{1}(H)=\Gamma^{2}_{1_{1}1_{2}}+\Gamma^ {2}_{1_{2}1_{1}},\] and the equation (2.3) can be reduced to \[u_{2}u_{2}(H)+(2\Gamma^{1_{1}}_{1_{1}2}+\Gamma^{3}_{32})u_{2}(H)-\varepsilon \varepsilon_{2}H\mathrm{tr}A^{2}+\varepsilon_{2}\lambda H=0, \tag{3.57}\] where \[\mathrm{tr}A^{2}=2\lambda_{1}^{2}+\lambda_{3}^{2}+4H^{2}.\] Using Gauss equation for \(\langle R(u_{2},u_{3})u_{3},u_{2}\rangle\) and \(\langle R(u_{2},u_{1_{1}})u_{1_{2}},u_{2}\rangle\), combining (3.43), (3.44), (3.53) and (3.54), it is not difficult to check \[\begin{cases}u_{2}(\Gamma^{3}_{32})+(\Gamma^{3}_{32})^{2}=2H\varepsilon_{2} \lambda_{3}-c\varepsilon_{2},\\ u_{2}(\Gamma^{1_{1}}_{1_{1}2})+(\Gamma^{1_{1}}_{1_{1}2})^{2}=2H\varepsilon_{2} \lambda_{1}-c\varepsilon_{2}.\end{cases} \tag{3.58}\] Let \[f_{k}=(\Gamma^{1_{1}}_{1_{1}2})^{k}+(\Gamma^{1_{2}}_{1_{2}2})^{k}+(\Gamma^{3}_ {32})^{k},\ k=1,2,\cdots,\] then (3.10) also holds. We can write \(f_{k}\), \(k=2,3,4,5\), as the expressions about \(f_{1}\), \(H\), and their derivatives along \(u_{2}\), similarly with (3.8), just need to replace \(u_{1}\) with \(u_{2}\) in (3.8). And then, follow the process of the proof for Lemma 3.4, we have \(u_{B}(f_{1})=0\), with \(B=1_{1},1_{2},3\). Furthermore, we easily get that \(u_{B}u_{2}^{(k)}(f_{1})=u_{B}u_{2}^{(k)}(H)=0\), with \(B=1_{1},1_{2},3\) and \(k=1,2\). Thus, we derive \[u_{B}(f_{k})=0,\ B=1_{1},1_{2},3,\ \ k=1,2,\] which gives \[\begin{cases}2u_{B}(\Gamma^{1_{1}}_{1_{1}2})+u_{B}(\Gamma^{3}_{32})=0,\\ 2\Gamma^{1_{1}}_{1_{1}2}u_{B}(\Gamma^{1_{1}}_{1_{1}2})+\Gamma^{3}_{32}u_{B}( \Gamma^{3}_{32})=0.\end{cases}\] Observe (3.58), we know \(\Gamma^{1_{1}}_{1_{1}2}\neq\Gamma^{3}_{32}\). So, the above equations yields \[u_{B}(\Gamma^{1_{1}}_{1_{1}2})=u_{B}(\Gamma^{3}_{32})=0,\ B=1_{1},1_{2},3.\] Act on the equations in (3.58) by \(u_{B}\), \(B=1_{1},1_{2},3\), using the above equation, we find \[u_{B}(\lambda_{1})=u_{B}(\lambda_{3})=0,\ B=1_{1},1_{2},3,\] which together with (3.55) implies \[\Gamma^{1_{1}}_{1_{1}3}=\Gamma^{3}_{31_{2}}=0. \tag{3.59}\] Applying Gauss equation for \(\langle R(u_{3},u_{1_{1}})u_{3},u_{1_{2}}\rangle\), combining (3.43), (3.44), (3.53), (3.54) and (3.59), we have \[\Gamma^{3}_{32}\Gamma^{1_{1}}_{1_{1}2}=2\varepsilon\varepsilon_{2}\lambda_{1} ^{2}-6\varepsilon_{2}H\lambda_{1}-c\varepsilon_{2}. 
\tag{3.60}\] Substituting \(\Gamma^{3}_{32}=\frac{u_{2}(3\varepsilon H-\lambda_{1})}{-4\varepsilon H+ \lambda_{1}}\) and \(\Gamma^{1_{1}}_{1_{1}2}=\frac{u_{2}(\lambda_{1})}{-2\varepsilon H-\lambda_{1}}\) into (3.58), and then combining (3.57) and (3.60) to eliminate \(u_{2}u_{2}(H)\) and \(u_{2}u_{2}(\lambda_{1})\), we derive \[\big{(}8\Gamma^{1_{1}}_{1_{1}2}+4\Gamma^{3}_{32}\big{)}u_{2}(H)= -168\varepsilon\varepsilon_{2}H^{3}+202\varepsilon_{2}H^{2} \lambda_{1}-64\varepsilon\varepsilon_{2}H\lambda_{1}^{2}+4\varepsilon_{2} \lambda_{1}^{3} \tag{3.61}\] \[+26c\varepsilon_{2}H-2c\varepsilon\varepsilon_{2}\lambda_{1}+3 \varepsilon_{2}\lambda H.\] Differentiate (3.61) with \(u_{2}\), applying (3.57), (3.58) and (3.60), we obtain \[f_{1}(H,\lambda_{1})\Gamma^{1_{1}}_{1_{1}2}+g_{1}(H,\lambda_{1})\Gamma^{3}_{3 2}=h_{1}(H,\lambda_{1})u_{2}(H), \tag{3.62}\] where \[f_{1}(H,\lambda_{1}) =1228\varepsilon H^{3}+136\varepsilon H\lambda_{1}^{2}-852H^{2} \lambda_{1}+4c\varepsilon\lambda_{1}-82cH-17\lambda H,\] \[g_{1}(H,\lambda_{1}) =496\varepsilon H^{3}+152\varepsilon H\lambda_{1}^{2}-500H^{2} \lambda_{1}-8\lambda_{1}^{3}+4c\varepsilon\lambda_{1}-52cH-10\lambda H,\] \[h_{1}(H,\lambda_{1}) =600H^{2}\varepsilon+88\varepsilon\lambda_{1}^{2}-500H\lambda_{1 }-50c-3\lambda.\] By differentiating \(2\lambda_{1}+\lambda_{3}=6\varepsilon H\) along \(u_{2}\), using (3.56), we know \[3\varepsilon u_{2}(H)=-(2\varepsilon H+\lambda_{1})\Gamma^{1_{1}}_{1_{1}2}+(- 4\varepsilon H+\lambda_{1})\Gamma^{3}_{32}. \tag{3.63}\] Putting (3.63) into (3.62), then \[f_{2}(H,\lambda_{1})\Gamma^{1_{1}}_{1_{1}2}+g_{2}(H,\lambda_{1})\Gamma^{3}_{3 2}=0, \tag{3.64}\] where \[f_{2}(H,\lambda_{1})= 2484H^{3}-2156H^{2}\varepsilon\lambda_{1}-88\varepsilon\lambda_ {1}^{3}+732H\lambda_{1}^{2}\] \[-(146c+45\lambda)H\varepsilon+(62c+3\lambda)\lambda_{1},\] \[g_{2}(H,\lambda_{1})= 16(93-450\varepsilon)H^{3}-300(5\varepsilon-26)\lambda_{1}H^{2 }+12\lambda_{1}c\] \[-12(213\varepsilon-38)H\lambda_{1}^{2}-6(26c\varepsilon+5 \varepsilon\lambda-100c\] \[-6\lambda)H-24(\varepsilon-11)\lambda_{1}^{3}-3(50c+3\lambda) \lambda_{1}\varepsilon.\] Multiply both sides of (3.63) by \(8\Gamma^{1_{1}}_{1_{1}2}+4\Gamma^{3}_{32}\), applying (3.60) and (3.61), we have \[(16\varepsilon H+8\lambda_{1})(\Gamma^{1_{1}}_{1_{1}2})^{2}+(16\varepsilon H-4 \lambda_{1})(\Gamma^{3}_{32})^{2}=h_{3}(H,\lambda_{1}), \tag{3.65}\] where \[h_{3}(H,\lambda_{1})= -366\varepsilon\varepsilon_{2}H^{2}\lambda_{1}-4\varepsilon \varepsilon_{2}\lambda_{1}^{3}+504\varepsilon_{2}H^{3}+88\varepsilon_{2}H \lambda_{1}^{2}\] \[-38\varepsilon c\varepsilon_{2}H-9\varepsilon\varepsilon_{2} \lambda H+2c\varepsilon_{2}\lambda_{1}.\] By use of (3.60), (3.64) and (3.65), we can get a polynomial equation about \(H\) and \(\lambda_{1}\) \[(109824-1053952\varepsilon)\lambda_{1}^{9}+(21186688H-2300928H\varepsilon) \lambda_{1}^{8}+\sum_{i=0}^{7}r_{i}\lambda_{1}^{i}=0,\] where \(r_{i}\), \(0\leq i\leq 7\) are polynomials about \(H\). Acting on the above equation by \(u_{3}\) twice, and using (3.58), (3.60), (3.63) and (3.64), we obtain another algebraic equation of \(H\) and \(\lambda_{1}\) \[(-13895028523008\varepsilon +110378156212224)\lambda_{1}^{17}-(4577376380387328\varepsilon\] \[-672576751853568)H\lambda_{1}^{16}+\sum_{j=0}^{15}s_{j}\lambda_{1 }^{j}=0,\] where \(s_{j}\), \(0\leq j\leq 15\) are polynomials about \(H\). Eliminate \(\lambda_{1}\) from the above two equations, we derive an algebraic equation for \(H\) with constant coefficients. So, \(H\) must be a constant, a contradiction. 
\(\square\) ### The shape operator has the form (Iii) **Proposition 3.7**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M_{r}^{4}\) has the form (Iii), then \(M_{r}^{4}\) has constant mean curvature._ **Proof** Assume that \(H\) is not a constant, then \(\nabla H\) is an eigenvector of \(A\) with corresponding eigenvalue \(-2\varepsilon H\). Let \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},2_{1},2_{2}\), then \[\Gamma^{j_{b}}_{Di_{a}}=-\varepsilon_{i}\varepsilon_{j}\Gamma^{i_{3-a}}_{Di_{ 3-b}},\ i,j,a,b=1,2,\] with \(D=1_{1},1_{2},2_{1},2_{2}\). Observe the form (1I\(\!\)I) of \(A\), we know eigenvector \(\nabla H\) is in the direction \(u_{1_{2}}\) or \(u_{2_{2}}\). Without loss of generality, we suppose \(\nabla H\) is in the direction \(u_{1_{2}}\), then \(\lambda_{1}=-2\varepsilon H\) and \(\lambda_{2}=4\varepsilon H\). It follows that \[u_{1_{1}}(H)\neq 0,\ u_{1_{2}}(H)=u_{2_{1}}(H)=u_{2_{2}}(H)=0,\] and \[\Gamma^{1_{1}}_{BC}=\Gamma^{1_{1}}_{CB},\quad B,C\neq 1_{1}.\] We deduce from Codazzi equation that \[\Gamma^{1_{1}}_{1_{2}2}=\Gamma^{1_{1}}_{2_{2}2_{2}}=\Gamma^{2_{1}}_{2_{1}1_{2 }}=0.\] Using Gauss equation for \(\langle R(u_{1_{1}},u_{2_{1}})u_{1_{2}},u_{2_{2}}\rangle\), combining the above, we get \[c-8\varepsilon H^{2}=0,\] which tells us \(H\) is a constant, a contradiction. \(\square\) ### The shape operator has the form (IV) **Proposition 3.8**: _Let \(M^{4}_{r}\) be a nondegenerate hypersurface of \(N^{5}_{s}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M^{4}_{r}\) has the form (IV), then \(M^{4}_{r}\) has constant mean curvature._ **Proof** Denote \(u_{2}=u_{2_{1}}\), and let \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},1_{3},2\). We easily find \[\Gamma^{1_{3}}_{D1_{1}}=\Gamma^{1_{1}}_{D1_{3}}=\Gamma^{1_{2}}_{D1_{2}}= \Gamma^{2}_{D2}=0, \tag{3.66}\] and \[\Gamma^{1_{b}}_{D1_{a}}=-\Gamma^{1_{4-a}}_{D1_{4-b}},\ \Gamma^{2}_{D1_{a}}=- \varepsilon_{1}\varepsilon_{2}\Gamma^{1_{4-a}}_{D2}, \tag{3.67}\] where \(a,b=1,2,3\). Assume that \(H\) is not a constant, then \(\nabla H\) is an eigenvector of \(A\), is light-like or not. _Case 1: \(\nabla H\) is light-like._ In this case, \(\nabla H\) is in the direction \(u_{1_{3}}\) and \(\lambda_{1}=-2\varepsilon H\), \(\lambda_{2}=10\varepsilon H\). It follows that \[u_{1_{1}}(H)\neq 0,\ u_{1_{2}}(H)=u_{1_{3}}(H)=u_{2}(H)=0.\] and \[\Gamma^{1_{1}}_{BC}=\Gamma^{1_{1}}_{CB},\quad B,C\neq 1_{1}.\] Applying Codazzi equation, we conclude \[\Gamma^{1_{1}}_{1_{1}2}=\Gamma^{1_{1}}_{1_{2}2}=\Gamma^{2}_{21_{2}}=\Gamma^{2}_{2 _{1_{3}}}=0.\] Calculate \(\langle R(u_{1_{1}},u_{2})u_{1_{3}},u_{2}\rangle\) by Gauss equation, combining the above equations, we get \[c\varepsilon_{1}-20\varepsilon\varepsilon_{1}H^{2}=0,\] a contradiction. _Case 2: \(\nabla H\) is not light-like._ We know \(\nabla H\) is in the direction \(u_{2}\), \(\lambda_{2}=-2\varepsilon H\) and \(\lambda_{1}=2\varepsilon H\). So, \[u_{2}(H)\neq 0,\ u_{B}(H)=0,B\neq 2. \tag{3.68}\] and \[\Gamma^{2}_{BC}=\Gamma^{2}_{CB},\ \ \ B,C\neq 2. 
\tag{3.69}\] We obtain from Codazzi equation that \[\Gamma^{2}_{21_{1}}=\Gamma^{2}_{21_{2}}=\Gamma^{2}_{21_{3}}=\Gamma^{2}_{1_{3} 1_{2}}=\Gamma^{2}_{1_{3}1_{3}}=\Gamma^{1_{1}}_{21_{2}}=\Gamma^{1_{2}}_{21_{3}} =0,\] \[W:=\Gamma^{1_{1}}_{1_{1}2}=\Gamma^{1_{2}}_{1_{2}2}=\Gamma^{1_{3}}_{1_{3}2},\ \ u_{2}(H)=-2HW,\] Using Gauss equation for \(\langle R(u_{2},u_{1_{3}})u_{2},u_{1_{1}}\rangle\), combining the above equations, it gives \[u_{2}(W)+W^{2}=4\varepsilon_{1}H^{2}-c\varepsilon_{1}. \tag{3.70}\] Let \(e_{1}=\frac{u_{1_{1}}-u_{1_{3}}}{\sqrt{2}}\), \(e_{2}=u_{1_{2}}\), and \(e_{3}=\frac{u_{1_{1}}+u_{1_{3}}}{\sqrt{2}}\), then \(\mathfrak{E}=\{e_{1},e_{2},e_{3},u_{2}\}\) is an orthonormal basis of \(T_{p}M^{4}_{r}\), and (2.3) can be rewriten as \[2Wu_{2}(H)+u_{2}u_{2}(H)-16\varepsilon_{1}\varepsilon H^{3}+\lambda\varepsilon _{1}H=0.\] Put \(u_{2}(H)=-2HW\) into the above equation, combining (3.70), we have \[2HW^{2}-8(1+2\varepsilon)\varepsilon_{1}H^{3}+2c\varepsilon_{1}H+\lambda \varepsilon_{1}H=0. \tag{3.71}\] Differentiate (3.71) along \(u_{2}\), using \(u_{2}(H)=-2HW\) and (3.70), we obtain \[-2W[4HW^{2}-16(2+3\varepsilon)\varepsilon_{1}H^{3}+4c\varepsilon_{1}H+ \lambda\varepsilon_{1}H]=0,\] which together with (3.71) implies \[2W[16(1+\varepsilon)\varepsilon_{1}H^{3}+\lambda\varepsilon_{1}H]=0.\] Since \(u_{2}(H)\neq 0\), we know \(W\neq 0\). The above equation implies \(H\) is a constant, a contradiction. ### The shape operator has the form (V) **Proposition 3.9**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M_{r}^{4}\) has the form (V), then \(M_{r}^{4}\) has constant mean curvature._ For the form (V), the equations (3.74) and (3.75) deduced from Codazzi equation and Gauss equation are very complicated, compared with the equations for the forms (I), (I\(\!\!\)I), (I\(\!\!\)I) and (I\(\!\!\)V). However, by constructing creatively the terms \(b_{k}\), \(c_{k}\) (see (3.76)), and letting \(f_{k}=(\Gamma_{21}^{2})^{k}+2b_{k}\), with \(k=1,\cdots,5\), we provide an opportunity to complete the proof similarly with the form (I). **Proof** Denote \(u_{1}=u_{1_{1}}\), \(u_{2}=u_{2_{1}}\), \(u_{\bar{3}}=u_{\bar{3}_{1}}\) and \(u_{\bar{3}}=u_{\bar{3}_{1}}\). Let \(\nabla_{u_{B}}u_{C}=\Gamma_{BC}^{D}u_{D}\), \(B,C=1,2,\bar{3},\tilde{3}\), we have \[\Gamma_{BD}^{D}=0,\ \ \Gamma_{B2}^{1}=-\varepsilon_{1}\varepsilon_{2}\Gamma_{B1 }^{2},\ \ \Gamma_{Bi}^{\bar{3}}=-\varepsilon_{i}\Gamma_{B\bar{3}}^{i},\ \ \Gamma_{Bi}^{\bar{3}}=\varepsilon_{i}\Gamma_{B\bar{3}}^{i},\ \ \Gamma_{B\bar{3}}^{\bar{3}}=\Gamma_{B\bar{3}}^{\bar{3}},\] with \(B,D=1,2,\bar{3},\tilde{3}\) and \(i=1,2\). Assume that \(H\) is not a constant, then there exists a neighbourhood \(U_{p}\) of \(p\) such that \(H\neq 0\) and \(\nabla H\neq 0\). It follows from (2.2) that \(\nabla H\) is an eigenvector of \(A\), and is in the direction \(u_{1}\) or \(u_{2}\). Without loss of generality, we suppose \(\nabla H\) is in the direction \(u_{1}\), then \(\lambda_{1}=-2\varepsilon H\), \[u_{1}(H)\neq 0,\ u_{2}(H)=u_{\bar{3}}(H)=u_{\bar{3}}(H)=0.\] and \[\Gamma_{BD}^{1}=\Gamma_{DB}^{1}.\] The equation (2.3) gives \[u_{1}u_{1}(H)+(\Gamma_{21}^{2}+\Gamma_{\bar{3}1}^{\bar{3}}+\Gamma_{\bar{3}1}^ {\bar{3}})u_{1}(H)-\varepsilon\varepsilon_{1}H{\rm tr}A^{2}+\varepsilon_{1} \lambda H=0. 
\tag{3.72}\] We get from (2.1), with \((X,Y,Z)=(u_{1},u_{B},u_{1}),(u_{B},u_{1},u_{B}),(u_{B},u_{D},u_{1})\), \(B,D=2,\bar{3},\tilde{3}\) that \[\Gamma_{1B}^{1}=\Gamma_{2\bar{3}}^{1}=\Gamma_{2\bar{3}}^{1}=0,\ \ \Gamma_{\bar{3}1}^{\bar{3}}=\Gamma_{\bar{3}1}^{\bar{3}},\ \ {\rm with}\ B=2,\bar{3},\tilde{3}, \tag{3.73}\] and \[\begin{cases}u_{1}(\lambda_{2})=(\lambda_{1}-\lambda_{2})\Gamma_{21}^{2},\\ u_{1}(\gamma_{3})=(\lambda_{1}-\gamma_{3})\Gamma_{\bar{3}1}^{\bar{3}}+\tau_{3} \Gamma_{\bar{3}1}^{\bar{3}},\\ u_{1}(\tau_{3})=(\lambda_{1}-\gamma_{3})\Gamma_{\bar{3}1}^{\bar{3}}-\tau_{3} \Gamma_{\bar{3}1}^{\bar{3}}.\end{cases} \tag{3.74}\] If \(\lambda_{2}=\lambda_{1}\), then the above equations imply \(u_{1}(H)=0\), which contradicts with \(u_{1}(H)\neq 0\). So, \(\lambda_{2}\neq\lambda_{1}\). Using Gauss equation for \(\langle R(u_{1},u_{B})u_{B},u_{1}\rangle\) and \(\langle R(u_{1},u_{\bar{3}})u_{\bar{3}},u_{1}\rangle\), with \(B=2,\bar{3},\bar{\bar{3}}\), we have \[\begin{cases}u_{1}(\Gamma^{2}_{21})=-(\Gamma^{2}_{21})^{2}+(2H\lambda_{2}-c) \varepsilon_{1},\\ u_{1}(\Gamma^{\bar{3}}_{\bar{3}1})=-(\Gamma^{\bar{3}}_{\bar{3}1})^{2}+( \Gamma^{\bar{3}}_{\bar{3}1})^{2}+(2H\gamma_{3}-c)\varepsilon_{1},\\ u_{1}(\Gamma^{\bar{3}}_{\bar{3}1})=-2\Gamma^{\bar{3}}_{\bar{3}1}\Gamma^{\bar {3}}_{\bar{3}1}+2H\tau_{3}\varepsilon_{1}.\end{cases} \tag{3.75}\] **Construct the terms \(b_{k}\) and \(c_{k}\), with \(k=1,\cdots,5\) as follows** \[\begin{cases}b_{1}=\Gamma^{\bar{3}}_{\bar{3}1},\\ c_{1}=\Gamma^{\bar{3}}_{\bar{3}1},\\ b_{l}=b_{1}b_{l-1}-c_{1}c_{l-1},\ l=2,3,4,5\\ c_{l}=b_{1}c_{l-1}+c_{1}b_{l-1},\ l=2,3,4,5.\end{cases} \tag{3.76}\] Set \(f_{k}=(\Gamma^{2}_{21})^{k}+2b_{k}\), with \(k=1,\cdots,5\), we will find (3.8) and \(u_{B}(f_{1})=0\) also hold, with \(B=2,\bar{3},\tilde{3}\). And then, we can conclude that \(u_{B}(\lambda_{2})=u_{B}(\gamma_{3})=u_{B}(\tau_{3})=0\), \(B=2,\bar{3},\tilde{3}\), similarly with Lemma 3.5, which will play an important role in the following process of the proof. **Lemma 3.10** (3.8) _holds._ **Proof** As \(\lambda_{1}=-2\varepsilon H\), the equation (2.4) gives \(\lambda_{2}+2\gamma_{3}=6\varepsilon H\). Differentiate \(f_{k}=(\Gamma^{2}_{21})^{k}+2b_{k}\) along \(u_{1}\), with \(k=1,2,3,4\), using (3.75), we have \[f_{k+1}=-\frac{1}{k}u_{1}(f_{k})-c\varepsilon_{1}f_{k-1}+2\varepsilon_{1}HQ_ {k-1}, \tag{3.77}\] where \(f_{0}=1\) and \(Q_{k-1}=\lambda_{2}(\Gamma^{2}_{21})^{k-1}+2\gamma_{3}b_{k-1}-2\tau_{3}c_{k-1}\). Here, \(b_{0}=1\) and \(c_{0}=0\). 
Take the \(k\)-th derivatives of \(\lambda_{2}+2\gamma_{3}=6\varepsilon H\) along \(u_{1}\), with \(k=1,2,3\), combining (3.74) and (3.75), we obain \[Q_{1}=-6\varepsilon u_{1}(H)-2\varepsilon Hf_{1}, \tag{3.78}\] \[Q_{2}=3\varepsilon u_{1}^{(2)}(H)+\varepsilon Hu_{1}(f_{1})+\varepsilon f_{1}u _{1}(H)-\varepsilon Hf_{2}-4\varepsilon_{1}H^{3}-3\varepsilon\varepsilon_{1} Hc+\varepsilon_{1}H{\rm tr}A^{2},\] and \[3Q_{3}= \varepsilon Hu_{1}(f_{2})-3\varepsilon u_{1}^{(3)}(H)+\varepsilon f _{2}u_{1}(H)-2\varepsilon u_{1}(H)u_{1}(f_{1})\] \[+12\varepsilon_{1}H^{2}u_{1}(H)-\varepsilon_{1}u_{1}(H){\rm tr}A^ {2}-\varepsilon f_{1}u_{1}^{(2)}(H)+15\varepsilon\varepsilon_{1}cu_{1}(H)\] \[-\varepsilon Hu_{1}^{(2)}(f_{1})+4\varepsilon\varepsilon_{1}cHf_{1 }-2\varepsilon Hf_{3}-\varepsilon_{1}Hu_{1}({\rm tr}A^{2})\] \[+4\varepsilon_{1}H(\lambda_{2}^{2}\Gamma^{2}_{21}+2(\gamma_{3}^{ 2}-\tau_{3}^{2})\Gamma^{\bar{3}}_{\bar{3}1}-4\gamma_{3}\tau_{3}\Gamma^{\bar{3 }}_{\bar{3}1}).\] Multiply the first, second and third equation in (3.74) by \(\lambda_{2}\), \(\gamma_{3}\) and \(\tau_{3}\), respectively, and then we get from these results that \[\lambda_{2}^{2}\Gamma_{21}^{2}+2(\gamma_{3}^{2}-\tau_{3}^{2})\Gamma_{\tilde{3} 1}^{\bar{3}}-4\gamma_{3}\tau_{3}\Gamma_{\tilde{3}1}^{\bar{3}}=-2\varepsilon HQ _{1}-\frac{1}{2}u_{1}(\mbox{tr}A^{2})+4Hu_{1}(H).\] Applying (3.72) and the above equations, we deduce from (3.77) that (3.8) holds. \(\Box\) By calculation, we easily find (3.10) also holds. Then, follow the process of the proof for Lemma 3.4 (replacing the index \(i\in\{2,3,4\}\) with \(B\in\{2,\bar{3},\tilde{3}\}\)), we conclude that \(u_{B}(f_{1})=0\), with \(B=2,\bar{3},\tilde{3}\). **Lemma 3.11**: _We have_ \[u_{B}(\Gamma_{D1}^{D})=u_{B}(\Gamma_{\tilde{3}1}^{\bar{3}})=u_{B}(\lambda_{2 })=u_{B}(\gamma_{3})=u_{B}(\tau_{3})=0,\] _with \(B,D=2,\bar{3},\tilde{3}\)._ **Proof** Since \(u_{B}(f_{1})=u_{B}(H)=0\), \(B=2,\bar{3},\tilde{3}\), we conclude \[u_{B}u_{1}^{(k)}(f_{1})=u_{B}u_{1}^{(k)}(H)=0,\ B=2,\bar{3},\tilde{3},\ k=1,2,3.\] Observe the expressions of \(f_{k}\), \(k=2,3\) in (3.8), combining the above equation, we have \[u_{B}(f_{k})=0,\ B=2,\bar{3},\tilde{3},\ k=1,2,3,\] which gives \[(\Gamma_{21}^{2})^{k-1}u_{B}(\Gamma_{21}^{2})+2b_{k-1}u_{B}(\Gamma_{\tilde{3} 1}^{\bar{3}})-2c_{k-1}u_{B}(\Gamma_{\tilde{3}1}^{\bar{3}})=0,k=1,2,3.\] Here, \(b_{0}=1\) and \(c_{0}=0\). The coefficient determinant of this system \[\left|\begin{array}{ccc}1&2&0\\ \Gamma_{21}^{2}&2\Gamma_{\tilde{3}1}^{\bar{3}}&-2\Gamma_{\tilde{3}1}^{\bar{3} }\\ (\Gamma_{21}^{2})^{2}&2((\Gamma_{\tilde{3}1}^{\bar{3}})^{2}-(\Gamma_{\tilde{3} 1}^{\bar{3}})^{2})&-4\Gamma_{\tilde{3}1}^{\bar{3}}\Gamma_{\tilde{3}1}^{\bar{3 }}\end{array}\right|=-4\Gamma_{\tilde{3}1}^{\bar{3}}[(\Gamma_{21}^{2}-\Gamma_ {\tilde{3}1}^{\bar{3}})^{2}+(\Gamma_{\tilde{3}1}^{\bar{3}})^{2}].\] As \(\tau_{3}\neq 0\), the third equation of (3.75) implies that \(\Gamma_{\tilde{3}1}^{\bar{3}}\neq 0\). So, the above coefficient determinant is not equal to zero, and then \[u_{B}(\Gamma_{21}^{2})=u_{B}(\Gamma_{\tilde{3}1}^{\bar{3}})=u_{B}(\Gamma_{ \tilde{3}1}^{\bar{3}})=0,\ B=2,\bar{3},\tilde{3}.\] Differentiate the equations in (3.75) along \(u_{B}\), \(B=2,\bar{3},\tilde{3}\), combining the above equation, we get \[u_{B}(\lambda_{2})=u_{B}(\gamma_{3})=u_{B}(\tau_{3})=0,\ B=2,\bar{3},\tilde{3}.\] Now, we continue the proof of Proposition 3.9. 
\(\square\) Applying Lemma 3.11, we obtain from the equation (2.1) for \((X,Y,Z)=(u_{B},u_{D},u_{B}),(u_{2},u_{\bar{3}},u_{\bar{3}}),(u_{2},u_{\bar{3}},u_ {\bar{3}})\), with \(B,D=2,\bar{3},\bar{3}\) that \[\begin{cases}(\gamma_{3}-\lambda_{2})\Gamma^{2}_{2\bar{3}}-\tau_{3}\Gamma^{2}_ {2\bar{3}}=0,\\ (\gamma_{3}-\lambda_{2})\Gamma^{2}_{2\bar{3}}+\tau_{3}\Gamma^{2}_{2\bar{3}}=0,\\ -2\tau_{3}\Gamma^{\bar{3}}_{2\bar{3}}=(\lambda_{2}-\gamma_{3})\Gamma^{\bar{3}}_ {32}-\tau_{3}\Gamma^{\bar{3}}_{32},\\ 2\tau_{3}\Gamma^{\bar{3}}_{2\bar{3}}=(\lambda_{2}-\gamma_{3})\Gamma^{\bar{3}}_ {32}+\tau_{3}\Gamma^{\bar{3}}_{\bar{3}2},\\ (\lambda_{2}-\gamma_{3})\Gamma^{\bar{3}}_{\bar{3}2}+\tau_{3}\Gamma^{\bar{3}}_ {\bar{3}2}=0,\\ (\lambda_{2}-\gamma_{3})\Gamma^{\bar{3}}_{\bar{3}2}-\tau_{3}\Gamma^{\bar{3}}_ {\bar{3}2}=0,\\ \Gamma^{\bar{3}}_{\bar{3}3}=\Gamma^{\bar{3}}_{\bar{3}\bar{3}}=0,\end{cases} \tag{3.79}\] which gives that \[\Gamma^{2}_{2\bar{3}}=\Gamma^{2}_{2\bar{3}}=\Gamma^{\bar{3}}_{\bar{3}\bar{3}} =\Gamma^{\bar{3}}_{\bar{3}\bar{3}}=\Gamma^{\bar{3}}_{\bar{3}2}+\Gamma^{\bar{3} }_{\bar{3}2}=\Gamma^{\bar{3}}_{\bar{3}2}-\Gamma^{\bar{3}}_{\bar{3}2}=0,\] and \[(\Gamma^{\bar{3}}_{\bar{3}2})^{2}+(\Gamma^{\bar{3}}_{\bar{3}\bar{2}})^{2}-2 \Gamma^{\bar{3}}_{\bar{3}2}\Gamma^{\bar{3}}_{2\bar{3}}=0. \tag{3.80}\] Calculate \(\langle R(u_{2},u_{B})u_{2},u_{B}\rangle\), \(\langle R(u_{2},u_{B})u_{2},u_{D}\rangle\), \(\langle R(u_{B},u_{D})u_{B},u_{D}\rangle\) and \(\langle R(u_{2},u_{B})u_{D},u_{1}\rangle\) by Gauss equation, with \(B,D=\bar{3},\bar{3}\) and \(B\neq D\), combining (3.73) and the above two equations, we have \[\Gamma^{\bar{3}}_{\bar{3}2}(\Gamma^{2}_{21}-\Gamma^{\bar{3}}_{\bar{3}1})+ \Gamma^{\bar{3}}_{\bar{3}2}\Gamma^{\bar{3}}_{\bar{3}1}=0, \tag{3.81}\] and \[\begin{cases}-\varepsilon_{1}\Gamma^{2}_{21}\Gamma^{\bar{3}}_{\bar{3}1}+2 \varepsilon_{2}\Gamma^{\bar{3}}_{\bar{3}2}\Gamma^{\bar{3}}_{2\bar{3}}= \varepsilon\lambda_{2}\tau_{3},\\ \varepsilon_{1}\Gamma^{2}_{21}\Gamma^{\bar{3}}_{\bar{3}1}+2\varepsilon_{2} \Gamma^{\bar{3}}_{\bar{3}2}\Gamma^{\bar{3}}_{2\bar{3}}=-c-\varepsilon\lambda_ {2}\gamma_{3},\\ \varepsilon_{1}[(\Gamma^{\bar{3}}_{\bar{3}1})^{2}+(\Gamma^{\bar{3}}_{\bar{3}1 })^{2}]-4\varepsilon_{2}\Gamma^{\bar{3}}_{\bar{3}2}\Gamma^{\bar{3}}_{2\bar{3} }=-c-\varepsilon(\gamma_{3}^{2}+\tau_{3}^{2}).\end{cases} \tag{3.82}\] The second and third equations of (3.82) implies \[\varepsilon_{1}[(\Gamma^{\bar{3}}_{\bar{3}1})^{2}+(\Gamma^{\bar{3}}_{\bar{3}1}) ^{2}+2\Gamma^{2}_{21}\Gamma^{\bar{3}}_{\bar{3}1}]=-3c-2\varepsilon\lambda_{2} \gamma_{3}-\varepsilon(\gamma_{3}^{2}+\tau_{3}^{2}). \tag{3.83}\] _Case 1: \(\Gamma^{\bar{3}}_{\bar{3}2}\neq 0\) at some point in \(U_{p}\)._ We suppose \(\Gamma^{\bar{3}}_{\bar{3}2}\neq 0\) on a neighbourhood \(U_{q}\subset U_{p}\), and work on \(U_{q}\). From the fifth equation in (3.79) and (3.81), we get \[\frac{\Gamma^{2}_{21}-\Gamma^{\bar{3}}_{\bar{3}1}}{\Gamma^{\bar{3}}_{\bar{3}1}} =\frac{\lambda_{2}-\gamma_{3}}{\tau_{3}}.\] Considering Lemma 3.11, we conclude from the above equation that there exists two smooth functions \(\mu\) and \(\nu\), with \(u_{B}(\mu)=u_{B}(\nu)=0\), \(B=2,\bar{3},\tilde{3}\), such that \[\Gamma^{2}_{21}=\mu\lambda_{2}+\nu,\ \Gamma^{\bar{3}}_{\bar{3}1}=\mu\gamma_{3}+ \nu,\ \Gamma^{\bar{3}}_{\bar{3}1}=\mu\tau_{3}. \tag{3.84}\] The equation (3.22) also holds. By use of (3.74) and (3.75), differentiating the equations in (3.84) along \(u_{1}\), it gives (3.23). 
Substitute (3.84) into (3.83), we obtain \[(\varepsilon_{1}\mu^{2}+\varepsilon)(\gamma_{3}^{2}+2\lambda_{2}\gamma_{3}+ \tau_{3}^{2})+2\varepsilon_{1}\mu\nu(\lambda_{2}+2\gamma_{3})+3\varepsilon_{ 1}\nu^{2}+3c=0. \tag{3.85}\] Since \[\gamma_{3}^{2}+2\lambda_{2}\gamma_{3}+\tau_{3}^{2} =\frac{1}{2}[(\lambda_{2}+2\gamma_{3})^{2}-(\lambda_{2}^{2}+2 \gamma_{3}^{2}-2\tau_{3}^{2})]\] \[=20H^{2}-\frac{1}{2}\mbox{tr}A^{2},\] the equation (3.85) can be rewritten as (3.24). Put (3.84) into (3.78), combining (3.22) and (3.24), we have (3.25). Follow the process for the case \(\Gamma^{4}_{23}\neq 0\) at some point in section 3.1, just correcting (3.7) with (3.72), one derive a contradiction. _Case 2: \(\Gamma^{\bar{3}}_{\bar{3}2}\equiv 0\) on \(U_{p}\)._ It follows from (3.80) that \(\Gamma^{\bar{3}}_{\bar{3}2}=0\). And then, (3.82) can be simplified into \[\begin{cases}\varepsilon_{1}\Gamma^{2}_{21}\Gamma^{\bar{3}}_{\bar{3}1}=- \varepsilon\lambda_{2}\tau_{3},\\ \varepsilon_{1}\Gamma^{2}_{21}\Gamma^{\bar{3}}_{\bar{3}1}=-c-\varepsilon \lambda_{2}\gamma_{3},\\ \varepsilon_{1}[(\Gamma^{\bar{3}}_{\bar{3}1})^{2}+(\Gamma^{\bar{3}}_{\bar{3} 1})^{2}]=-c-\varepsilon(\gamma_{3}^{2}+\tau_{3}^{2}).\end{cases} \tag{3.86}\] _Subcase (1): \(\Gamma^{2}_{21}\equiv 0\) on \(U_{p}\)._ The first and second equations of (3.86) tells us that \(\lambda_{2}=0\) and \(c+\varepsilon\lambda_{2}\gamma_{3}=0\). If \(c\neq 0\), then \(c+\varepsilon\lambda_{2}\gamma_{3}\neq 0\), a contradiction. In the following, we suppose \(c=0\). Differentiate both sides of \(\gamma_{3}=3\varepsilon H\) along \(u_{1}\), using (3.74), we get \[3\varepsilon u_{1}(H)=(-2\varepsilon H-\gamma_{3})\Gamma^{\bar{3}}_{\bar{3}1} +\tau_{3}\Gamma^{\bar{3}}_{\bar{3}1}. \tag{3.87}\] Applying (3.86) and (3.87), we can rewrite (3.72) as \[4[-5\varepsilon H((\Gamma^{\bar{3}}_{\bar{3}1})^{2}-(\Gamma^{\bar{3}}_{\bar{3}1}) ^{2})+2\tau_{3}\Gamma^{\bar{3}}_{\bar{3}1}\Gamma^{\bar{3}}_{\bar{3}1}]-34 \varepsilon_{1}H\tau_{3}^{2}+162\varepsilon_{1}H^{3}-9\varepsilon\varepsilon_{ 1}\lambda H=0. \tag{3.88}\] By differentiating (3.88) along \(u_{1}\), using (3.74) and (3.75), and combining (3.86) and (3.88), we derive \[K_{1}\Gamma^{\bar{3}}_{\bar{3}1}+K_{2}\Gamma^{\bar{3}}_{\bar{3}1}=0, \tag{3.89}\] where \[\begin{cases}K_{1}=1710H^{3}-78H\tau_{3}^{2}-135\varepsilon\lambda H,\\ K_{2}=1494H^{2}\tau_{3}\varepsilon-78\tau_{3}^{3}\varepsilon-9\tau_{3}\lambda. \end{cases}\] As (3.86), we get from (3.89) that \[\begin{cases}(K_{1}^{2}+K_{2}^{2})\Gamma^{\bar{3}}_{\bar{3}1}\Gamma^{\bar{3}} _{\bar{3}1}-\varepsilon\varepsilon_{1}(9H^{2}+\tau_{3}^{2})K_{1}K_{2}=0,\\ K_{1}K_{2}[(\Gamma^{\bar{3}}_{\bar{3}1})^{2}-(\Gamma^{\bar{3}}_{\bar{3}1})^{2} ]+(K_{2}^{2}-K_{1}^{2})\Gamma^{\bar{3}}_{\bar{3}1}\Gamma^{\bar{3}}_{\bar{3}1} =0.\end{cases}\] Eliminate \(\Gamma^{\bar{3}}_{\bar{3}1}\) and \(\Gamma^{\bar{3}}_{\bar{3}1}\) from (3.88) and the above equations, one derives \[2741856\varepsilon\tau_{3}^{12}+c_{1}\tau_{3}^{10}+c_{2}\tau_{3}^{8}+c_{3} \tau_{3}^{6}+c_{4}\tau_{3}^{4}+c_{5}\tau_{3}^{2}+c_{6}=0, \tag{3.90}\] where \(c_{i}\), \(1\leq i\leq 6\), are polynomials about \(H\). Acting on (3.90) by \(u_{1}\), we have \[L_{1}\Gamma^{\bar{3}}_{\bar{3}1}+L_{2}\Gamma^{\bar{3}}_{\bar{3}1}=0,\] where \(L_{1}\) and \(L_{2}\) are polynomials about \(H\) and \(\tau_{3}\). 
It follows from (3.89) and the above equation that \[K_{1}L_{2}-K_{2}L_{1}=0.\] Eliminating \(\tau_{3}\) from (3.90) and the above equation, we get a polynomial equation of degree \(92\) for \(H\), which implies \(H\) is a constant, a contradiction. _Subcase (2): \(\Gamma^{2}_{21}\neq 0\) at some point in \(U_{p}\)._ We suppose \(\Gamma^{2}_{21}\neq 0\) at \(q\in U_{p}\). If \(c+\varepsilon(\gamma_{3}^{2}+\tau_{3}^{2})=0\), then (3.86) implies \(\Gamma^{\bar{3}}_{\bar{3}1}=\Gamma^{\bar{3}}_{\bar{3}1}=0\) and \(\lambda_{2}=c=\gamma_{3}=\tau_{3}=0\), a contradiction. So, \(c+\varepsilon(\gamma_{3}^{2}+\tau_{3}^{2})\neq 0\). We derive from the equations in (3.86) that \[(\Gamma^{2}_{21})^{2}=-\varepsilon_{1}\frac{\lambda_{2}^{2}(\gamma_{3}^{2}+ \tau_{3}^{2})+c^{2}+2c\varepsilon\lambda_{2}\gamma_{3}}{c+\varepsilon(\gamma_ {3}^{2}+\tau_{3}^{2})}. \tag{3.91}\] Let \[\mu=2\lambda_{2}\gamma_{3}+\gamma_{3}^{2}+\tau_{3}^{2},\ \ \nu=\lambda_{2}(\gamma_{3}^{2}+\tau_{3}^{2}),\] then take derivatives of \(\mu\), \(\nu\) and \(H\) along \(u_{1}\), by use of (3.74), (3.86) and (3.91), we get \[\begin{cases}6\varepsilon u_{1}(H)=K(12\varepsilon Hc^{2}+10H\mu c-3 \varepsilon\nu c+48H^{2}\nu-2\mu\nu),\\ u_{1}(\mu)=K[(24H^{2}+2\mu)c^{2}+(2\mu^{2}+12\varepsilon H^{2}\mu-6H\nu)c-3 \nu^{2}+10\varepsilon H\mu\nu],\\ u_{1}(\nu)=K[(2\varepsilon H\mu+3\nu)c^{2}+(24\varepsilon H^{2}\nu+2 \varepsilon\mu\nu)c+12\varepsilon H\nu^{2}],\end{cases} \tag{3.92}\] where \(K=\frac{\varepsilon_{1}}{\Gamma_{21}^{2}[c+\varepsilon(\gamma_{3}^{2}+\tau_ {3}^{2})]}\). Applying (3.86), (3.91) and (3.92), we can rewrite (3.72) as (3.37). Follow the process for the subcase \(\Gamma_{21}^{2},\Gamma_{\bar{3}1}^{\bar{3}},\Gamma_{\bar{3}1}^{\bar{3}}\neq 0\) of the case \(\Gamma_{2\bar{3}}^{\bar{3}}=0\) in section 3.1, and replace the system (3.36) with (3.92), the equation (3.42) with \[\begin{split}-\varepsilon_{1}(c+(\gamma_{3}^{2}+\tau_{3}^{2}))^{ 2}(\Gamma_{21}^{2})^{2}&=(c+(\gamma_{3}^{2}+\tau_{3}^{2}))(c^{2}+ \lambda_{2}^{2}(\gamma_{3}^{2}+\tau_{3}^{2})+2c\lambda_{2}\gamma_{3})\\ &=\nu^{2}+6cH\nu+c^{2}\mu+c^{3}=0,\end{split}\] we deduce a contradiction. \(\square\) **Remark 3.12** For form (V) of the shape operator, by constructing the terms \(b_{k}\) and \(c_{k}\), \(k=1,2,\cdots\), we find there are many equations similarly with the equations for the form (I) hold, by only changing some terms for the form (I) into the terms about \(b_{k}\) and \(c_{k}\) for the form (V) as follows. \begin{tabular}{|c|c|} \hline **The terms for form (I)** & **The terms for form (V)** \\ \hline \((\Gamma_{31}^{3})^{k}+(\Gamma_{41}^{4})^{k}\) & \(2b_{k}\) \\ \(\lambda_{3}(\Gamma_{31}^{3})^{k}+\lambda_{4}(\Gamma_{41}^{4})^{k}\) & \(2\gamma_{3}b_{k}-2\tau_{3}c_{k}\) \\ \(\lambda_{3}^{2}(\Gamma_{31}^{3})^{k}+\lambda_{4}^{2}(\Gamma_{41}^{4})^{k}\) & \(2(\gamma_{3}^{2}-\tau_{3}^{2})b_{k}-4\gamma_{3}\tau_{3}c_{k-1}\) \\ \hline \end{tabular} We believe that the construction of \(b_{k}\) and \(c_{k}\), \(k=1,2,\cdots\), will provide a new insight for us to study hypersurfaces with imaginary principal curvatures. ### The shape operator has the form (V) **Proposition 3.13** _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field. Suppose that the shape operator \(A\) of \(M_{r}^{4}\) has the form (V), then \(M_{r}^{4}\) has constant mean curvature._ **Proof** Assume that \(H\) is not a constant, then the eigenvector \(\nabla H\) of \(A\) is in the direction \(u_{1_{2}}\) and \(\lambda_{1}=-2\varepsilon H\). 
As \(2\lambda_{1}+2\gamma_{2}=4\varepsilon\), we know \(\gamma_{2}=4\varepsilon H\). Let \(u_{\bar{2}}=u_{\bar{2}_{1}}\), \(u_{\bar{2}}=u_{\bar{2}_{1}}\) and \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},\bar{2},\tilde{2}\), then \[\Gamma^{1_{2}}_{D1_{1}}=\Gamma^{\bar{2}}_{D\bar{2}}=\Gamma^{\tilde{2}}_{D\tilde {2}}=0,\] and \[\Gamma^{1_{1}}_{D1_{1}}=-\Gamma^{1_{2}}_{D1_{2}},\ \Gamma^{\tilde{2}}_{D\bar{2}}= \Gamma^{\bar{2}}_{D\bar{2}},\ \Gamma^{\tilde{2}}_{D1_{a}}=-\varepsilon_{1}\Gamma^{1_{3-a}}_{D\bar{2}},\ \Gamma^{\tilde{2}}_{D1_{a}}= \varepsilon_{1}\Gamma^{1_{3-a}}_{D\bar{2}},\ \ a=1,2.\] It follows that \[u_{1_{1}}(H)\neq 0,\ u_{1_{2}}(H)=u_{\bar{2}}(H)=u_{\bar{2}}(H)=0.\] and \[\Gamma^{1_{1}}_{BC}=\Gamma^{1_{1}}_{CB},\quad B,C\neq 1_{1}.\] From Codazzi equation (2.1), with \((X,Y,Z)=(u_{1_{1}},u_{B},u_{1_{2}}),(u_{\bar{2}},u_{\bar{2}},u_{1_{2}}),\) \((u_{1_{2}},u_{B},u_{B}),(u_{1_{2}},u_{\bar{2}},u_{\bar{2}})\), \(B=\bar{2},\tilde{2}\), we deduce \[\Gamma^{1_{1}}_{\bar{2}\bar{2}}=-\Gamma^{1_{1}}_{\bar{2}\bar{2}},\ \ \Gamma^{1_{1}}_{1_{1}\bar{2}}=\Gamma^{1_{1}}_{1_{1}\tilde{2}}=\Gamma^{\tilde{2} }_{1_{2}\bar{2}}=\Gamma^{\bar{2}}_{1_{2}\tilde{2}}=0,\] and \[\begin{cases}-6\varepsilon H\Gamma^{\bar{2}}_{21_{2}}+\tau_{2}\Gamma^{\bar{2} }_{\bar{2}1_{2}}=0,\\ u_{1_{2}}(\tau_{1})=-6\varepsilon H\Gamma^{\bar{2}}_{\bar{2}1_{2}}-\tau_{2} \Gamma^{\bar{2}}_{\bar{2}1_{2}}.\end{cases} \tag{3.93}\] Using Gauss equation for \(\langle R(u_{1_{2}},u_{\bar{2}})u_{\bar{2}},u_{1_{2}}\rangle\) and \(\langle R(u_{1_{2}},u_{\bar{2}})u_{\bar{2}},u_{1_{2}}\rangle\), combining the above equations, it gives \[\begin{cases}u_{1_{2}}(\Gamma^{\bar{2}}_{\bar{2}1_{2}})=\Gamma^{1_{2}}_{1_{2} 1_{2}}\Gamma^{\bar{2}}_{\bar{2}1_{2}}+(\Gamma^{\bar{2}}_{\bar{2}1_{2}})^{2}-( \Gamma^{\bar{2}}_{\bar{2}1_{2}})^{2},\\ u_{1_{2}}(\Gamma^{\bar{2}}_{\bar{2}1_{2}})=\Gamma^{1_{2}}_{1_{2}1_{2}}\Gamma^{ \bar{2}}_{\bar{2}1_{2}}-2\Gamma^{\bar{2}}_{\bar{2}1_{2}}\Gamma^{\bar{2}}_{\bar {2}1_{2}}.\end{cases} \tag{3.94}\] Applying (3.93) and (3.94), differentiate the first equation in (3.93) along \(u_{1_{2}}\), we obtain \[-6\varepsilon H(\Gamma^{\bar{2}}_{\bar{2}1_{2}})^{2}-\tau_{2}\Gamma^{\bar{2} }_{\bar{2}1_{2}}\Gamma^{\bar{2}}_{\bar{2}1_{2}}=0,\] which together with the first equation of (3.93) implies that \(\Gamma^{\bar{2}}_{\bar{2}1_{2}}=0\) and \(H\Gamma^{\bar{2}}_{\bar{2}1_{2}}=0\). Compute \(\langle R(u_{1_{1}},u_{\bar{2}})u_{1_{2}},u_{\bar{2}}\rangle\) by Gauss equation, we have \[\Gamma^{\bar{2}}_{\bar{2}1_{2}}\Gamma^{\bar{2}}_{1_{1}\bar{2}}=2H\tau_{2}.\] Multiply both sides of the above equation by \(H\), combining \(H\Gamma^{\bar{2}}_{\bar{2}1_{2}}=0\), we conclude that \(H=0\), a contradiction. ## 4 Estimates for the constant mean curvature In this section, as an application of the conclusion that \(M_{r}^{4}\) has constant mean curvature, we will estimate the value of the mean curvature \(H\). When \(M_{r}^{4}\) has one or two distinct principal curvatures, we will compute the value of \(H\). It's a pity that we can not get the value of \(H\) when \(M_{r}^{4}\) has three or four distinct principal curvatures. But for the case that the principal curvatures are all real, we give a value range of the constant mean curvature \(H\). 
### When \(M_{r}^{4}\) has one simple principal curvature **Theorem 4.1**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field, satisfying \(\Delta\vec{H}=\lambda\vec{H}\), \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Suppose that \(M_{r}^{4}\) has one simple principal curvature, then_ * _when_ \(\varepsilon\lambda\leq 0\)_, we have_ \(H=0\)_;_ * _when_ \(\varepsilon\lambda>0\)_, we have_ \(H=0\) _or_ \(H^{2}=\frac{\varepsilon\lambda}{4}\)_._ **Proof** Recall subsection 2.3, the imaginary principal curvatures of \(M_{r}^{4}\) appear in conjugate pairs. So, under the assumption that \(M_{r}^{4}\) has one simple principal curvature \(\mu\), \(\mu\) must be real. Then, we know \(t=m\) (the signs \(t,m\) see subsection 2.3) and \(\lambda_{1}=\cdots=\lambda_{m}=\mu\). Thus, equations (2.4) can be simplified as \(\mbox{tr}A^{2}=4\mu^{2}\) and \(\mu=\varepsilon H\). According to Theorem 3.1, the mean curvature \(H\) of \(M_{r}^{4}\) is a constant. Using this result and \(\mbox{tr}A^{2}=4\mu^{2}\), we get from (2.3) that \(H=0\) or \(4\mu^{2}=\varepsilon\lambda\). When \(\varepsilon\lambda\leq 0\), we assume \(H\neq 0\). Then \(4\mu^{2}=\varepsilon\lambda\leq 0\), a contradiction. When \(\varepsilon\lambda>0\), suppose that \(H\neq 0\), we also have \(4\mu^{2}=\varepsilon\lambda\), which together with \(\mu=\varepsilon H\) implies that \[H^{2}=\frac{\varepsilon\lambda}{4}.\] \(\Box\) **Remark 4.2**: The result of Theorem 4.1 for \(c=0\) has been gotten in [16]. ### When \(M_{r}^{4}\) has two distinct principal curvatures **Lemma 4.3**: _Let \(M_{r}^{4}\) be a non-minimal hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field, \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Suppose that \(M_{r}^{4}\) has two distinct principal curvatures \(\mu\) and \(\nu\), then_ \[c+\varepsilon\mu\nu=0.\] **Proof** Since the number of imaginary principal curvatures is even, these two distinct principal curvatures \(\mu\) and \(\nu\) are all real or all imaginary. _Case 1: \(\mu\) and \(\nu\) are all real._ Let \(l\) is the multiply of \(\mu\), then considering the mean curvature \(H\) is a constant and \(H\neq 0\), we get from (2.3) and (2.4) that \[\begin{cases}l\mu+(4-l)\nu=4\varepsilon H,\\ l\mu^{2}+(4-l)\nu^{2}=\varepsilon\lambda,\end{cases} \tag{4.1}\] which implies \(\mu\) and \(\nu\) are all constants. In the following, by use of that the principal curvatures are all constant, we discuss separately the possible forms (I), (I\(\!\)I), (I\(\!\)I) and (I\(\!\)I) of \(A\) for this case, and derive the result \(c+\varepsilon\mu\nu=0\). _Subcase (1): the shape operator has the form (I)._ Without loss of generality, we suppose \(\lambda_{1}=\mu\) and \(\lambda_{4}=\nu\). Let \(u_{i}=u_{i_{1}}\) and \(\nabla_{u_{i}}u_{j}=\Gamma_{ij}^{k}u_{k},i,j=1,2,3,4\), then recall subsection 3.1, we know (3.1) and (3.5) also hold. Since the principal curvatures \(\lambda_{i}\), with \(i=1,2,3,4\), are constants, the equations (3.1) and (3.5) imply that \(\Gamma_{ij}^{i}=0\) if \(\lambda_{i}\neq\lambda_{j}\), \(\Gamma_{i4}^{1}=\Gamma_{i1}^{4}=\Gamma_{1i}^{4}=0\) if \(\lambda_{i}=\mu\), and \(\Gamma_{41}^{i}=\Gamma_{i1}^{4}=0\) if \(\lambda_{i}=\nu\). 
Using Gauss equation for \(\langle R(u_{1},u_{4})u_{1},u_{4}\rangle\), we have \[c+\varepsilon\mu\nu=0.\] _Subcase (2): the shape operator has the form (I\(\!\)I)._ We can suppose \(\lambda_{1}=\mu\) and \(\lambda_{3}=\nu\). Denote \(u_{2}=u_{2_{1}}\), \(u_{3}=u_{3_{1}}\), and let \(\nabla_{u_{B}}u_{C}=\Gamma_{BC}^{D}u_{D}\), \(B,C=1_{1},1_{2},2,3\), then (3.43) and (3.44) hold. If \(\lambda_{2}=\nu\), considering that \(\mu\) and \(\nu\) are constants, we conclude from the Codazzi equation (2.1) that \[\Gamma_{1_{1}3}^{1_{1}}=\Gamma_{31_{2}}^{3}=\Gamma_{31_{1}}^{3}=\Gamma_{1_{2 }3}^{1_{1}}=\Gamma_{1_{1}2}^{1_{1}}=\Gamma_{21_{2}}^{3}=\Gamma_{31_{2}}^{2}=0.\] Applying Gauss equation for \(\langle R(u_{1_{1}},u_{3})u_{1_{2}},u_{3}\rangle\), combining the above equations, we have \(c+\varepsilon\mu\nu=0\). If \(\lambda_{2}=\mu\), then we deduce from the Codazzi equation that \[\Gamma^{1_{1}}_{1_{1}3}=\Gamma^{1_{2}}_{1_{2}3}=\Gamma^{1_{1}}_{1_{2}3}=\Gamma^{ 3}_{31_{2}}=\Gamma^{3}_{31_{1}}=\Gamma^{3}_{32}=\Gamma^{2}_{23}=\Gamma^{1_{1}}_{ 23}=\Gamma^{2}_{1_{2}3}=0,\] and \[\Gamma^{3}_{21_{1}}=\Gamma^{3}_{1_{1}2}.\] Using Gauss equation for \(\langle R(u_{1_{1}},u_{3})u_{1_{2}},u_{3}\rangle\) and \(\langle R(u_{2},u_{3})u_{2},u_{3}\rangle\), combining the above two equations, we have \[\begin{cases}\Gamma^{2}_{31_{2}}\Gamma^{3}_{1_{1}2}=-c\varepsilon_{1}- \varepsilon\varepsilon_{1}\mu\nu,\\ -2\Gamma^{2}_{31_{2}}\Gamma^{3}_{1_{1}2}=-c\varepsilon_{1}-\varepsilon \varepsilon_{1}\mu\nu,\end{cases}\] which implies \(\Gamma^{2}_{31_{2}}\Gamma^{3}_{1_{1}2}=0\), and \(c+\varepsilon\mu\nu=0\). _Subcase (3): the shape operator has the form (III)._ Suppose \(\lambda_{1}=\mu\) and \(\lambda_{2}=\nu\). Let \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},2_{1},2_{2}\). It follows from Codazzi equation that \[\Gamma^{2_{1}}_{2_{1}1_{2}}=\Gamma^{1_{1}}_{1_{1}2_{2}}=\Gamma^{1_{1}}_{1_{2} 2_{2}}=0.\] Using Gauss equation for \(\langle R(u_{1_{1}},u_{2_{1}})u_{1_{2}},u_{2_{2}}\rangle\), combining the above equation, we have \(c+\varepsilon\mu\nu=0\). _Subcase (4): the shape operator has the form (IV)._ We can suppose \(\lambda_{1}=\mu\) and \(\lambda_{2}=\nu\). Let \(u_{2}=u_{2_{1}}\) and \(\nabla_{u_{B}}u_{C}=\Gamma^{D}_{BC}u_{D}\), \(B,C=1_{1},1_{2},1_{3},2\), then (3.66) and (3.67) hold, and we have from Codazzi equation that \[\Gamma^{1_{1}}_{1_{1}2}=\Gamma^{1_{3}}_{1_{3}2},\ \ \Gamma^{1_{1}}_{1_{1}2}+ \Gamma^{1_{2}}_{1_{2}2}+\Gamma^{1_{3}}_{1_{3}2}=0, \tag{4.2}\] \[\Gamma^{2}_{21_{1}}=\Gamma^{2}_{2_{1}2}=\Gamma^{2}_{21_{3}}=\Gamma^{1_{1}}_{ 1_{2}2}=\Gamma^{1_{1}}_{1_{3}2}=\Gamma^{1_{2}}_{1_{3}2}=0, \tag{4.3}\] \[(\nu-\mu)\Gamma^{1_{3}}_{1_{3}2}=-\Gamma^{1_{2}}_{21_{3}}, \tag{4.4}\] and \[(\mu-\nu)\Gamma^{2}_{1_{1}1_{2}}+\Gamma^{2}_{1_{1}1_{3}}=(\mu-\nu)\Gamma^{2}_{ 1_{2}1_{1}}+\Gamma^{2}_{1_{2}1_{2}}. 
\tag{4.5}\] From Gauss equation for \(\langle R(u_{1_{1}},u_{2})u_{1_{3}},u_{2}\rangle\) and \(\langle R(u_{1_{3}},u_{2})u_{1_{1}},u_{2}\rangle\), using (4.3), we have \[\begin{cases}-u_{2}(\Gamma^{2}_{1_{1}3})+\Gamma^{1_{2}}_{21_{3}}\Gamma^{2}_{1 _{1}1_{2}}-\Gamma^{1_{1}}_{1_{2}1}\Gamma^{2}_{1_{1}1_{3}}=-c\varepsilon_{1}- \varepsilon\varepsilon_{1}\mu\nu,\\ -u_{2}(\Gamma^{2}_{1_{3}1_{1}})+\Gamma^{1_{2}}_{21_{3}}\Gamma^{2}_{1_{2}1_{1} }-\Gamma^{1_{3}}_{1_{3}2}\Gamma^{2}_{1_{3}1_{1}}=-c\varepsilon_{1}- \varepsilon\varepsilon_{1}\mu\nu,\end{cases} \tag{4.6}\] which together with (4.2) implies that \[\Gamma^{1_{2}}_{21_{3}}=0\ \ \text{or}\ \ \Gamma^{2}_{1_{1}1_{2}}=\Gamma^{2}_{1_{2}1_{ 1}}.\] If \(\Gamma^{1_{2}}_{21_{3}}=0\), then (4.4) tells us that \(\Gamma^{1_{3}}_{1_{3}2}=0\). If \(\Gamma^{2}_{1_{1}1_{2}}=\Gamma^{2}_{1_{2}1_{1}}\), then from (4.2), (4.4) and (4.5), we also have \[\Gamma^{1_{3}}_{1_{3}2}=\Gamma^{1_{2}}_{21_{3}}=0.\] So, we derive from (4.6) that \(c+\varepsilon\mu\nu=0\). _Case 2: \(\mu\) and \(\nu\) are all imaginary._ We can suppose \(\mu=\gamma+\tau\sqrt{-1}\) and \(\nu=\gamma-\tau\sqrt{-1}\). Since \(H\) is a nonzero constant, the equations (2.3) and (2.4) gives that \[\begin{cases}\gamma=\varepsilon H,\\ 4(\gamma^{2}-\tau^{2})=\varepsilon\lambda.\end{cases}\] So, \(\gamma\) and \(\tau\) are all constants. In the following, we treat the possible forms (VI) and (VIII) of \(A\), and derive the result that \(c+\varepsilon\mu\nu=0\). _Subcase (1): the shape operator has the form (VII)._ It follows that \(\nu_{1}=\nu\) and \(\tau_{1}=\tau\). Let \[\nabla_{\bar{u}_{1_{a}}}\bar{u}_{1_{b}}=\Gamma^{\bar{1}_{d}}_{ \bar{1}_{a}\bar{1}_{b}}\bar{u}_{1_{d}}+\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{1 }_{b}}\bar{v}_{1_{d}},\ \nabla_{\bar{v}_{1_{a}}}\bar{v}_{1_{b}}=\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{ 1}_{b}}\bar{u}_{1_{d}}+\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{1}_{b}}\bar{v}_{1 _{d}},\] \[\nabla_{\bar{u}_{1_{a}}}\bar{v}_{1_{b}}=\Gamma^{\bar{1}_{d}}_{ \bar{1}_{a}\bar{1}_{b}}\bar{u}_{1_{d}}+\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{1 }_{b}}\bar{v}_{1_{d}},\ \nabla_{\bar{v}_{1_{a}}}\bar{u}_{1_{b}}=\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{ 1}_{b}}\bar{u}_{1_{d}}+\Gamma^{\bar{1}_{d}}_{\bar{1}_{a}\bar{1}_{b}}\bar{v}_{1 _{d}},\] with \(a,b=1,2\), we get from compatibility condition that \[\Gamma^{\bar{1}_{b}}_{B\bar{1}_{a}}=-\Gamma^{\bar{1}_{3-a}}_{B\bar{1}_{3-b}}, \ \Gamma^{\bar{1}_{b}}_{B\bar{1}_{a}}=-\Gamma^{\bar{1}_{3-a}}_{B\bar{1}_{3-b}},\ \Gamma^{\bar{1}_{b}}_{B\bar{1}_{a}}=\Gamma^{\bar{1}_{3-a}}_{B\bar{1}_{3-b}}.\] Since \(\gamma\) and \(\tau\) are constants, we deduce from Codazzi equation that \[\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{ 2}\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{2}}=\Gamma^{\bar{1}_{ 1}}_{\bar{1}_{2}\bar{1}_{2}}=0, \tag{4.7}\] and \[\begin{cases}-\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{1}}+\Gamma^{\bar{1}_{ 1}}_{\bar{1}_{2}\bar{1}_{1}}=0,\\ \Gamma^{\bar{1}_{2}}_{\bar{1}_{2}\bar{1}_{2}}+\Gamma^{\bar{1}_{2}}_{\bar{1}_{2 }\bar{1}_{2}}=-\Gamma^{\bar{1}_{2}}_{\bar{1}_{2}\bar{1}_{2}}+\Gamma^{\bar{1}_ {2}}_{\bar{1}_{2}\bar{1}_{2}},\\ \Gamma^{\bar{1}_{1}}_{\bar{1}_{1}\bar{1}_{2}}+\Gamma^{\bar{1}_{1}}_{\bar{1}_{1 }\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{1}}+\Gamma^{\bar{1}_ {1}}_{\bar{1}_{2}\bar{1}_{1}},\\ -\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{1}}+\Gamma^{\bar{1}_{1}}_{\bar{1}_ {2}\bar{1}_{1}}=0,\\ \Gamma^{\bar{1}_{2}}_{\bar{1}_{2}\bar{1}_{2}}-\Gamma^{\bar{1}_{2}}_{\bar{1}_{2 
}\bar{1}_{2}}=-\Gamma^{\bar{1}_{2}}_{\bar{1}_{2}\bar{1}_{2}}-\Gamma^{\bar{1}_ {2}}_{\bar{1}_{2}\bar{1}_{2}},\\ \Gamma^{\bar{1}_{1}}_{\bar{1}_{1}\bar{1}_{2}}+\Gamma^{\bar{1}_{1}}_{\bar{1}_{1 }\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{2}\bar{1}_{1}}+\Gamma^{\bar{1}_ {1}}_{\bar{1}_{2}\bar{1}_{1}},\end{cases}\] which reduces to \[\Gamma^{\bar{1}_{1}}_{\bar{1}_{1}\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{1} \bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_{\bar{1}_{1}\bar{1}_{2}}=\Gamma^{\bar{1}_{1}}_ {\bar{1}_{1}\bar{1}_{2}}=0. \tag{4.8}\] Using Gauss equation for \(\langle R(\bar{u}_{1_{1}},\bar{v}_{1_{1}})\bar{u}_{1_{2}},\bar{v}_{1_{2}}\rangle\), combining (4.7) and (4.8), we have \[c+\varepsilon(\gamma^{2}+\tau^{2})=0,\] i.e. \(c+\varepsilon\mu\nu=0\). _Subcase (2): the shape operator has the form (VIII)._ In this case, we have \(\gamma_{1}=\gamma_{2}=\gamma\), \(\tau_{1}=\tau_{2}=\tau\). Denote \(\bar{u}_{1}=\bar{u}_{1_{1}}\), \(\bar{u}_{2}=\bar{u}_{2_{1}}\) and \(\bar{v}_{1}=\bar{v}_{1_{1}}\) and \(\bar{v}_{2}=\bar{v}_{2_{1}}\). Let \[\nabla_{\bar{u}_{a}}\bar{u}_{b}=\Gamma^{\bar{d}}_{\bar{a}\bar{b}} \bar{u}_{d}+\Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{v}_{d},\ \nabla_{\bar{v}_{a}}\bar{v}_{b}=\Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{u}_{d}+ \Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{v}_{d},\] \[\nabla_{\bar{u}_{a}}\bar{v}_{b}=\Gamma^{\bar{d}}_{\bar{a}\bar{b}} \bar{u}_{d}+\Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{v}_{d},\ \nabla_{\bar{v}_{a}}\bar{u}_{b}=\Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{u}_{d}+ \Gamma^{\bar{d}}_{\bar{a}\bar{b}}\bar{v}_{d},\] with \(a,b=1,2\), we obtain from compatibility condition that \[\Gamma^{\bar{b}}_{B\bar{a}}=-\Gamma^{\bar{a}}_{B\bar{b}},\ \Gamma^{\bar{b}}_{B \bar{a}}=-\Gamma^{\bar{a}}_{B\bar{b}},\ \Gamma^{\bar{b}}_{B\bar{a}}=\Gamma^{\bar{a}}_{B\bar{b}},\ \ a,b=1,2.\] The Codazzi equation gives that \[\Gamma^{\bar{1}}_{\bar{1}\bar{1}}=\Gamma^{\bar{1}}_{\bar{1}\bar{1}}=\Gamma^{ \bar{1}}_{\bar{2}\bar{1}}=\Gamma^{\bar{1}}_{\bar{2}\bar{1}}=0,\] and \[\Gamma^{\bar{1}}_{\bar{1}\bar{2}}+\Gamma^{\bar{1}}_{\bar{1}\bar{2}}=\Gamma^{ \bar{1}}_{\bar{1}\bar{2}}-\Gamma^{\bar{1}}_{\bar{1}\bar{2}}=\Gamma^{\bar{1}}_ {\bar{1}\bar{2}}+\Gamma^{\bar{1}}_{\bar{1}\bar{2}}=\Gamma^{\bar{1}}_{\bar{1} \bar{2}}-\Gamma^{\bar{1}}_{\bar{1}\bar{2}}=0.\] Applying Gauss equation for \(\langle R(\bar{u}_{1},\bar{v}_{1})\bar{u}_{1},\bar{v}_{1}\rangle\), combining the above two equations, we conclude \(c+\varepsilon(\gamma^{2}+\tau^{2})=0\), i.e. \(c+\varepsilon\mu\nu=0\). \(\square\) **Remark 4.4** The result of Lemma 4.3 that \(c+\varepsilon\mu\nu=0\) coincides with the basic identity of Cartan in [14, Theorem 2.9] for the isoparametric hypersurface \(M^{n}_{r}\) of \(N^{n+1}_{s}(c)(c=0,-1,1)\) with two distinct principal curvatures \(\mu\) and \(\nu\), and algebraic and geometric multiplicities of \(\mu\) or \(\nu\) coincide. By use of Lemma 4.3, we can give the value of the mean curvature \(H\), according to the principal curvatures are real or imaginary. The following Lemma 4.5 will be used for the case that the principal curvatures are real. **Lemma 4.5** _Let \(c\), \(\lambda\), \(l\), \(\mu\) and \(\nu\) are real constants, and the equations_ \[\begin{cases}\mu\nu=-c\varepsilon,\\ l\mu^{2}+(4-l)\nu^{2}=\varepsilon\lambda\end{cases}\] hold, where \(\varepsilon=\pm 1\). 
Then, \(\varepsilon\lambda=-4c\varepsilon\geq 0\) is the necessary condition such that \(\mu=\nu\)._ **Proof** If \(\mu=\nu\), then \[-c\varepsilon=\mu^{2}\geq 0,\] and \[\varepsilon\lambda=-4c\varepsilon.\] \(\Box\) **Theorem 4.6**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field, satisfying \(\Delta\vec{H}=\lambda\vec{H}\), \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Suppose that \(M_{r}^{4}\) has two distinct real principal curvatures \(\mu\) and \(\nu\), and \(l=1,\ 2\) or \(3\) is the multiply of \(\mu\), then_ * _When_ \(c=0\)_, we have_ * _if_ \(\varepsilon\lambda\leq 0\)_, then_ \(H=0\)_;_ * _if_ \(\varepsilon\lambda>0\)_, then_ \(H=0\)_, or_ \(H^{2}=\frac{l\varepsilon\lambda}{16}\)_, or_ \(H^{2}=\frac{(4-l)\varepsilon\lambda}{16}\)_;_ * _When_ \(c\neq 0\)_, we have_ * _if_ \(\varepsilon\lambda<2\sqrt{l(4-l)}|c|\)_, then_ \(H=0\)_;_ * _if_ \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\)_, then_ \(H=0\) _or_ \[H^{2}=\frac{1}{16}[2\varepsilon\lambda\pm(l-2)\sqrt{\lambda^{2}-4l(4-l)c^{2}} -2l(4-l)c\varepsilon].\] _Specially, if_ \(\varepsilon\lambda=-4c\varepsilon>0\)_, then_ \(H=0\) _or_ \(H^{2}=-\frac{3}{4}c\varepsilon\)_._ **Proof** (i) When \(c=0\), suppose \(H\neq 0\), then (4.1) holds, and \(\varepsilon\lambda>0\). It follows from Lemma 4.3 that \(\mu\nu=0\), which together with the second equation in (4.1) gives that \[\mu=0,\ \nu^{2}=\frac{\varepsilon\lambda}{4-l},\ \ \mbox{or}\ \mu^{2}=\frac{ \varepsilon\lambda}{l},\ \nu=0.\] Since the above equation, we obtain from the first equation in (4.1) that \[H^{2}=\frac{l\varepsilon\lambda}{16},\ \ \mbox{or}\ H^{2}=\frac{(4-l)\varepsilon \lambda}{16}.\] (ii) For the case \(c\neq 0\), suppose \(H\neq 0\), then (4.1) holds, and \(\mu\nu=-c\varepsilon\), obtained from Lemma 4.3. And then, we have \[\begin{cases}\varepsilon\lambda=\sqrt{l}\mu^{2}+\sqrt{4-l}\nu^{2}>0,\\ \varepsilon\lambda+2\sqrt{l(4-l)}c\varepsilon=(\sqrt{l}\mu-\sqrt{4-l}\nu)^{2 }\geq 0,\\ \varepsilon\lambda-2\sqrt{l(4-l)}c\varepsilon=(\sqrt{l}\mu+\sqrt{4-l}\nu)^{2 }\geq 0,\end{cases}\] which implies \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\). So, if \(\varepsilon\lambda<2\sqrt{l(4-l)}|c|\), then \(H=0\). When \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\), if \(H\neq 0\), then we calculate \(\mu^{2}\) and \(\nu^{2}\) from \(\mu\nu=-4c\varepsilon\) and the second equation of (4.1) that \[\mu^{2}=\frac{\varepsilon\lambda\pm\sqrt{\lambda^{2}-4l(4-l)c^{2}}}{2l},\ \ \nu^{2}=\frac{ \varepsilon\lambda\mp\sqrt{\lambda^{2}-4l(4-l)c^{2}}}{2(4-l)}.\] Together with the first equation of (4.1) and the above equation, we have \[H^{2}=\frac{2\varepsilon\lambda\pm(l-2)\sqrt{\lambda^{2}-4l(4-l)c^{2}}-2l(4-l )c\varepsilon}{16}.\] However, some values of \(\mu^{2}\), \(\nu^{2}\) and \(H^{2}\) in the above maybe contradict with the condition that \(\mu\neq\nu\). From Lemma 4.5, considering \(\varepsilon\lambda\neq 0\), we know \(\varepsilon\lambda=-4c\varepsilon>0\) is the necessary condition of \(\mu=\nu\) (a contradiction). It's necessary for us to discuss the values of \(\mu^{2}\), \(\nu^{2}\) and \(H^{2}\) for the case \(\varepsilon\lambda=-4c\varepsilon>0\), which is in the range \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\). 
By calculation, when \(\varepsilon\lambda=-4c\varepsilon>0\), we find (1) \(\mu^{2}=\nu^{2}=H^{2}=-c\varepsilon\) for \(l=2\); (2) \(\mu^{2}=-3c\varepsilon\), \(\nu^{2}=-\frac{c\varepsilon}{3}\), \(H^{2}=-\frac{3}{4}c\varepsilon\), or \(\mu^{2}=\nu^{2}=H^{2}=-c\varepsilon\), for \(l=1\); (3) \(\nu^{2}=-3c\varepsilon\), \(\mu^{2}=-\frac{c\varepsilon}{3}\), \(H^{2}=-\frac{3}{4}c\varepsilon\), or \(\mu^{2}=\nu^{2}=H^{2}=-c\varepsilon\), for \(l=3\). As \(\mu\nu>0\), \(\mu^{2}=\nu^{2}\) is equivalent to \(\mu=\nu\) (a contradiction). Thus, we conclude that \(H^{2}=-\frac{3}{4}c\varepsilon\) for the case \(\varepsilon\lambda=-4c\varepsilon>0\). In conclusion, when \(c\neq 0\) and \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\), we have \(H=0\) or \[H^{2}=\frac{2\varepsilon\lambda\pm(l-2)\sqrt{\lambda^{2}-4l(4-l)c^{2}}-2l(4-l )c\varepsilon}{16}. \tag{4.9}\] Specially, if \(\varepsilon\lambda=-4c\varepsilon>0\), then \(H=0\) or \(H^{2}=-\frac{3}{4}c\varepsilon\). By the way, we emphasize that the right side values of (4.9) are all greater than or equal to zero. When \(l=2\), it is obviously. When \(l=1,3\), since \(\varepsilon\lambda\geq 2\sqrt{3}|c|\), it follows that \[\varepsilon\lambda-3|c|\geq(2\sqrt{3}-3)|c|\geq 0,\] which together with \[\frac{(2\varepsilon\lambda-6c\varepsilon)^{2}}{\lambda^{2}-12c^{2}}=1+\frac{3( \lambda-4c)^{2}}{\lambda^{2}-12c^{2}}\geq 1\] gives that \(2\varepsilon\lambda-6c\varepsilon\geq\sqrt{\lambda^{2}-12c^{2}}\). So, when \(l=1,3\), the right side values \[\frac{2\varepsilon\lambda\pm\sqrt{\lambda^{2}-12c^{2}}-6c\varepsilon}{16}\geq 0.\] \(\square\) **Remark 4.7** When \(c=0\) and the algebraic and geometric multiplicities of \(\mu\) or \(\nu\) coincide, we has gotten in [16] the values of \(H^{2}\), which is agree with the result of Theorem 4.6 for \(c=0\). **Theorem 4.8**_Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) with proper mean curvature vector field, satisfying \(\Delta\vec{H}=\lambda\vec{H}\), \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Suppose that \(M_{r}^{4}\) has two imaginary principal curvatures, we have_ * _If_ \(c\varepsilon\geq 0\)_, then_ \(H=0\)_;_ * _If_ \(c\varepsilon<0\) _and_ \(|\varepsilon\lambda|\geq-4c\varepsilon\)_, then_ \(H=0\)_;_ * _If_ \(c\varepsilon<0\) _and_ \(|\varepsilon\lambda|<-4c\varepsilon\)_, then_ \(H=0\) _or_ \(H^{2}=\frac{\varepsilon\lambda-4c\varepsilon}{8}\)_._ **Proof** Suppose that \(M_{r}^{4}\) has two imaginary principal curvatures \(\gamma+\tau i\) and \(\gamma-\tau i\), we get from Lemma 4.3 and its proof that if \(H\neq 0\), then \(\gamma=\varepsilon H\) and \[\begin{cases}4(\gamma^{2}-\tau^{2})=\varepsilon\lambda,\\ \gamma^{2}+\tau^{2}=-c\varepsilon.\end{cases} \tag{4.10}\] When \(c\varepsilon\geq 0\), we assume that \(H\neq 0\), then \(\gamma^{2}+\tau^{2}=-c\varepsilon\leq 0\), a contradiction. When \(c\varepsilon<0\) and \(|\varepsilon\lambda|\geq-4c\varepsilon\), we suppose \(H\neq 0\), then the equations (4.10) implies that \[\begin{cases}-4c\varepsilon-\varepsilon\lambda=8\tau^{2}>0,\\ \varepsilon\lambda-4c\varepsilon=8\nu^{2}>0,\end{cases}\] which contradicts with \(|\varepsilon\lambda|\geq-4c\varepsilon\). 
For the case \(c\varepsilon<0\) and \(|\varepsilon\lambda|<-4c\varepsilon\), if \(H\neq 0\), we also have \(8\gamma^{2}=\varepsilon\lambda-4c\varepsilon\), which combining \(\gamma=\varepsilon H\) tells us that \[H^{2}=\frac{\varepsilon\lambda-4c\varepsilon}{8}.\] \(\Box\) **Corollary 4.9**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(\mathbb{E}_{s}^{5}\) with proper mean curvature vector field. Suppose that \(M_{r}^{4}\) has two principal curvatures, which are imaginary, then \(M_{r}^{4}\) is minimal._ **Remark 4.10**: In [16], we also estimated the value of the mean curvature \(H\) for the hypersurface \(M_{r}^{4}\) in \(\mathbb{E}_{s}^{5}\) satisfying \(\Delta\vec{H}=\lambda\vec{H}\) and with two distinct imaginary principal curvatures, but just gave a value range that \(H=0\) or \(H^{2}>\frac{\varepsilon\lambda}{4}\). Clearly, the result \(H=0\) of Corollary 4.9 improves the result in [16] to the greatest extent. In non-flat space form \(N_{s}^{5}(c)\) (\(c\neq 0\)), under the assumption that the hypersurface \(M_{r}^{4}\) has two imaginary principal curvatures, we have from Theorem 4.8 that the mean curvature \(H\) is not necessarily zero. However, if the hypersurface \(M_{r}^{4}\) is biharmonic, i.e. \(\lambda=4c\), then Theorem 4.8 yields that the mean curvature is zero, which give a partial affirmative answer to Chen's conjecture. **Corollary 4.11**: _Let \(M_{r}^{4}\) be a biharmonic hypersurface of \(N_{s}^{5}(c)\). Suppose that \(M_{r}^{4}\) has two principal curvatures, which are imaginary, then \(M_{r}^{4}\) is minimal._ ### When \(M_{r}^{4}\) has not imaginary principal curvatures **Theorem 4.12**: _Let \(M_{r}^{4}\) be a nondegenerate hypersurface of \(N_{s}^{5}(c)\) satisfying \(\Delta\vec{H}=\lambda\vec{H}\) (\(\lambda\) a constant), and \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). Suppose that \(M_{r}^{4}\) has not imaginary principal curvatures, we have_ * _If_ \(\varepsilon\lambda\leq 0\)_, then_ \(H=0\)_;_ * _If_ \(\varepsilon\lambda>0\)_, then_ \(H^{2}\leq\frac{\varepsilon\lambda}{4}\)_, the equal sign holds if and only if the principal curvatures are all equal and nonzero._ **Proof** Since \(M_{r}^{4}\) has not imaginary principal curvatures, the equations (2.4) reduce to \[{\rm tr}A^{2}=\sum_{i=1}^{m}\alpha_{i}\lambda_{i}^{2},\ \ 4\varepsilon H=\sum_{i=1}^{m} \alpha_{i}\lambda_{i}.\] Note that \(\sum_{i=1}^{m}\alpha_{i}=4\). Because of that the mean curvature \(H\) is a constant, combining the above equations, we have from (2.3) that \(H=0\) or \(\sum_{i=1}^{m}\alpha_{i}\lambda_{i}^{2}=\varepsilon\lambda\). When \(\varepsilon\lambda\leq 0\), we assume that \(H\neq 0\), then \[\sum_{i=1}^{m}\alpha_{i}\lambda_{i}^{2}=\varepsilon\lambda\leq 0,\] which implies \(\lambda_{i}=0\), \(i=1,\cdots,m\). And then, \(4\varepsilon H=\sum_{i=1}^{m}\alpha_{i}\lambda_{i}=0\), a contradiction. For the case \(\varepsilon\lambda>0\), if \(H\neq 0\), using Cauchy inequality, we know \[(\sum_{i=1}^{m}\alpha_{i}\lambda_{i})^{2}\leq 4\sum_{i=1}^{m}\alpha_{i} \lambda_{i}^{2}, \tag{4.11}\] where the equal sign holds if and only if \(\lambda_{1}=\lambda_{2}=\cdots=\lambda_{m}\neq 0\). The equation (4.11) yields that \[0<H^{2}\leq\frac{\varepsilon\lambda}{4}.\] The equal sign of the above equation is true if and only if the principal curvatures are all equal and nonzero. 
\(\Box\) **Corollary 4.13** _Let \(M_{r}^{4}\) be a biharmonic hypersurface of \(N_{s}^{5}(c)\), \(\vec{\xi}\) be a unit normal vector field to \(M_{r}^{4}\), with \(\varepsilon=\langle\vec{\xi},\vec{\xi}\rangle=\pm 1\). If \(\varepsilon c\leq 0\) and \(M_{r}^{4}\) has not imaginary principal curvatures, then \(M_{r}^{4}\) is minimal._ **Remark 4.14** We claim that the value \(\frac{1}{16}[2\varepsilon\lambda\pm(l-2)\sqrt{\lambda^{2}-4l(4-l)c^{2}}-2l(4- l)c\varepsilon]\) of \(H^{2}\) for the case \(c\neq 0\), \(\varepsilon\lambda\geq 2\sqrt{l(4-l)}|c|\) and \(\varepsilon\lambda\neq-4c\varepsilon\) in Theorem 4.6 is in the range \(H^{2}<\frac{\varepsilon\lambda}{4}\). In fact, in this case, we can easily check that \(\varepsilon\lambda+l(4-l)c\varepsilon\geq 0\) and \[(2\varepsilon\lambda+2l(4-l)c\varepsilon)^{2}-(l-2)^{2}(\lambda^{2}-4l(4-l)c^ {2})=l(4-l)(\lambda+4c)^{2}>0.\] Thus, it follows that \[2\varepsilon\lambda+2l(4-l)c\varepsilon>\pm(l-2)\sqrt{\lambda^{2}-4l(4-l)c^{ 2}},\] which deduce our claim. **Remark 4.15** When \(M_{r}^{4}\) has imaginary principal curvature, we have from (2.4) that \[\mathrm{tr}A^{2}=\sum_{i=1}^{t}\alpha_{i}\lambda_{i}^{2}+2\sum_{j=t+1}^{m}\beta_ {j}(\nu_{j}^{2}-\tau_{j}^{2}),\ \ t<m.\] Since there exists negative terms in the right side of the above equation, we can not get the range value of \(H\) by using Cauchy inequality, for the case that the number of distinct principal curvatures is larger than 2. AcknowledgementsThis work was supported by the National Natural Science Foundation of China (Nos. 12161078, 11761061), the Foundation for Distinguished Young Scholars of Gansu Province (No. 20JR5RA515), and the Project of Northwest Normal University (No. NWNU-LKQN2019-23).
2301.00324
Planar equilibrium measure problem in the quadratic fields with a point charge
We consider a two-dimensional equilibrium measure problem under the presence of quadratic potentials with a point charge and derive the explicit shape of the associated droplets. This particularly shows that the topology of the droplets reveals a phase transition: (i) in the post-critical case, the droplets are doubly connected domain; (ii) in the critical case, they contain two merging type singular boundary points; (iii) in the pre-critical case, they consist of two disconnected components. From the random matrix theory point of view, our results provide the limiting spectral distribution of the complex and symplectic elliptic Ginibre ensembles conditioned to have zero eigenvalues, which can also be interpreted as a non-Hermitian extension of the Marchenko-Pastur law.
Sung-Soo Byun
2023-01-01T01:54:12Z
http://arxiv.org/abs/2301.00324v1
# Planar equilibrium measure problem ###### Abstract. We consider a two-dimensional equilibrium measure problem under the presence of quadratic potentials with a point charge and derive the explicit shape of the associated droplets. This particularly shows that the topology of the droplets reveals a phase transition: (i) in the post-critical case, the droplets are doubly connected domain; (ii) in the critical case, they contain two merging type singular boundary points; (iii) in the pre-critical case, they consist of two disconnected components. From the random matrix theory point of view, our results provide the limiting spectral distribution of the complex and symplectic elliptic Ginibre ensembles conditioned to have zero eigenvalues, which can also be interpreted as a non-Hermitian extension of the Marchenko-Pastur law. Key words and phrases:Planar equilibrium measure problem, two-dimensional Coulomb gases, elliptic Ginibre ensemble, conditional point process, conformal mapping method, a non-Hermitian extension of the Marchenko-Pastur law. Sung-Soo Byun was partially supported by Samsung Science and Technology Foundation (SSTF-BA1401-51) and by the National Research Foundation of Korea (NRF-2019R1A5A1028324) and by a KIAS Individual Grant (SP083201) via the Center for Mathematical Challenges at Korea Institute for Advanced Study. As expected from the structure of the Hamiltonians (1.1) and (1.2), the macroscopic behaviours of the system can be effectively described using the logarithmic potential theory [59]. For this purpose, let us briefly recap some basic notions in the logarithmic potential theory. Given a compactly supported probability measure \(\mu\) on \(\mathbb{C}\), the weighted logarithmic energy \(I_{W}[\mu]\) associated with the potential \(W\) is given by \[I_{W}[\mu]:=\int_{\mathbb{C}^{2}}\log\frac{1}{|z-w|}\,d\mu(z)\,d\mu(w)+\int_{ \mathbb{C}}W\,d\mu. \tag{1.4}\] For a general potential \(W\) satisfying suitable conditions, there exists a unique probability measure \(\mu_{W}\) which minimises \(I_{W}[\mu]\). Such a minimiser \(\mu_{W}\) is called the **equilibrium measure** associated with \(W\) and its support \(S_{W}:=\operatorname{supp}(\mu_{W})\) is called the **droplet**. Furthermore, if \(W\) is \(C^{2}\)-smooth in a neighbourhood of \(S_{W}\), it follows from Frostman's theorem that \(\mu_{W}\) is absolutely continuous with respect to \(dA\) and takes the form \[d\mu_{W}(z)=\Delta W(z)\cdot\mathbb{1}_{\{z\in S_{W}\}}\,dA(z), \tag{1.5}\] where \(\Delta:=\partial\bar{\partial}\) is the quarter of the usual Laplacian. In relation with the point processes (1.3), it is well known [31, 15, 41] that \[\mu_{N,W}:=\frac{1}{N}\sum_{j=1}^{N}\delta_{\zeta_{j}}\to\mu_{W} \tag{1.6}\] in the weak star sense of measure. From the statistical physics viewpoint, this convergence is quite natural since the weighted energy \(I_{W}\) in (1.4) corresponds to the continuum limit of the discrete Hamiltonians (1.1) and (1.2) after proper renormalisations. (In the case of (1.2), it is required to further assume that \(W(\zeta)=W(\bar{\zeta})\).) Contrary to the density (1.5) of the measure \(\mu_{W}\), there is no general theory on the determination of its support \(S_{W}\). (See however [60] for a general theory on the regularity and [49] on the connectivity of the droplet associated with Hele-Shaw type potentials.) This leads to the following natural question. 
**For a given potential \(W\), what is the precise shape of the associated droplet?** In view of the energy functional (1.4), this is a typical form of an inverse problem in the potential theory and is called an equilibrium measure problem. Beyond the case when \(W\) is radially symmetric (cf. [59, Section IV.6]), this problem is highly non-trivial even for some explicit potentials with a simple form, see [3, 12, 44, 14, 21, 35, 54, 36] for some recent works. Let us also stress that such a problem is important not only because it provides the intrinsic macroscopic behaviours of the point processes (1.3) but also because it plays the role of the first step to perform the Riemann-Hilbert analysis which gives rise to a more detailed statistical information (\(k\)-point functions) of the point processes, see [12, 13, 17, 18, 44, 45, 46, 47, 51, 53, 56] for extensive studies in this direction. In this work, we aim to contribute to the equilibrium problems associated with the potentials (1.7) and (1.14) below, which are of particular interest in the context of non-Hermitian random matrix theory. ### Main results For given parameters \(\tau\in[0,1)\) and \(c\geq 0\), we consider the potential \[Q(\zeta):=\frac{1}{1-\tau^{2}}\Big{(}|\zeta|^{2}-\tau\operatorname{Re}\zeta^{ 2}\Big{)}-2c\log|\zeta|. \tag{1.7}\] When \(\beta=2\), the ensembles (1.3) associated with \(Q\) correspond to the distribution of random eigenvalues of the elliptic Ginibre matrices of size \((c+1)N\) conditioned to have zero eigenvalues with multiplicity \(cN\). We mention that such a model with \(c>0\) was also studied in the context of Quantum Chromodynamics [1]. In (1.7), the logarithmic term can be interpreted as an insertion of a point charge, see [4, 33, 23, 22, 27] for recent investigations of the models (1.3) in this situation. Such insertion of a point charge has also been studied in the theory of planar orthogonal polynomials [12, 13, 17, 51, 52, 53, 16]. On the other hand, the parameter \(\tau\in[0,1)\) captures the non-Hermiticity of the model. To be more precise, the models (1.3) associated with \(Q\) interpolate the complex/symplectic Ginibre ensembles (\(\tau=0\)) with the Gaussian Unitary/Symplectic ensembles (\(\tau=1\)) conditioned to have zero eigenvalues, see Remark 1.6 for further discussion in relation to our main results. For the case \(c=0\), the terminology "elliptic" comes from the fact that the limiting spectrum \(S_{\tau,0}\) is given by the ellipse \[S_{\tau,0}:=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x}{1+\tau}\Big{)}^{2}+ \Big{(}\frac{y}{1-\tau}\Big{)}^{2}\leq 1\Big{\}}, \tag{1.8}\] which is known as the elliptic law, see e.g. [34, 37]. We refer to [50, 5, 8, 57, 20, 19] and references therein for more about the recent progress on the complex elliptic Ginibre ensembles and [43, 6, 24, 25] for their symplectic counterparts. For the rotationally invariant case when \(\tau=0\), it is easy to show that the associated droplet \(S_{0,c}\) is given by \[S_{0,c}:=\Big{\{}(x,y)\in\mathbb{R}^{2}:c\leq x^{2}+y^{2}\leq 1+c\Big{\}}, \tag{1.9}\] see e.g. [59, Section IV.6] and [26, Section 5.2]. The primary goal of this work is to determine the precise shape of the droplet associated with the potential (1.7) for general \(\tau\in[0,1)\) and \(c\geq 0\). For this, we set some notations. Let us write \[\tau_{c}:=\frac{1}{1+2c} \tag{1.10}\] for the critical non-Hermiticity parameter. 
For \(\tau\in(\tau_{c},1)\), we define \[f(z)\equiv f_{\tau}(z):=\frac{(1+\tau)(1+2c)}{2}\frac{(1-az)(z-a\tau)^{2}}{z(z- a)},\qquad a=-\frac{1}{\sqrt{\tau(1+2c)}}. \tag{1.11}\] We are now ready to present our main result. **Theorem 1.1**.: _Let \(Q\) be given by (1.7). Then the droplet \(S\equiv S_{\tau,c}=S_{Q}\) of the equilibrium measure_ \[d\mu_{Q}(z)=\frac{1}{1-\tau^{2}}\,\mathbb{1}_{S}(z)\,dA(z) \tag{1.12}\] _is given as follows._ * **(Post-critical case)** _If_ \(\tau\in(0,\tau_{c}]\)_, we have_ (1.13) \[S_{\tau,c}=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x}{(1+\tau)\sqrt{1+c}} \Big{)}^{2}+\Big{(}\frac{y}{(1-\tau)\sqrt{1+c}}\Big{)}^{2}\leq 1\,,\,\frac{x^{2}+y^{2 }}{(1-\tau^{2})c}\geq 1\Big{\}}.\] * **(Pre-critical case)** _If_ \(\tau\in[\tau_{c},1)\)_, the droplet_ \(S_{\tau,c}\) _is the closure of the interior of the real analytic Jordan curves given by the image of the unit circle with respect to the map_ \(z\mapsto\pm\sqrt{f(z)}\)_, where_ \(f\) _is given by (_1.11_)._

Figure 1. The droplet \(S_{\tau,c}\), where \(c=1\), and a Fekete point configuration with \(N=2048\).

Note that if \(c=0\) (resp., \(\tau=0\)), the droplet (1.13) corresponds to (1.8) (resp., (1.9)). We mention that the post-critical case of Theorem 1.1 is indeed shown in a more general setup, see (2.1) and Proposition 2.1 below. **Remark 1.2** (Phase transition of the droplet).: _In Theorem 1.1, we observe that if \(c>0\), the topology of the droplet reveals a phase transition. Namely, for the post-critical case when \(\tau<\tau_{c}\), the droplet is a doubly connected domain, whereas for the pre-critical case \(\tau>\tau_{c}\), it consists of two disconnected components. At criticality when \(\tau=\tau_{c}\), the droplet contains two symmetric double points. We refer to [12, 14, 3, 36] for further models whose droplets reveal various phase transitions. Let us also mention that recently, there have been several works on the models (1.3) with multi-component droplets, see e.g. [13, 30, 9, 10]. In this pre-critical regime, some theta-function oscillations are expected to appear for various kinds of statistics; cf. [32, 9]. The precise asymptotic behaviours of the partition function would also be interesting in connection with the conjecture that these depend on the Euler index of the droplets, see [42, 29] and [26, Sections 4.1 and 5.3] for further discussion._ **Remark 1.3** (Fekete points and numerics).: _A configuration \(\{\zeta_{j}\}_{j=1}^{N}\) which makes the Hamiltonians (1.1) or (1.2) minimal is known as a Fekete configuration. This can be interpreted as the ensembles (1.3) in the low-temperature limit \(\beta=\infty\), see e.g. [61, 58, 7, 11] and references therein. Since the droplet is independent of the inverse temperature \(\beta>0\) (excluding the high-temperature regime [2] when \(\beta=O(1/N)\)), the Fekete points are useful to numerically observe the shape of the droplets. In Figures 1 and 2, Fekete configurations associated with the Hamiltonian (1.1) are also presented, which show good agreement with Theorems 1.1 and 1.4._ Notice that the potential (1.7) and the droplet \(S_{\tau,c}\) are invariant under the map \(\zeta\mapsto-\zeta\). We now discuss an equivalent formulation of Theorem 1.1 under the removal of such symmetry. (See [36, Section 1.3] for a similar discussion in a vector equilibrium problem on a sphere with point charges.) The motivation for this formulation will be clear in the next subsection.
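Before turning to that formulation, the following sketch makes the numerics of Remark 1.3 concrete. It assumes the standard Coulomb-gas form of the Hamiltonian (1.1), whose renormalised energy \(E=-\tfrac{1}{N^{2}}\sum_{j\neq k}\log|\zeta_{j}-\zeta_{k}|+\tfrac{1}{N}\sum_{j}Q(\zeta_{j})\) has (1.4) as its continuum limit, and approximates a Fekete configuration by plain gradient descent.

```python
import numpy as np

# Approximate Fekete points for Q in (1.7) by gradient descent (cf. Remark 1.3).
rng = np.random.default_rng(1)
tau, c, N = 0.8, 1.0, 200            # tau > tau_c = 1/3: pre-critical regime

z = 1.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def grad(z):                          # Wirtinger gradient dE/d(conj z_j)
    dz = z[:, None] - z[None, :]
    np.fill_diagonal(dz, np.inf)      # remove self-interaction
    coulomb = -np.conj(np.sum(1.0 / dz, axis=1)) / N**2
    confine = ((z - tau * np.conj(z)) / (1 - tau**2) - c / np.conj(z)) / N
    return coulomb + confine

for _ in range(2000):
    z -= 0.1 * N * grad(z)            # crude fixed-step descent

# In the pre-critical regime the points split into two symmetric islands.
print("points with Re z > 0:", np.sum(z.real > 0), "out of", N)
```

We now return to the announced equivalent formulation.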
For this purpose, we denote \[\widehat{Q}(\zeta):=\frac{2}{1-\tau^{2}}\Big{(}|\zeta|-\tau\operatorname{Re} \zeta\Big{)}-2c\log|\zeta|. \tag{1.14}\] By definition, the potentials \(Q\) in (1.7) and \(\widehat{Q}\) in (1.14) are related as \[Q(\zeta)=\frac{1}{2}\widehat{Q}(\zeta^{2}). \tag{1.15}\] Denoting by \(\widehat{S}\) the droplet associated with \(\widehat{Q}\), it follows from [14, Lemma 1] that \[S=\{\zeta\in\mathbb{C}:\zeta^{2}\in\widehat{S}\}. \tag{1.16}\] Due to the relation (1.16) and \[\Delta\widehat{Q}(\zeta)=\frac{1}{2(1-\tau^{2})}\frac{1}{|\zeta|}, \tag{1.17}\] we have the following equivalent formulation of Theorem 1.1. **Theorem 1.4**.: _Let \(\widehat{Q}\) be given by (1.14). Then the droplet \(\widehat{S}\equiv\widehat{S}_{\tau,c}=S_{\widehat{Q}}\) of the equilibrium measure_ \[d\mu_{\widehat{Q}}(\zeta)=\frac{1}{2(1-\tau^{2})}\frac{1}{|\zeta|}\mathbb{1}_ {\widehat{S}}(\zeta)\,dA(\zeta) \tag{1.18}\] _is given as follows._ * **(Post-critical case)** _If_ \(\tau\in(0,\tau_{c}]\)_, we have_ (1.19) \[\widehat{S}_{\tau,c}=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x-2\tau(1+c) }{(1+\tau^{2})(1+c)}\Big{)}^{2}+\Big{(}\frac{y}{(1-\tau^{2})(1+c)}\Big{)}^{2} \leq 1\,,\,\frac{x^{2}+y^{2}}{(1-\tau^{2})^{2}c^{2}}\geq 1\Big{\}}.\] * **(Pre-critical case)** _If_ \(\tau\in[\tau_{c},1)\)_, the droplet_ \(\widehat{S}_{\tau,c}\) _is the closure of the interior of the real analytic Jordan curve given by the image of the unit circle with respect to the rational map_ \(z\mapsto f(z)\)_, where_ \(f\) _is given by (_1.11_)._ **Remark 1.5** (Joukowsky transform in the critical case).: _If \(\tau=\tau_{c}\) with (1.10), we have \(a=1/a=-1\). Thus in this critical case, the rational function \(f_{\tau}\) in (1.11) is simplified as_ \[f_{\tau_{c}}(z)=(1+c)\frac{(z+\tau)^{2}}{z}=(1+c)\Big{(}z+2\tau+\frac{\tau^{2} }{z}\Big{)}. \tag{1.20}\] _Note that compared to the general case (1.11), there is one less zero and one less pole in (1.20). Indeed, in the critical case, the rational map \(f_{\tau_{c}}\) is a (shifted) Joukowsky transform_ \[f_{\tau_{c}}:\mathbb{D}^{c}\to\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x-2 \tau(1+c)}{(1+\tau^{2})(1+c)}\Big{)}^{2}+\Big{(}\frac{y}{(1-\tau^{2})(1+c)} \Big{)}^{2}\geq 1\Big{\}}. \tag{1.21}\] _In [3], a similar type of Joukowsky transform was used to solve an equilibrium measure problem. 
For the models under consideration in the present work, due to a more complicated form of the rational function (1.11), the required analysis for the associated equilibrium problem turns out to be more involved._ **Remark 1.6** (A non-Hermitian extension of the Marchenko-Pastur distribution).: _In the Hermitian limit \(\tau\uparrow 1,\) by (1.7) and (1.14), we have_ \[\lim_{\tau\uparrow 1}Q(x+iy)=V(x):=\begin{cases}\dfrac{x^{2}}{2}-2c \log|x|,&\text{if }y=0,\\ +\infty&\text{otherwise},\end{cases} \tag{1.22}\] \[\lim_{\tau\uparrow 1}\widehat{Q}(x+iy)=\widehat{V}(x):=\begin{cases}x-2c \log|x|,&\text{if }y=0,\,x>0,\\ +\infty&\text{otherwise}.\end{cases} \tag{1.23}\] _Then the associated equilibrium measures are given by the well-known Marchenko-Pastur law (with squared variables) [38, Proposition 3.4.1], i.e._ \[d\mu_{V}(x)=\frac{1}{2\pi|x|}\,\sqrt{(\lambda_{+}^{2}-x^{2})(x^{2}-\lambda_{-}^{2})}\cdot\mathbb{1}_{[-\lambda_{+},-\lambda_{-}]\cup[\lambda_{-},\lambda_{+}]}\,dx, \tag{1.24}\] \[d\mu_{\widehat{V}}(x)=\frac{1}{2\pi x}\,\sqrt{(\lambda_{+}^{2}-x)(x-\lambda_{-}^{2})}\cdot\mathbb{1}_{[\lambda_{-}^{2},\lambda_{+}^{2}]}\,dx, \tag{1.25}\] _where \(\lambda_{\pm}:=\sqrt{2c+1}\pm 1\), cf. Remark 2.3. Therefore one can interpret Theorem 1.1 (resp., Theorem 1.4) as a non-Hermitian generalisation of the Marchenko-Pastur distribution (1.24) (resp., (1.25)), see [8, Section 2] for more about the geometric meaning with the notion of the statistical cross-section. We also refer to [3] for another non-Hermitian extension of (1.24) and (1.25) in the context of the chiral Ginibre ensembles._ **Remark 1.7** (Inclusion relations of the droplets).: _Let us write_ \[S_{1}=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x}{1+\tau}\Big{)}^{2}+\Big{(} \frac{y}{1-\tau}\Big{)}^{2}\leq 1+c\Big{\}},\qquad S_{2}:=\Big{\{}(x,y)\in \mathbb{R}^{2}:x^{2}+y^{2}\leq(1-\tau^{2})c\Big{\}} \tag{1.26}\] _and denote by \(\widehat{S}_{j}\)\((j=1,2)\) the image of \(S_{j}\) under the map \(z\mapsto z^{2}\). Then it follows from the definition (1.10) that_ \[\tau\in(0,\tau_{c})\qquad\text{if and only if}\qquad S_{1}^{c}\cap S_{2}=\emptyset. \tag{1.27}\] _By Theorems 1.1 and 1.4, for general \(\tau\in[0,1)\) and \(c\geq 0\), one can observe that_ \[S_{\tau,c}\subseteq S_{1}\cap(\operatorname{Int}S_{2})^{c},\qquad\widehat{S} _{\tau,c}\subseteq\widehat{S}_{1}\cap(\operatorname{Int}\widehat{S}_{2})^{c}. \tag{1.28}\] _Here, equality in (1.28) holds if and only if we are in the post-critical case. (This property holds in a more general setup, see Proposition 2.1.) On the other hand, in the pre-critical case one can interpret that the particles in \(S_{1}^{c}\cap S_{2}\) are smeared out to \(S_{1}\cap S_{2}^{c}\), which makes the inclusion relations (1.28) hold strictly, see Figure 3._ ### Outline of the proof Recall that \(\mu_{W}\) is a unique minimiser of the energy (1.4). It is well known that the equilibrium measure \(\mu_{W}\) is characterised by the variational conditions (see [59, p.27]) \[\int\log\frac{1}{|\zeta-z|^{2}}\,d\mu_{W}(z)+W(\zeta)=C,\quad\text{q.e.}\quad\text{if }\zeta\in S_{W}; \tag{1.29}\] \[\int\log\frac{1}{|\zeta-z|^{2}}\,d\mu_{W}(z)+W(\zeta)\geq C,\quad\text{q.e.}\quad\text{if }\zeta\notin S_{W}. \tag{1.30}\] Here, q.e. stands for quasi-everywhere. (Nevertheless, this notion is not important in the sequel as we will show that for the models we consider the conditions (1.29) and (1.30) indeed hold everywhere.)
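Although the proofs are postponed, the pre-critical droplets are easy to visualise from (1.11) alone; the following sketch traces the boundary curves appearing in Theorems 1.1 (ii) and 1.4 (ii) by pushing the unit circle through the rational map \(f\).

```python
import numpy as np

# Trace the pre-critical droplet boundaries using the rational map f in (1.11).
tau, c = 0.8, 1.0                          # tau_c = 1/(1+2c) = 1/3 < tau
a = -1.0 / np.sqrt(tau * (1 + 2 * c))
d = (1 + tau) * (1 + 2 * c) / 2
f = lambda z: d * (1 - a * z) * (z - a * tau)**2 / (z * (z - a))

w = np.exp(1j * np.linspace(0, 2 * np.pi, 1000))
bdry_hat = f(w)                            # Jordan curve bounding S^hat
bdry = np.sqrt(bdry_hat)                   # one component of the boundary of S;
                                           # the other component is -bdry
# f has real coefficients, so the curves are symmetric about the real axis:
print(np.max(np.abs(f(np.conj(w)) - np.conj(bdry_hat))))   # ~ 0
```

With this picture in mind, we return to the proof strategy.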
Due to the uniqueness of the equilibrium measure, all we need to show is that if \(W=Q\), then \(\mu_{Q}\) in (1.12) satisfies the variational principles (1.29) and (1.30). Equivalently, by (1.16), it also suffices to show the variational principles for the equilibrium measure \(\mu_{\widehat{Q}}\) in (1.18). However, it is far from being obvious to obtain the "correct candidate" of the droplets. Perhaps one may think that, at least for the post-critical case, the shape of the droplet (1.13) is quite natural given the well-known cases (1.8) and (1.9) as well as the fact that the area of \(S_{\tau,c}\) should be \((1-\tau^{2})\pi\). On the other hand, for the pre-critical case, one can easily notice that there is some secret behind deriving the explicit formula of the rational function (1.11). To derive the correct candidate, we use the conformal mapping method with the help of the Schwarz function, see Appendix A. **Remark 1.8** (Removal of symmetry).: _We emphasise that the conformal mapping method does not work for the multi-component droplet, i.e. the pre-critical case of Theorem 1.1. This is essentially due to the lack of the Riemann mapping theorem. Nevertheless, one can observe that once we remove the symmetry \(\zeta\mapsto-\zeta\), the droplet in the pre-critical case of Theorem 1.4 is simply connected. This explains the reason why we need the idea of removing symmetry._

Figure 3. The droplets \(S_{\tau,c}\) and \(\widehat{S}_{\tau,c}\) in the pre-critical case, where \(c=2\) and \(\tau=0.7>\tau_{c}\). Here, the dashed lines display the boundaries of \(S_{j}\) and \(\widehat{S}_{j}\) \((j=1,2)\).

The rest of this paper is organised as follows. * In Section 2, we prove Theorems 1.1 and 1.4. In Subsection 2.1, we show the post-critical case of Theorem 1.1 in a more general setup, see Proposition 2.1. On the other hand, in Subsection 2.2, we show the pre-critical case of Theorem 1.4. Then by the relation (1.16), these complete the proof of our main results. * This article contains two appendices. In Appendix A, we explain the conformal mapping method to derive the "correct candidate" of the droplets. In Appendix B, we present a way to solve a one-dimensional equilibrium problem in Remark 2.3, which shares a common feature with the conformal mapping method. These appendices are intended only for instructive purposes, and readers who only want the proof of the main theorems may stop at the end of Section 2. ## 2. Proof of main theorem In this section, we show Theorems 1.1 and 1.4. ### Post-critical case Extending (1.7), we consider the potential \[Q_{p}(\zeta):=\frac{1}{1-\tau^{2}}\Big{(}|\zeta|^{2}-\tau\operatorname{Re} \zeta^{2}\Big{)}-2c\log|\zeta-p|,\qquad p\in\mathbb{C}. \tag{2.1}\] For the case \(\tau=0\), the shape of the droplet associated with the potential (2.1) was fully characterised in [12]. (In this case, it suffices to consider the case \(p\geq 0\) due to the rotational invariance.) In particular, it was shown that if \[c<\frac{(1-p^{2})^{2}}{4p^{2}},\qquad\tau=0,\qquad p\geq 0, \tag{2.2}\] the droplet is given by \(S=\overline{\mathbb{D}(0,\sqrt{1+c})}\setminus\mathbb{D}(p,\sqrt{c})\), where \(\mathbb{D}(p,R)\) is the disc with centre \(p\) and radius \(R\); cf. Remark A.5 for the other case \(c>(1-p^{2})^{2}/(4p^{2})\).
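As a sanity check on the \(\tau=0\) droplet just described, the following Monte Carlo sketch verifies that it carries total mass one. Here we take \(dA\) to be the area measure normalised by \(\pi\) (an assumption on the convention, consistent with the computations below), so that by (1.5) the density is the constant \(\Delta Q_{p}\equiv 1\) for \(\tau=0\).

```python
import numpy as np

# Monte Carlo mass check for S = D(0, sqrt(1+c)) \ D(p, sqrt(c)) at tau = 0.
rng = np.random.default_rng(6)
c, p = 0.5, 0.4
assert c < (1 - p**2)**2 / (4 * p**2)      # condition (2.2)
R_out, R_in = np.sqrt(1 + c), np.sqrt(c)
M = 10**6
x = rng.uniform(-R_out, R_out, M)
y = rng.uniform(-R_out, R_out, M)
in_S = (x**2 + y**2 <= R_out**2) & ((x - p)**2 + y**2 >= R_in**2)
print(np.mean(in_S) * (2 * R_out)**2 / np.pi)    # total mass, ~ 1
```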
To describe the droplets associated with \(Q_{p}\), we denote \[S_{1}:=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x}{1+\tau}\Big{)}^{2}+ \Big{(}\frac{y}{1-\tau}\Big{)}^{2}\leq 1+c\Big{\}} \tag{2.3}\] and \[S_{2}:=\Big{\{}(x,y)\in\mathbb{R}^{2}:(x-\operatorname{Re}p)^{2}+(y- \operatorname{Im}p)^{2}\leq(1-\tau^{2})c\Big{\}}. \tag{2.4}\] Then we obtain the following. **Proposition 2.1**.: _Suppose that the parameters \(\tau,c\in\mathbb{R}\) and \(p\in\mathbb{C}\) are given to satisfy_ \[S_{2}\subset S_{1}, \tag{2.5}\] _where \(S_{1}\) and \(S_{2}\) are given by (2.3) and (2.4). Then the droplet \(S\equiv S_{Q_{p}}\) associated with (2.1) is given by_ \[S=S_{1}\cap(\operatorname{Int}S_{2})^{c}. \tag{2.6}\] See Figure 4 for the shape of the droplets and numerical simulations of Fekete point configurations. We remark that with slight modifications, Proposition 2.1 can be further extended to the case with multiple point charges, i.e. the potential of the form \[\frac{1}{1-\tau^{2}}\Big{(}|\zeta|^{2}-\tau\operatorname{Re}\zeta^{2}\Big{)}- 2\sum c_{j}\log|\zeta-p_{j}|,\qquad p_{j}\in\mathbb{C},\quad c_{j}\geq 0. \tag{2.7}\] (See Remark A.4 for a related discussion.) Let us also mention that a similar statement for an equilibrium problem on the sphere was shown in [21]. For a treatment of a more general case, we refer the reader to [35, 54, 36]. **Remark 2.2**.: _If \(p=0\), the condition (2.5) corresponds to_ \[\tau<\frac{1}{1+2c}=\tau_{c}. \tag{2.8}\] _Therefore Proposition 2.1 for the special value \(p=0\) gives Theorem 1.1 (i). As a consequence, by (1.16), Theorem 1.4 (i) also follows. We also mention that if \(\tau=0\) and \(p>0\), the condition (2.5) coincides with (2.2)._ **Remark 2.3** (Equilibrium measure in the Hermitian limit).: _Before moving on to the planar equilibrium problem for (2.1), we first discuss the one-dimensional problem arising in the Hermitian limit. For \(p\in\mathbb{R}\), the Hermitian limit \(\tau\uparrow 1\) of the potential \(Q_{p}\) is given by_ \[\lim_{\tau\uparrow 1}Q_{p}(x+iy)=V_{p}(x):=\begin{cases}\dfrac{x^{2}}{2}-2c \log|x-p|,&\text{if }y=0,\\ +\infty&\text{otherwise}.\end{cases} \tag{2.9}\] _Then one can show that the associated equilibrium measure \(\mu_{V}\equiv\mu_{V_{p}}\) is given by_ \[\frac{d\mu_{V}(x)}{dx}=\frac{\sqrt{-\prod_{j=1}^{4}(x-\lambda_{j})}}{2\pi|x-p |}\cdot\mathbbm{1}_{[\lambda_{1},\lambda_{2}]\cup[\lambda_{3},\lambda_{4}]}(x), \tag{2.10}\] _where_ \[\lambda_{1}=\frac{p-2-\sqrt{(p+2)^{2}+8c}}{2},\qquad\lambda_{2}=\frac{p+2-\sqrt{(p-2)^{2}+8c}}{2}, \tag{2.11}\] \[\lambda_{3}=\frac{p-2+\sqrt{(p+2)^{2}+8c}}{2},\qquad\lambda_{4}=\frac{p+2+\sqrt{(p-2)^{2}+8c}}{2}. \tag{2.12}\] _We remark that when \(p=0\), it recovers (1.24). See Figure 5 for the graphs of the equilibrium measure \(\mu_{V_{p}}\). The equilibrium measure (2.10) follows from the standard method using the Stieltjes transform and the Sokhotski-Plemelj inversion formula. For the reader's convenience, we provide a proof of (2.10) in Appendix B._

Figure 4. The droplet \(S\) in Proposition 2.1, where \(\tau=1/3\) and \(c=1/7\). Here, a Fekete point configuration with \(N=2048\) is also displayed.

Figure 5. Graphs of the equilibrium measure \(\mu_{V_{p}}\), where \(c=1\).

In the rest of this subsection, we prove Proposition 2.1. First, let us show the following elementary lemmas.
**Lemma 2.4**.: _For \(a,b>0\), let_ \[K:=\Big{\{}(x,y)\in\mathbb{R}^{2}:\Big{(}\frac{x}{a}\Big{)}^{2}+\Big{(}\frac{y}{ b}\Big{)}^{2}\leq 1\Big{\}}.\] _Then we have_ \[\int_{K}\frac{1}{\zeta-z}\,dA(z)=\begin{cases}\bar{\zeta}-\frac{a-b}{a+b}\,\zeta &\text{if }\zeta\in K,\\ \frac{2ab}{a^{2}-b^{2}}\Big{(}\zeta-\sqrt{\zeta^{2}-a^{2}+b^{2}} \Big{)}&\text{otherwise.}\end{cases} \tag{2.13}\] _In particular, for \(\zeta\in K\), there exists a constant \(c_{0}\in\mathbb{R}\) such that_ \[\int_{K}\log|\zeta-z|^{2}\,dA(z)=|\zeta|^{2}-\frac{a-b}{a+b}\operatorname{Re }\zeta^{2}+c_{0}. \tag{2.14}\] **Remark 2.5**.: _The Cauchy transform in (2.13) is useful to explicitly compute the moments of the equilibrium measure. Namely, by definition, we have_ \[\int_{K}\frac{1}{\zeta-z}\,dA(z)=\frac{1}{\zeta}\sum_{k=0}^{\infty}\frac{1}{ \zeta^{k}}\int_{K}z^{k}\,dA(z),\qquad\zeta\to\infty.\] _On the other hand, we have_ \[\zeta-\sqrt{\zeta^{2}-a^{2}+b^{2}}=\frac{a^{2}-b^{2}}{\zeta}\sum_{k=0}^{\infty }\binom{1/2}{k+1}\frac{(b^{2}-a^{2})^{k}}{\zeta^{2k}},\qquad\zeta\to\infty.\] _Combining the above equations with (2.13), we obtain that for any non-negative integer \(k\),_ \[\frac{1}{ab}\int_{K}z^{2k}\,dA(z)=2\binom{1/2}{k+1}(b^{2}-a^{2})^{k}. \tag{2.15}\] Proof of Lemma 2.4.: Recall that \(\mathbb{D}\) is the unit disc centred at the origin. Then the Joukowsky transform \(f:\bar{\mathbb{D}}^{c}\to K^{c}\) is given by \[f(z)=\frac{a+b}{2}\,z+\frac{a-b}{2}\,\frac{1}{z}. \tag{2.16}\] By applying Green's formula, we have \[\int_{K}\frac{1}{\zeta-z}\,dA(z)=\frac{1}{2\pi i}\int_{\partial K}\frac{\bar{ z}}{\zeta-z}\,dz+\bar{\zeta}\cdot\mathbbm{1}_{\{\zeta\in K\}}. \tag{2.17}\] Furthermore, by the change of variable \(z=f(w)\), it follows that \[\int_{\partial K}\frac{\bar{z}}{\zeta-z}\,dz=\int_{\partial\mathbb{D}}\frac{ \overline{f(1/\bar{w})}}{\zeta-f(w)}f^{\prime}(w)\,dw=\int_{\partial\mathbb{D }}g_{\zeta}(w)\,dw, \tag{2.18}\] where \(g_{\zeta}\) is the rational function given by \[g_{\zeta}(w):=\frac{1}{\zeta-f(w)}\Big{(}\frac{a+b}{2}\frac{1}{w}+\frac{a-b}{2 }w\Big{)}\Big{(}\frac{a+b}{2}-\frac{a-b}{2}\frac{1}{w^{2}}\Big{)}. \tag{2.19}\] Observe that \[\zeta=f(w)\qquad\text{if and only if}\qquad w=w_{\zeta}^{\pm}:=\frac{\zeta\pm \sqrt{\zeta^{2}-a^{2}+b^{2}}}{a+b},\] i.e. the points \(w_{\zeta}^{\pm}\) are solutions to the quadratic equation \[(a+b)w^{2}-2\zeta\,w+(a-b)=0.\] Here, the branch of the square root is chosen such that \[w_{\zeta}^{-}\to 0\qquad\text{as }\zeta\to\infty.\] By the above observation, the function \(g_{\zeta}\) has poles only at \[0,\qquad w_{\zeta}^{+},\qquad w_{\zeta}^{-}.\] Moreover, note that \[\zeta\in K\qquad\text{if and only if}\qquad w_{\zeta}^{\pm}\in\mathbb{D}.\] Notice that if \(\zeta\in K^{c}\), then \(w_{\zeta}^{-}\in\mathbb{D}\) and \(w_{\zeta}^{+}\in\mathbb{D}^{c}\). Using the residue calculus, we have \[\operatorname*{Res}_{w=0}\Big{[}g_{\zeta}(w)\Big{]}=\frac{a+b}{a-b}\,\zeta. \tag{2.20}\] On the other hand, we have \[\operatorname*{Res}_{w=w_{\zeta}^{\pm}}\Big{[}g_{\zeta}(w)\Big{]}=-\overline{ f(1/\bar{w}_{\zeta}^{\pm})}=-\frac{a+b}{2}\frac{1}{w_{\zeta}^{\pm}}-\frac{a-b}{2}w_{ \zeta}^{\pm}. \tag{2.21}\] In particular, \[\operatorname*{Res}_{w=w_{\zeta}^{+}}\Big{[}g_{\zeta}(w)\Big{]}+\operatorname* {Res}_{w=w_{\zeta}^{-}}\Big{[}g_{\zeta}(w)\Big{]}=-2\frac{a^{2}+b^{2}}{a^{2}- b^{2}}\,\zeta. \tag{2.22}\] Combining all of the above, we obtain the desired identity (2.13). The second assertion immediately follows from (2.13) and the real-valuedness of \(\zeta\mapsto\int_{K}\log|\zeta-z|^{2}\,dA(z)\).
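The moment identity (2.15) is easy to corroborate numerically; the sketch below does so by Monte Carlo, again assuming that \(dA\) denotes the area measure normalised by \(\pi\), consistent with the residue computation above.

```python
import numpy as np

# Monte Carlo check of (2.15):
#   (1/(ab)) int_K z^{2k} dA = 2 binom(1/2, k+1) (b^2 - a^2)^k.
rng = np.random.default_rng(2)
a, b, k = 1.3, 0.6, 2

def binom_half(m):                  # binom(1/2, m) for an integer m >= 0
    out = 1.0
    for j in range(m):
        out *= (0.5 - j) / (j + 1)
    return out

M = 10**6
x = rng.uniform(-a, a, M)
y = rng.uniform(-b, b, M)
z = x + 1j * y
inside = (x / a)**2 + (y / b)**2 <= 1
integral = np.mean(np.where(inside, z**(2 * k), 0)) * (4 * a * b / np.pi)
print(integral.real / (a * b), 2 * binom_half(k + 1) * (b**2 - a**2)**k)
```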
**Lemma 2.6**.: _For \(R>0\) and \(p\in\mathbb{C}\) we have_ \[\int_{\mathbb{D}(p,R)}\log|\zeta-z|\,dA(z)=\begin{cases}R^{2}\log|\zeta-p|& \text{if }\zeta\notin\mathbb{D}(p,R),\\ R^{2}\log R-\frac{R^{2}}{2}+\frac{|\zeta-p|^{2}}{2}&\text{otherwise}.\end{cases} \tag{2.23}\] Proof.: First, recall the well-known Jensen's formula: for \(r>0\), \[\frac{1}{2\pi}\int_{0}^{2\pi}\log|\zeta-re^{i\theta}|\,d\theta=\begin{cases} \log r&\text{if }r>|\zeta|,\\ \log|\zeta|&\text{otherwise}.\end{cases} \tag{2.24}\] By the change of variables, we have \[\int_{\mathbb{D}(p,R)}\log|\zeta-z|\,dA(z)=\int_{\mathbb{D}(0,R)}\log|\zeta-p -z|\,dA(z)=\frac{1}{\pi}\int_{0}^{R}r\int_{0}^{2\pi}\log|\zeta-p-re^{i\theta} |\,d\theta\,dr.\] Suppose that \(\zeta\notin\mathbb{D}(p,R)\). Then by applying (2.24), we have \[\frac{1}{\pi}\int_{0}^{R}r\int_{0}^{2\pi}\log|\zeta-p-re^{i\theta}|\,d\theta\, dr=2\int_{0}^{R}r\,\log|\zeta-p|\,dr=R^{2}\log|\zeta-p|.\] On the other hand, if \(\zeta\in\mathbb{D}(p,R)\), we have \[\frac{1}{\pi}\int_{0}^{R}r\int_{0}^{2\pi}\log|\zeta-p-re^{i\theta }|\,d\theta\,dr=2\int_{0}^{|\zeta-p|}r\,\log|\zeta-p|\,dr+2\int_{|\zeta-p|}^{R}r \,\log r\,dr=R^{2}\log R-\frac{R^{2}}{2}+\frac{|\zeta-p|^{2}}{2},\] which completes the proof. We are now ready to complete the proof of Proposition 2.1. Proof of Proposition 2.1.: Note that by (1.5), the equilibrium measure \(\mu\) associated with \(Q_{p}\) is of the form \[d\mu(z):=\Delta Q_{p}(z)\cdot\mathbb{1}_{S}(z)\,dA(z)=\frac{1}{1-\tau^{2}} \cdot\mathbb{1}_{S}(z)\,dA(z). \tag{2.25}\] Due to the assumption (2.5), we have \[\int\log\frac{1}{|\zeta-z|^{2}}\,d\mu(z)=\frac{1}{1-\tau^{2}}\Big{(}\int_{S_{1 }}\log\frac{1}{|\zeta-z|^{2}}\,dA(z)-\int_{S_{2}}\log\frac{1}{|\zeta-z|^{2}}\, dA(z)\Big{)}.\] Note that by Lemma 2.4, there exists a constant \(c_{0}\) such that \[\int_{S_{1}}\log\frac{1}{|\zeta-z|^{2}}\,dA(z)=-|\zeta|^{2}+\tau\,\mathrm{Re}\, \zeta^{2}-c_{0}. \tag{2.26}\] On the other hand, by Lemma 2.6, we have \[\int_{S_{2}}\log\frac{1}{|\zeta-z|^{2}}\,dA(z)=-2(1-\tau^{2})c\,\log|\zeta-p|. \tag{2.27}\] Combining (2.26), (2.27) and (2.1), we obtain \[\int\log\frac{1}{|\zeta-z|^{2}}\,d\mu(z)=-Q_{p}(\zeta)-\frac{c_{0}}{1-\tau^{2}}, \tag{2.28}\] which leads to (1.29). Next, we show the variational inequality (1.30). Note that if \(\zeta\in S_{2}\), it immediately follows from Lemma 2.6. Thus it is enough to verify (1.30) for the case \(\zeta\in S_{1}^{c}\). Let \[H_{p}(\zeta):=\int\log\frac{1}{|\zeta-z|^{2}}\,d\mu(z)+Q_{p}(\zeta). \tag{2.29}\] Suppose that the variational inequality (1.30) does not hold. Then since \(H_{p}(\zeta)\to\infty\) as \(\zeta\to\infty\), there exists \(\zeta_{*}\in S_{1}^{c}\) such that \[\partial_{\zeta}H_{p}(\zeta)|_{\zeta=\zeta_{*}}=0. \tag{2.30}\] On the other hand, by Lemmas 2.4 and 2.6, if \(\zeta\in S_{1}^{c}\), the Cauchy transform of the measure \(\mu\) is computed as \[\int\frac{d\mu(z)}{\zeta-z}=\frac{1}{2\tau}\Big{(}\zeta-\sqrt{\zeta^{2}-4\tau( 1+c)}\Big{)}-\frac{c}{\zeta-p}. \tag{2.31}\] Together with (2.1), this gives rise to \[\partial_{\zeta}Q_{p}(\zeta)-\int\frac{d\mu(z)}{\zeta-z}=\frac{1}{1-\tau^{2}} \Big{(}\bar{\zeta}-\tau\zeta\Big{)}-\frac{1}{2\tau}\Big{(}\zeta-\sqrt{ \zeta^{2}-4\tau(1+c)}\Big{)}. \tag{2.32}\] Then it follows that the condition \(\partial_{\zeta}H_{p}(\zeta)=0\) is equivalent to \[(1+\tau^{2})|\zeta|^{2}-\tau(\zeta^{2}+\bar{\zeta}^{2})=(1-\tau^{2})^{2}(1+c). \tag{2.33}\] Therefore, by (2.3), one can notice that \(\partial_{\zeta}H_{p}(\zeta)=0\) if and only if \(\zeta\in\partial S_{1}\).
This yields a contradiction with the assumption \(\zeta_{*}\in S_{1}^{c}.\) Therefore we conclude that the variational inequality (1.30) holds for \(\zeta\in S^{c}\), which completes the proof. **Remark 2.7**.: _Let us denote by_ \[m_{k}:=\int z^{k}\,d\mu(z) \tag{2.34}\] _the \(k\)-th moment of the equilibrium measure. Notice that the Cauchy transform of \(\mu\) satisfies the asymptotic expansion_ \[\int\frac{d\mu(z)}{\zeta-z}=\frac{1}{\zeta}\sum_{k=0}^{\infty}\frac{m_{k}}{ \zeta^{k}},\qquad\zeta\to\infty. \tag{2.35}\] _Using this property and (2.31), after straightforward computations, one can verify that the equilibrium measure \(\mu\) in Proposition 2.1 has the moments_ \[m_{2k}=2\frac{(2k-1)!}{(k-1)!(k+1)!}\tau^{k}(1+c)^{k+1}-c\,p^{2k},\qquad m_{2k +1}=-c\,p^{2k+1}. \tag{2.36}\] _Notice in particular that if \(p=0\), all odd moments vanish._ ### Pre-critical case In this subsection, we show Theorem 1.4 (ii). Then by (1.16), Theorem 1.1 (ii) follows. Proof of Theorem 1.4 (ii).: Recall that \(\widehat{Q}\) is given by (1.14) and that all we need to show is the variational principles (1.29) and (1.30) for \(W=\widehat{Q}.\) For this, similarly to the above, let \[H(\zeta):=\int\log\frac{1}{|\zeta-z|^{2}}\,d\widehat{\mu}(z)+\widehat{Q}(\zeta), \tag{2.37}\] where \(\widehat{\mu}\) is the equilibrium measure associated with \(\widehat{Q}\). Then \[\partial_{\zeta}H(\zeta)=\partial_{\zeta}\widehat{Q}(\zeta)-C(\zeta)=\frac{1 }{1-\tau^{2}}\Big{(}\sqrt{\frac{\bar{\zeta}}{\zeta}}-\tau\Big{)}-\frac{c}{ \zeta}-C(\zeta), \tag{2.38}\] where \(C(\zeta)\) is the Cauchy transform of \(\widehat{\mu}\) given by \[C(\zeta)=\frac{1}{2(1-\tau^{2})}\int_{\widehat{S}}\frac{1}{\zeta-z}\frac{1}{| z|}\,dA(z). \tag{2.39}\] Here, we have used (1.5). Applying Green's formula, we have \[(1-\tau^{2})C(\zeta)=\frac{1}{2\pi i}\int_{\partial\widehat{S}}\frac{1}{ \zeta-z}\sqrt{\frac{\bar{z}}{z}}\,dz+\sqrt{\frac{\bar{\zeta}}{\zeta}}\cdot \mathbb{1}_{\{\zeta\in\mathrm{Int}(\widehat{S})\}}. \tag{2.40}\] Recall that \(f\) is given by (1.11). Let \[g(w):=\sqrt{\frac{\overline{f(1/\bar{w})}}{f(w)}}\,f^{\prime}(w). \tag{2.41}\] Since \(f^{\prime}(a\tau)=0\), the function \(g(w)\) has poles only at \(0,1/a,a\). We also write \[h_{\zeta}(w):=\frac{g(w)}{\zeta-f(w)}. \tag{2.42}\] Using the change of variable \(z=f(w)\), \[\frac{1}{2\pi i}\int_{\partial\widehat{S}}\frac{1}{\zeta-z}\sqrt{\frac{\bar {z}}{z}}\,dz=\frac{1}{2\pi i}\int_{\partial\mathbb{D}}\frac{1}{\zeta-f(w)} \sqrt{\frac{\overline{f(1/\bar{w})}}{f(w)}}\,f^{\prime}(w)\,dw=\frac{1}{2\pi i }\int_{\partial\mathbb{D}}h_{\zeta}(w)\,dw. \tag{2.43}\] By the residue calculus, we have \[\mathop{\mathrm{Res}}_{w=0}\Big{[}h_{\zeta}(w)\Big{]}=\frac{1}{\tau},\qquad \mathop{\mathrm{Res}}_{w=a}\Big{[}h_{\zeta}(w)\Big{]}=0. \tag{2.44}\] Note that \(\zeta=f(w)\) is equivalent to \[d(1-aw)(w-a\tau)^{2}=w(w-a)\zeta,\qquad d=\frac{(1+\tau)(1+2c)}{2}, \tag{2.45}\] which can be rewritten as a cubic equation \[adw^{3}-(d+2a^{2}d\tau-\zeta)w^{2}+a(2d\tau+a^{2}\tau^{2}d-\zeta)w-a^{2}\tau^{ 2}d=0. \tag{2.46}\] For given \(\zeta\in\mathbb{C}\), there exist \(w_{\zeta}^{(j)}\) (\(j=1,2,3\)) such that \(f(w_{\zeta}^{(j)})=\zeta.\) Note that by (2.46), we have \[w_{\zeta}^{(1)}w_{\zeta}^{(2)}w_{\zeta}^{(3)}=a\tau^{2}\in(-1,0). \tag{2.47}\] Furthermore, since \(f\) is a conformal map from \(\mathbb{D}^{c}\) onto \(\widehat{S}^{c}\), we have the following: 1. If \(\zeta\in\mathrm{Int}(\widehat{S})\), then all \(w_{\zeta}^{(j)}\)'s are in \(\mathbb{D}\); 2.
If \(\zeta\in\widehat{S}^{c}\), then two of \(w_{\zeta}^{(j)}\)'s are in \(\mathbb{D}\). By the residue calculus using (1.11) and (2.41), for each \(j\), \[\begin{split}\operatorname*{Res}_{w=w_{\zeta}^{(j)}}\Big{[}h_{ \zeta}(w)\Big{]}&=-\frac{g(w_{\zeta}^{(j)})}{f^{\prime}(w_{\zeta}^{ (j)})}=-\frac{(w_{\zeta}^{(j)}-a)(1-a\tau w_{\zeta}^{(j)})}{(w_{\zeta}^{(j)}-a \tau)(1-aw_{\zeta}^{(j)})}\\ &=\frac{d}{\zeta}\frac{(a\tau w_{\zeta}^{(j)}-1)(w_{\zeta}^{(j)}- a\tau)}{w_{\zeta}^{(j)}}=\frac{d}{\zeta}\Big{(}a\tau w_{\zeta}^{(j)}-(a^{2} \tau^{2}+1)+\frac{a\tau}{w_{\zeta}^{(j)}}\Big{)},\end{split} \tag{2.48}\] where we have used (2.45). On the other hand, it follows from (2.46) that \[\sum_{j=1}^{3}w_{\zeta}^{(j)}=\frac{d+2a^{2}d\tau-\zeta}{ad},\qquad\sum_{j=1}^{ 3}\frac{1}{w_{\zeta}^{(j)}}=\frac{-\zeta+2d\tau+a^{2}\tau^{2}d}{a\tau^{2}d}. \tag{2.49}\] These relations give rise to \[\begin{split} d\sum_{j=1}^{3}\Big{(}a\tau w_{\zeta}^{(j)}-(a^{2} \tau^{2}+1)+\frac{a\tau}{w_{\zeta}^{(j)}}\Big{)}&=d\tau+2a^{2}d \tau^{2}-\tau\zeta-\frac{\zeta}{\tau}+2d+a^{2}\tau d-3d(a^{2}\tau^{2}+1)\\ &=-\Big{(}\tau+\frac{1}{\tau}\Big{)}\zeta-c(1-\tau^{2}).\end{split} \tag{2.50}\] Combining all of the above, we have shown that if \(\zeta\in\operatorname{Int}(\widehat{S})\), \[\sum_{j=1}^{3}\operatorname*{Res}_{w=w_{\zeta}^{(j)}}\Big{[}h_{\zeta}(w) \Big{]}=-\Big{(}\tau+\frac{1}{\tau}\Big{)}-\frac{c(1-\tau^{2})}{\zeta}. \tag{2.51}\] Therefore if \(\zeta\in\operatorname{Int}(\widehat{S})\), we obtain \[(1-\tau^{2})C(\zeta)=\sqrt{\frac{\bar{\zeta}}{\zeta}}-\tau-\frac{c(1-\tau^{2})}{\zeta}=(1-\tau^{2})\partial_{\zeta}\widehat{Q}(\zeta). \tag{2.52}\] Then by (2.40), the variational equality (1.29) follows. Now it remains to show the variational inequality (1.30). Note that by definition, \(H(\zeta)\to\infty\) as \(\zeta\to\infty\). Suppose that the variational inequality (1.30) does not hold. Then there exists \(\zeta_{*}\in\widehat{S}^{c}\) such that \[\partial_{\zeta}H(\zeta)|_{\zeta=\zeta_{*}}=\partial\widehat{Q}(\zeta_{*})-C( \zeta_{*})=0. \tag{2.53}\] Recall that if \(\zeta\in\widehat{S}^{c}\), then only one of \(w_{\zeta}^{(j)}\)'s, say \(w_{\zeta}\), is in \(\mathbb{D}^{c}\). By combining the above computations, we have that for \(\zeta\in\widehat{S}^{c}\), \[(1-\tau^{2})\Big{(}\partial_{\zeta}\widehat{Q}(\zeta)-C(\zeta)\Big{)}=\sqrt{ \frac{\bar{\zeta}}{\zeta}}-\operatorname*{Res}_{w=w_{\zeta}}\Big{[}h_{\zeta}(w) \Big{]}=\sqrt{\frac{\bar{\zeta}}{\zeta}}-\frac{d}{\zeta}\Big{(}a\tau w_{\zeta}-(a^{ 2}\tau^{2}+1)+\frac{a\tau}{w_{\zeta}}\Big{)}. \tag{2.54}\] Therefore the identity (2.53) holds if and only if \[|\zeta_{*}|=d\Big{(}a\tau w_{\zeta_{*}}-(a^{2}\tau^{2}+1)+\frac{a\tau}{w_{ \zeta_{*}}}\Big{)}. \tag{2.55}\] Note that by (1.11), \[-\frac{d}{f(x)}\Big{(}a\tau x-(a^{2}\tau^{2}+1)+\frac{a\tau}{x}\Big{)}=-\frac{ 1}{f(x)}\frac{(1+\tau)(1+2c)}{2}\frac{(a\tau x-1)(x-a\tau)}{x}=\frac{(a\tau x -1)(x-a)}{(ax-1)(x-a\tau)}.\] Therefore if \(x<1/(a\tau)\), \[d\Big{(}a\tau x-(a^{2}\tau^{2}+1)+\frac{a\tau}{x}\Big{)}<\tau|f(x)|<|f(x)|.\] From this, we notice that (2.55) does not hold for \(w_{\zeta_{*}}\in\mathbb{R}\). Furthermore, this implies that the right-hand side of (2.55) is real-valued if and only if \(w_{\zeta_{*}}\in\partial\mathbb{D}\), equivalently, \(\zeta_{*}\in\partial\widehat{S}.\) This contradicts the assumption that \(\zeta_{*}\in\widehat{S}^{c}\). Now the proof is complete.
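Before moving on to the appendices, the algebra underlying the pre-critical proof is straightforward to corroborate numerically; the following sketch solves the cubic (2.46) at a test point and checks the product relation (2.47), the counting of roots in \(\mathbb{D}\), and the sum identity (2.50).

```python
import numpy as np

# Numerical corroboration of (2.46), (2.47) and (2.50).
tau, c = 0.8, 1.0                      # pre-critical: tau > tau_c = 1/3
a = -1.0 / np.sqrt(tau * (1 + 2 * c))
d = (1 + tau) * (1 + 2 * c) / 2
f = lambda z: d * (1 - a * z) * (z - a * tau)**2 / (z * (z - a))

zeta = f(1.3)                          # a point of the complement of S^hat
coeffs = [a * d,
          -(d + 2 * a**2 * d * tau - zeta),
          a * (2 * d * tau + a**2 * tau**2 * d - zeta),
          -a**2 * tau**2 * d]
w = np.roots(coeffs)                   # the three solutions of f(w) = zeta
print("max |f(w_j) - zeta|:", np.max(np.abs(f(w) - zeta)))          # ~ 0
print("product of roots:", np.prod(w), " expected:", a * tau**2)    # (2.47)
print("roots in the unit disc:", np.sum(np.abs(w) < 1))             # = 2 here
lhs = d * np.sum(a * tau * w - (a**2 * tau**2 + 1) + a * tau / w)
print(lhs, " expected:", -(tau + 1 / tau) * zeta - c * (1 - tau**2))  # (2.50)
```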
## Appendix A Conformal mapping method: the pre-critical case In this appendix, we present the conformal mapping method, which is helpful for deriving the candidate droplet in terms of the rational function (1.11). **Proposition A.1**.: _Let \(\tau\in(\tau_{c},1)\). Suppose that \(\widehat{S}\) in (1.18) is simply connected. Let \(f\) be the unique conformal map \((\bar{\mathbb{D}}^{c},\infty)\to(\widehat{S}^{c},\infty)\) which satisfies_ (A.1) \[f(z)=r_{1}\,z+r_{2}+O\Big{(}\frac{1}{z}\Big{)},\qquad z\to\infty.\] _Then the following holds._ 1. _The conformal map_ \(f\) _is a rational function of the form_ (A.2) \[f(z)=r_{1}z+r_{2}+\frac{r_{3}}{z}+\frac{r_{4}}{z-a},\qquad a\in(-1,0),\] _which satisfies_ (A.3) \[f(1/a)=\frac{r_{1}}{a}+r_{2}+r_{3}a+\frac{ar_{4}}{1-a^{2}}=0.\] 2. _The parameters_ \(r_{j}\)__\((j=1,\ldots,4)\) _are given by_ (A.4) \[r_{3}=\frac{1+\tau}{2}\tau\sqrt{\tau(1+2c)},\qquad\quad r_{4}= \frac{(1-\tau)^{2}(1+\tau)(1-(1+2c)\tau)}{2\tau\sqrt{\tau(1+2c)}}\] _and_ (A.5) \[a=-\frac{1}{\sqrt{\tau(1+2c)}}.\] Note that the rational function \(f\) with the choice of parameters (A.4) corresponds to (1.11). Therefore Proposition A.1 gives rise to Theorem 1.4 (ii) under the assumption that \(\widehat{S}\) is simply connected. However, there is no general theory characterising the connectivity of the droplet. (Nevertheless, we refer the reader to [49, 48] for sharp connectivity bounds of the droplets associated with a class of potentials.) Thus we need to directly verify the variational principles as in Subsection 2.2. Proof of Proposition A.1 (i).: By differentiating the variational equality (1.29), we have (A.6) \[\partial_{\zeta}\widehat{Q}(\zeta)=C(\zeta):=\int\frac{d\widehat{\mu}(z)}{ \zeta-z},\qquad\zeta\in\widehat{S}.\] Using (1.14), this can be rewritten as (A.7) \[\bar{\zeta}=\zeta\Big{[}(1-\tau^{2})\Big{(}C(\zeta)+\frac{c}{\zeta}\Big{)}+ \tau\Big{]}^{2}.\] Therefore the Schwarz function \(F\) associated with the droplet \(\widehat{S}\) exists. Furthermore, it is expressed in terms of \(C\) as (A.8) \[F(\zeta)=\zeta\Big{[}(1-\tau^{2})\Big{(}C(\zeta)+\frac{c}{\zeta}\Big{)}+\tau \Big{]}^{2}.\] Note that for \(z\in\partial\mathbb{D}\), (A.9) \[\overline{f(1/\bar{z})}=\overline{f(z)}=f(z)\Big{[}(1-\tau^{2})\Big{(}C(f(z))+ \frac{c}{f(z)}\Big{)}+\tau\Big{]}^{2}.\] Using this, we define \(f:\bar{\mathbb{D}}\backslash\{0\}\to\mathbb{C}\) by analytic continuation as (A.10) \[f(z):=\overline{f(1/\bar{z})\Big{[}(1-\tau^{2})\Big{(}C(f(1/\bar{z}))+\frac{c} {f(1/\bar{z})}\Big{)}+\tau\Big{]}^{2}}.\] Therefore \(f\) has simple poles only at \(0,\infty\) and the point \(a\in\mathbb{R}\) such that \(f(1/a)=0\), which leads to (A.2). Next, we need to specify the constants \(r_{j}\) and \(a.\) For this, we shall find interrelations among the parameters.
**Lemma A.2**.: _We have_ (A.11) \[r_{3}=r_{1}\tau^{2}\] _and_ (A.12) \[r_{4}=a(1-\tau^{2})\Big{(}r_{2}-2\tau(1+c)\Big{)}.\] _Furthermore, we have_ (A.13) \[r_{2}=r_{1}\frac{1+a^{2}\tau^{2}}{1-a^{2}\tau^{2}}\frac{a^{2}-1}{a}+\frac{2a^ {2}(1-\tau^{2})\tau(1+c)}{1-a^{2}\tau^{2}}.\] Proof.: Note that (A.14) \[\overline{f(1/\bar{z})}=\frac{r_{1}}{z}+r_{2}+r_{3}z+\frac{r_{4}z}{1-az}.\] Therefore, we have (A.15) \[\frac{1}{\overline{f(1/\bar{z})}}=\frac{1}{r_{1}}\,z-\frac{r_{2}}{r_{1}^{2}} \,z^{2}+\frac{r_{2}^{2}-r_{1}r_{3}-r_{1}r_{4}}{r_{1}^{3}}\,z^{3}+O(z^{4}), \qquad z\to 0.\] Since the Cauchy transform \(C\) satisfies the asymptotic behaviour (A.16) \[C(\zeta)=\frac{1}{\zeta}+O(\frac{1}{\zeta^{2}}),\qquad\zeta\to\infty,\] we have (A.17) \[\overline{C(f(1/\bar{z}))}=\frac{1}{r_{1}}\,z+O(z^{2}),\qquad z\to 0.\] Combining these equations with (A.10), we obtain (A.18) \[f(z)=\frac{r_{1}\tau^{2}}{z}+\Big{(}r_{2}\tau^{2}+2\tau(1-\tau^{2})(1+c) \Big{)}+O(z),\qquad z\to 0.\] On the other hand, by using (A.2), we have (A.19) \[f(z)=\frac{r_{3}}{z}+\Big{(}r_{2}-\frac{r_{4}}{a}\Big{)}+O(z),\qquad z\to 0.\] Then by comparing the coefficients in (A.18) and (A.19), we obtain (A.11) and (A.12). Note that by (A.3), we have (A.20) \[r_{4}=\frac{a^{2}-1}{a}\Big{(}\frac{r_{1}}{a}+r_{2}+r_{3}a\Big{)}.\] Then by (A.11), we have (A.21) \[r_{4}=\frac{a^{2}-1}{a}\Big{(}\frac{r_{1}}{a}+r_{2}+r_{1}a\tau^{2}\Big{)}=r_{ 1}\frac{(1+a^{2}\tau^{2})(a^{2}-1)}{a^{2}}+r_{2}\frac{a^{2}-1}{a}.\] Combining this identity with (A.12), we obtain (A.22) \[a(1-\tau^{2})r_{2}-2a(1-\tau^{2})\tau(1+c)=r_{1}\frac{(1+a^{2}\tau^{2})(a^{2}- 1)}{a^{2}}+r_{2}\frac{a^{2}-1}{a},\] which leads to (A.13). **Lemma A.3**.: _We have_ (A.23) \[\Big{(}(2-a^{2}+a^{4}\tau^{2})r_{1}+ar_{2}\Big{)}\Big{(}r_{2}-2\tau(1+c)\Big{)} =(1-\tau^{2})c^{2}a(a^{2}-1).\] Proof.: Using (A.3), we have (A.24) \[\frac{1}{\overline{f(1/\bar{z})}}=\frac{a^{2}(a^{2}-1)}{(2-a^{2})r_{1}+ar_{2}+a^{ 4}r_{3}}\,\frac{1}{z-a}+O(1),\qquad z\to a.\] Then by (A.10) and (A.11), we obtain (A.25) \[r_{4}=\frac{(1-\tau^{2})^{2}c^{2}\,a^{2}(a^{2}-1)}{(2-a^{2})r_{1}+ar_{2}+a^{4} r_{3}}=\frac{(1-\tau^{2})^{2}c^{2}\,a^{2}(1-a^{2})^{2}}{r_{1}(a^{2}\tau^{2}-1)(1 -a^{2})^{2}+r_{4}a^{2}}.\] Now the lemma follows from (A.12). Proof of Proposition A.1 (ii).: Since \(\widehat{\mu}\) is a probability measure, we have (A.26) \[1=\int_{\widehat{S}}\frac{1}{2(1-\tau^{2})}\frac{1}{|z|}\,dA(z)=\frac{1}{2\pi i }\int_{\partial\widehat{S}}\frac{1}{1-\tau^{2}}\sqrt{\frac{\bar{z}}{z}}\,dz,\] where we have used Green's formula for the second identity. Using the change of variable \(z=f(w)\), where \(f\) is of the form (A.2), this can be rewritten as (A.27) \[\frac{1}{2\pi i}\int_{\partial\mathbb{D}}\sqrt{\overline{f(1/\bar{w})}f(w)}\, \frac{f^{\prime}(w)}{f(w)}\,dw=1-\tau^{2}.\] By Lemma A.2 and (A.2), we have (A.28) \[f(z) =\frac{1-az}{z(z-a)}\Big{(}-\frac{r_{1}}{a}z^{2}+\Big{(}\frac{a^{ 2}-1}{a^{2}}r_{1}-\frac{r_{2}}{a}\Big{)}z-a\tau^{2}r_{1}\Big{)},\] (A.29) \[\overline{f(1/\bar{z})} =\frac{z-a}{z(1-az)}\Big{(}-a\tau^{2}r_{1}z^{2}+\Big{(}\frac{a^{2 }-1}{a^{2}}r_{1}-\frac{r_{2}}{a}\Big{)}z-\frac{r_{1}}{a}\Big{)}.\] Note here that by construction, two zeros of \(f\) other than \(1/a\) are contained in the unit disc.
Using these together with straightforward residue calculus, we obtain (A.30) \[\operatorname{Res}_{w=0}\Big{[}\sqrt{\overline{f(1/\bar{w})}f(w)}\,\frac{f^{ \prime}(w)}{f(w)}\Big{]}=(1+c)(1-\tau^{2})\] and (A.31) \[\operatorname{Res}_{w=a}\Big{[}\sqrt{\overline{f(1/\bar{w})}f(w)}\,\frac{f^{ \prime}(w)}{f(w)}\Big{]}=-\frac{1}{a}\Big{[}\Big{(}\frac{1+a^{2}\tau^{2}}{a}r _{1}+r_{2}\Big{)}\Big{(}\frac{a^{4}\tau^{2}-a^{2}+2}{a}r_{1}+r_{2}\Big{)} \Big{]}^{1/2}.\] Furthermore, it follows from Lemma A.3 that (A.32) \[\operatorname{Res}_{w=a}\Big{[}\sqrt{\overline{f(1/\bar{w})}f(w)}\,\frac{f^{ \prime}(w)}{f(w)}\Big{]}=-c(1-\tau^{2}).\] Combining (A.27), (A.30) and (A.32), one can notice that the function \(f\) has a double zero, which implies that (A.33) \[\frac{a^{2}-1}{a^{2}}r_{1}-\frac{r_{2}}{a}=2r_{1}\tau.\] By solving the system of equations given in Lemmas A.2, A.3 and (A.33), the desired result follows. **Remark A.4** (The use of higher moments of the equilibrium measure).: _In a more complicated case, for instance for the case with multiple point charges such as (2.7), the mass-one condition (A.26) may not be enough to characterise the parameters. In this case, one can further use the higher order asymptotic expansions appearing in the above lemmas, which involve the \(k\)-th moments of the equilibrium measure; cf. (2.35). Thus in principle, one can always find enough (algebraic) interrelations to characterise the parameters appearing in the conformal map._ **Remark A.5**.: _For the case \(\tau=0\) and \(p>0\), it was shown in [12] that if_ \[c>\frac{(1-p^{2})^{2}}{4p^{2}},\] the droplet associated with (2.1) is a simply connected domain whose boundary is given by the image of the conformal map_ \[f(z)=R\,z-\frac{\kappa}{z-q}-\frac{\kappa}{q},\qquad R=\frac{1+p^{2}q^{2}}{2pq}, \qquad\kappa=\frac{(1-q^{2})(1-p^{2}q^{2})}{2pq}.\] _Here, \(q\) is given by the unique solution of \(P(q^{2})=0\), where_ \[P(x):=x^{3}-\Big{(}\frac{p^{2}+4c+2}{2p^{2}}\Big{)}x^{2}+\frac{1}{2p^{4}}\] _such that \(0<q<1\) and \(\kappa>0\)._ _Beyond the case \(\tau=0\), the conformal mapping method described above also works for the potential (2.1) with general \(\tau\in[0,1),c\in\mathbb{R}\) and \(p\in\mathbb{C}\) under the assumption that the associated droplet is simply connected. Under this assumption, one can show that the boundary of the droplet is given by the image of the rational conformal map \(f\) of the form_ (A.34) \[f(z)=R_{1}\,z+R_{2}+\frac{R_{3}}{z}+\frac{R_{4}}{z-q},\qquad q\in\mathbb{D},\] _which satisfies \(f(1/q)=0\). Furthermore, following the strategy above, one can characterise the coefficients \(R_{j}\) (\(j=1,\ldots,4\)) of this rational map as well as the position of the pole \(q\in\mathbb{D}\)._ _However, as previously mentioned, it is far from being obvious to characterise a condition for which the droplet is simply connected. Nevertheless, since the radius of curvature of the ellipse (2.3) at the point \((1+\tau)\sqrt{1+c}\) is given by_ \[\frac{(1-\tau)^{2}}{1+\tau}\,\sqrt{1+c},\] _one can expect that if_ (A.35) \[p>\max\Big{\{}\frac{4\tau}{1+\tau}\sqrt{1+c}\,,\,(1+\tau)\sqrt{1+c}-\sqrt{1- \tau^{2}}\sqrt{c}\Big{\}}\] _then the droplet is a simply connected domain._ ## Appendix B One-dimensional equilibrium measure problem in the Hermitian limit In this appendix, we present a proof of (2.10). Let us write (B.1) \[V(z)\equiv V_{p}(z)=\frac{z^{2}}{2}-2c\log|z-p|.\] Recall that \(\mu_{V}\equiv\mu_{V_{p}}\) is the equilibrium measure associated with \(V_{p}(x)\) (\(x\in\mathbb{R}\)). 
We define (B.2) \[R(z):=\Big{(}\frac{V^{\prime}(z)}{2}\Big{)}^{2}-\int_{\mathbb{R}}\frac{V^{ \prime}(z)-V^{\prime}(s)}{z-s}\,d\mu_{V}(s).\] By applying Schiffer variations (see e.g. [35, Section 3]), we have (B.3) \[R(z)=\Big{(}\int\frac{d\mu_{V}(s)}{z-s}-\frac{V^{\prime}(z)}{2}\Big{)}^{2}, \qquad z\in\mathbb{C}\setminus\operatorname{supp}(\mu_{V}).\] Combining the asymptotic behaviour \[\int\frac{d\mu_{V}(s)}{z-s}\sim\frac{1}{z},\qquad z\to\infty,\] with (B.3), we obtain (B.4) \[R(z)=\frac{1}{4}z^{2}-(c+1)-\frac{cp}{z}+O\Big{(}\frac{1}{z^{2}}\Big{)},\qquad z \to\infty.\] On the other hand, since \[V^{\prime}(z)=z-\frac{2c}{z-p},\qquad\frac{V^{\prime}(z)-V^{\prime}(s)}{z-s}= 1+\frac{2c}{z-p}\frac{1}{s-p},\] we have (B.5) \[R(z)=\frac{1}{4}\Big{(}z-\frac{2c}{z-p}\Big{)}^{2}-1-\frac{2c}{z-p}\int_{\mathbb{ R}}\frac{d\mu_{V}(s)}{s-p}.\] Thus we obtain (B.6) \[R(z)=\frac{c^{2}}{(z-p)^{2}}+O\Big{(}\frac{1}{z-p}\Big{)},\qquad z\to p.\] In the expression (B.5), one can observe that \(R\) is a rational function with a double pole at \(z=p\). Therefore it is of the form (B.7) \[R(z)=\frac{1}{4}z^{2}+\frac{Az^{2}+Bz+C}{(z-p)^{2}}\] for some constants \(A,B\) and \(C\). As in the previous subsection, we need to specify these parameters. By direct computations, we have (B.8) \[R(z)=\frac{1}{4}z^{2}+A+\frac{2Ap+B}{z}+O\Big{(}\frac{1}{z^{2}}\Big{)},\qquad z \to\infty,\] and (B.9) \[R(z)=\frac{Ap^{2}+Bp+C}{(z-p)^{2}}+O\Big{(}\frac{1}{z-p}\Big{)},\qquad z\to p.\] By comparing coefficients in (B.4) and (B.8), we have (B.10) \[A=-c-1,\qquad-cp=2Ap+B.\] Similarly, by (B.6) and (B.9), (B.11) \[Ap^{2}+Bp+C=c^{2}.\] By solving these algebraic equations, we obtain (B.12) \[B=p(c+2),\qquad C=c^{2}-p^{2}.\] Combining all of the above with (B.7), we have shown that (B.13) \[\begin{split} R(z)&=\frac{1}{4}z^{2}+\frac{-(c+1)z^ {2}+p(c+2)z+(c^{2}-p^{2})}{(z-p)^{2}}\\ &=\frac{((z-p)(z-2)-2c)((z-p)(z+2)-2c)}{4(z-p)^{2}}=\frac{\prod_{ j=1}^{4}(z-\lambda_{j})}{4(z-p)^{2}},\end{split}\] where \(\lambda_{j}\)'s are given by (2.11) and (2.12). Therefore by (B.3), the Stieltjes transform of \(\mu_{V}\) is given by (B.14) \[\int\frac{d\mu_{V}(s)}{z-s}=\frac{V^{\prime}(z)}{2}-R(z)^{1/2}=\frac{z}{2}- \frac{c}{z-p}-\frac{1}{2}\sqrt{\frac{\prod_{j=1}^{4}(z-\lambda_{j})}{(z-p)^{2 }}}.\] Letting \(z=x+i\varepsilon\to x\in\mathbb{R}\), we find \[\lim_{\varepsilon\to 0+}\operatorname{Im}\int\frac{d\mu_{V}(s)}{(x+i \varepsilon)-s}=\begin{cases}\frac{\sqrt{-\prod_{j=1}^{4}(x-\lambda_{j})}}{2|x -p|}&\text{if }x\in[\lambda_{1},\lambda_{2}]\cup[\lambda_{3},\lambda_{4}],\\ 0&\text{otherwise}.\end{cases}\] Now the desired identity (2.10) follows from the Sokhotski-Plemelj inversion formula, see e.g. [39, Section I.4.2]. ### Acknowledgements The author is greatly indebted to Yongwoo Lee for the figures and numerical simulations.
2310.19134
CoBarS: Fast reweighted sampling for polygon spaces in any dimension
We present the first algorithm for sampling random configurations of closed $n$-gons with any fixed edgelengths $r_1, \dots, r_n$ in any dimension $d$ which is proved to sample correctly from standard probability measures on these spaces. We generate open $n$-gons as weighted sets of edge vectors on the unit sphere and close them by taking a M\"obius transformation of the sphere which moves the center of mass of the edges to the origin. Using previous results of the authors, such a M\"obius transformation can be found in $O(n)$ time. The resulting closed polygons are distributed according to a pushforward measure. The main contribution of the present paper is the explicit calculation of reweighting factors which transform this pushforward measure to any one of a family of standard measures on closed polygon space, including the symplectic volume for polygons in $\mathbb{R}^3$. For fixed dimension, these reweighting factors may be computed in $O(n)$ time. Experimental results show that our algorithm is efficient and accurate in practice, and an open-source reference implementation is provided.
Jason Cantarella, Henrik Schumacher
2023-10-29T19:48:06Z
http://arxiv.org/abs/2310.19134v1
# CoBarS: Fast reweighted sampling for polygon spaces in any dimension ###### Abstract We present the first algorithm for sampling random configurations of closed \(n\)-gons with any fixed edgelengths \(r_{1},\ldots,r_{n}\) in any dimension \(d\) which is proved to sample correctly from standard probability measures on these spaces. We generate open \(n\)-gons as weighted sets of edge vectors on the unit sphere and close them by taking a Möbius transformation of the sphere which moves the center of mass of the edges to the origin. Using previous results of the authors, such a Möbius transformation can be found in \(O(n)\) time. The resulting closed polygons are distributed according to a pushforward measure. The main contribution of the present paper is the explicit calculation of reweighting factors which transform this pushforward measure to any one of a family of standard measures on closed polygon space, including the symplectic volume for polygons in \(\mathbb{R}^{3}\). For fixed dimension, these reweighting factors may be computed in \(O(n)\) time. Experimental results show that our algorithm is efficient and accurate in practice, and an open-source reference implementation is provided. **MSC-2020 classification:** 60D05, 65D18, 82D60 ## 1 Introduction We consider configurations of \(n\) points in \(\mathbb{R}^{d}\) with positions \(v_{1},\ldots,v_{n}\) separated by a length vector \(r\) of fixed distances \(r_{1},\ldots,r_{n}>0\). We treat indices cyclically, so we may write the displacement vectors \(v_{i+1}-v_{i}=r_{i}\,y_{i}\) where \(y_{i}\) is a unit vector in \(\mathbb{R}^{d}\), letting \(r_{n}\,y_{n}=v_{1}-v_{n}\). The elements of such a space are polygons with edgelengths given by the \(r_{i}\). Our goal in this paper is to give an efficient way to randomly sample such polygons. This sampling problem is of interest in the statistical physics of polymers, where the configuration space of polygons with fixed edgelengths is the state space for the _freely-jointed chain model_ [27] of a ring polymer. (See the survey paper [33] for many applications of these kinds of models in physics and biology.) The same space is studied in robotics [15], where it models the kinematic configuration space of a robot arm with spherical revolute joints forming a closed loop.1 It is also of intrinsic interest in differential geometry and topology [16, 19, 24, 28]. Footnote 1: If this seems an unusual special case, the standard example [29] of a kinematic loop is that of a robot in a fixed position manipulating an object which is also constrained, such as a door handle. Here there are implicit constraints connecting the base of the robot to the door’s hinge and the hinge to the door handle. In more complicated situations, multiple kinematic loops may be present at the same time, but this is beyond the scope of the present paper. Various algorithms have been proposed to construct random polygons [30, 31, 32, 34, 35, 1, 3]. However, all of them suffer from one or more deficiencies: they are either explicitly restricted to dimension \(d=2\) or \(d=3\), not proved to sample the correct measure, and/or only generate equilateral polygons. Therefore, there is a need for a polygon sampling method which is fast, can be proved to sample the correct measure, and gracefully accommodates arbitrary choices of dimension and edgelengths. In this paper, we present such a method: conformal barycenter sampling (CoBarS).
We start by observing that it is easy to construct configurations of \(n+1\) points \(v_{1},\ldots,v_{n+1}\) so that \(|v_{i+1}-v_{i}|=r_{i}\) by sampling unit vectors \(x_{i}\) uniformly from a product of spheres and letting \(v_{i}=\sum_{j<i}r_{j}\,x_{j}\). These configurations have the correct edgelengths, but they usually fail to close because \(v_{n+1}=v_{1}\) is satisfied if and only if \(\sum_{i=1}^{n}r_{i}\,x_{i}=0\). However, we will build a map (Definition 4) from the space of open polygons to the space of closed polygons using the conformal barycenter (Definition 1). This map is only defined implicitly, but can be computed efficiently using the algorithm in [7]. This gives us a fast sampling algorithm, but the resulting samples are biased. The main theoretical contribution of this paper is a fast and explicit way to compute reweighting factors which eliminate this sampling bias. We then see in experiments that we can compute integrals over the configuration space of polygons quickly and accurately using this reweighting. The resulting method is faster and more general than the Action-Angle method described in [3, 8]. We note that the idea of generating closed polygons from open ones or (more or less equivalently) polygons with given edgelengths from arbitrary closed polygons is definitely not a new one, and a variety of polygon closure or resampling algorithms have been proposed [2]. Any of these could be used to generate (biased) samples from closed polygon space, which could in principle be reweighted to sample the standard measure. The key new feature of this approach is that our closure algorithm is mathematically controlled enough that we can prove that it (almost) always converges, provide time bounds, and explicitly compute reweighting factors. ## Acknowledgments The authors are grateful to many colleagues for helpful discussions of polygons and hyperbolic geometry, especially Clayton Shonkwiler, Tetsuo Deguchi, and Erica Uehara. ## 2 Opening and Closing Polygons In order to sample, we need to carefully define a probability space and a corresponding measure. Set \(\mathbb{S}\coloneqq S^{d-1}\), \(\mathbb{B}\coloneqq B^{d}=\{\,x\in\mathbb{R}^{d}\mid|x|<1\,\}\), and \(\bar{\mathbb{B}}\coloneqq\{\,x\in\mathbb{R}^{d}\mid|x|\leq 1\,\}\) and assume that \(r\in\mathbb{R}_{+}^{n}\). Then we can define spaces of open polygonal "arms" and closed polygons by letting \[\mathrm{Arm}_{d}(n)\coloneqq(\mathbb{S})^{n}\quad\text{and}\quad\mathrm{Pol}_{d}(n;r)\coloneqq\{\ y\in\mathrm{Arm}_{d}(n)\ \big{|}\ \sum_{i=1}^{n}r_{i}\,y_{i}=0\,\}\] and by associating directions \(x_{i}\) or \(y_{i}\) with vertices \(v_{i}\) by summing the edge vectors and shifting the center of mass of the \(v_{i}\) to the origin. We make the standing assumption that \(d\geq 2\) and \(n\geq 3\). Since polygons cannot close if some edgelength \(r_{j}\) is greater than or equal to \(\frac{1}{2}\sum_{i=1}^{n}r_{i}\), we will also make the standing assumption that \[\text{for each }j\in\{\,1,\ldots,n\,\}\colon\quad r_{j}<\frac{1}{2}\sum_{i=1}^{n }r_{i}. \tag{1}\] We will assume that \(d\), \(n\) and \(r\) are fixed, and replace \(\mathrm{Arm}_{d}(n)\) with \(\mathrm{Arm}\) and \(\mathrm{Pol}_{d}(n;r)\) with \(\mathrm{Pol}\) for brevity.
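A minimal sketch of this naive construction (ours, for illustration only): sample \(n\) uniform directions, scale by \(r\), and accumulate vertices. The edgelengths come out right, but the closure defect \(\sum_{i}r_{i}x_{i}\) is generically nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 3
r = rng.uniform(0.5, 2.0, n)                    # any fixed edgelengths

x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform directions on S^{d-1}

v = np.vstack([np.zeros(d), np.cumsum(r[:, None] * x, axis=0)])
print("edgelengths correct:",
      np.allclose(np.linalg.norm(np.diff(v, axis=0), axis=1), r))
print("closure defect |sum_i r_i x_i|:", np.linalg.norm(r @ x))
```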
We define a Riemannian metric \(g_{\varrho}\) by scaling the standard metric \(g_{\mathbb{S}}\) on each \(\mathbb{S}\) by \(\varrho_{i}^{2}\) and assuming that the tangent spaces to different spheres are orthogonal: \[g_{\varrho}(u,v)\coloneqq\sum_{i=1}^{n}\varrho_{i}^{2}\,g_{\mathbb{S}}(u_{ i},v_{i})\quad\text{for all }u,v\in T_{x}\mathrm{Arm}=T_{x_{1}}\mathbb{S}\times\cdots\times T_{x_{n}} \mathbb{S}.\] This is the restriction of the metric \(\langle u,v\rangle_{\varrho}=\sum_{i=1}^{n}\varrho_{i}^{2}\langle u_{i},v_{i}\rangle\) on \(T(\mathbb{R}^{d})^{n}\). The metric \(g_{\varrho}\) generates a corresponding volume measure \(\mathrm{vol}_{\varrho}\) and probability measure \(P_{\varrho}\) on \(\mathrm{Arm}\), which happen to be products of measures on \(\mathbb{S}\): \[\mathrm{vol}_{\varrho}=\prod_{i=1}^{n}\varrho_{i}^{d-1}\mathrm{vol}_{\mathbb{ S}},\quad P_{\varrho}\coloneqq\frac{\mathrm{vol}_{\varrho}}{\mathrm{vol}_{ \varrho}(\mathrm{Arm})}=\prod_{i=1}^{n}\frac{\mathrm{vol}_{\mathbb{S}}}{ \mathrm{vol}_{\mathbb{S}}(\mathbb{S})}.\]

Figure 1: (a) Geodesics joining some point \(w\) in \(\mathbb{B}\) to three points \(x_{1},x_{2},x_{3}\) in \(\mathbb{S}\) and the corresponding conformal directors. The directors \(V_{x_{i}}(w)\) do not sum up to \(0\). (b) Same as (a), but here the sum of directors \(V_{x_{i}}(w)\) vanishes; thus \(w\) is the conformal barycenter of the \(x_{i}\). (c) Geometric construction of the directors: each geodesic emanating from \(x\) intersects the secant \(px\) in the same angle. Thus all directors \(V_{x}(w)\) for \(w\) on the secant \(px\) point in the same direction.

We note that \(P_{\varrho}\) is independent of \(\varrho\), even though \(\operatorname{vol}_{\varrho}\) and \(g_{\varrho}\) depend on our choice of \(\varrho_{i}\). When \(\varrho_{i}=\sqrt{r_{i}}\), the volume corresponds to the symplectic volume of Millson and Kapovich [24]. When \(\varrho_{i}=r_{i}\), the volume is equivalent to taking a product of spheres with radii \(r_{i}\). It is known that Pol is a Riemannian submanifold of the Riemannian manifold \(\operatorname{Arm}\), with isolated singularities only at points where all \(y_{i}\) are collinear [16, Prop. 3.1]. So Pol inherits a submanifold metric \(g_{\varrho,r}\), a (Hausdorff) volume measure \(\operatorname{vol}_{\varrho,r}\), and a probability measure \(P_{\varrho,r}\). We will assume for the rest of the paper that we have fixed \(\varrho\) and \(r\), and therefore fixed \(P_{\varrho,r}\), which we will refer to as _the_ measure on Pol. We will now connect polygon spaces to hyperbolic geometry (as in [24]). We think of \(\mathbb{B}\) as the Poincaré disk model of hyperbolic space, where \(\mathbb{S}\) is the sphere at infinity. **Definition 1**.: Given \(x_{i}\in\mathbb{S}\), at every \(w\in\mathbb{B}\) there is a unique geodesic ray joining \(w\) to \(x_{i}\). The unit tangent vector to this geodesic ray is called the _director_ \(V_{x_{i}}(w)\). If \(x\in\operatorname{Arm}\), \(r\in\mathbb{R}_{+}^{n}\) and \(w_{*}\) has the property that \(\sum r_{i}\,V_{x_{i}}(w_{*})=0\), we say \(w_{*}\) is a _conformal barycenter_ of \(x\) with weights \(r\). (See Fig. 1 for a \(2\)-dimensional illustration.) Even under the assumption (1), there will be some polygons we are unable to close with our method (see Fig. 2). So we now impose an additional (mild) hypothesis.
**Definition 2**.: If \(x\in\operatorname{Arm}\) and for every \(v\in\mathbb{S}\), \(\sum_{i\,|\,x_{i}=v}r_{i}<\frac{1}{2}\sum_{i=1}^{n}r_{i}\), then we say \(x\) is stable (with respect to \(r\)). **Proposition 3** ([12], Section 11).: _If \(x\in\operatorname{Arm}\) is stable with respect to \(r\), then it has a unique conformal barycenter \(w_{*}(x)\) in the interior of \(\mathbb{B}\)._ By construction, the conformal barycenter is equivariant under hyperbolic isometries of \(\mathbb{B}\), i.e. for every hyperbolic isometry \(\varphi:\mathbb{B}\to\mathbb{B}\) we have \[w_{*}(\varphi(x_{1}),\dots,\varphi(x_{n}))=\varphi(w_{*}(x_{1},\dots,x_{n})),\] where we define \(\varphi\) on \(\mathbb{S}\) to be the unique continuation of \(\varphi\) on \(\mathbb{B}\).

Figure 2: (a) This polygon has edgelengths \(r=(8,16,3,4)\), which do not satisfy (1). Regardless of their directions, the short edges cannot close the polygon, so we do not consider such \(r\). (b) This polygon has edgelengths \(r=(8,12,3,4)\), which do satisfy (1). For this polygon \(x_{2}=x_{4}\) and their edgelength sum is greater than that of the remaining two edges. Thus \(x\) is not stable with respect to \(r\) in the sense of Definition 2. It will turn out to be the case that we cannot close this polygon. (c) This polygon has edgelengths \(r=(8,12,3,4)\) and unique directions (\(x_{i}\neq x_{j}\) when \(i\neq j\)). This \(x\) is stable and we will be able to close this polygon.

Now the geodesics from \(0\) to each \(x\in\mathbb{S}\) are radial lines. Because the Poincaré metric at the origin is twice the Euclidean metric, we have \(V_{x}(0)=\frac{1}{2}x\). Hence, if \(w_{*}(x_{1},\dots,x_{n})=0\), then \(\sum_{i=1}^{n}r_{i}\,x_{i}=2\sum_{i=1}^{n}r_{i}\,V_{x_{i}}(0)=0\) and the polygon is closed. By equivariance, we know that if \(\varphi\) is a hyperbolic isometry that brings \(w_{*}(x_{1},\dots,x_{n})\) to the origin, then \((\varphi(x_{1}),\dots,\varphi(x_{n}))\) will be a closed polygon. However, there are many such isometries. We now choose a specific one. **Definition 4**.: For any \(w\in\mathbb{B}\), there is a unique hyperbolic translation which maps \(w\) to \(0\). We call this a _shift map_ and denote it by \(\sigma(w,-)\). This map extends to the sphere at infinity and is given by the formula \[\sigma\colon\mathbb{B}\times\bar{\mathbb{B}}\to\bar{\mathbb{B}},\quad\sigma(w,z)\coloneqq\frac{(1-|w|^{2})\,z-(1+|z|^{2}-2\,\langle w,z\rangle)\,w}{1-2 \langle w,z\rangle+|w|^{2}|z|^{2}}. \tag{2}\] For \(x\in\mathrm{Arm}\), we define \(\sigma(w,x)\in\mathrm{Arm}\) by \(\sigma(w,x)_{i}=\sigma(w,x_{i})\). If \(x\) is stable with respect to \(r\), the _conformal barycenter closure_ \(y_{*}(x)\in\mathrm{Pol}\) is defined by \(y_{*}(x)\coloneqq\sigma(w_{*}(x),x)\). The _conformal closure map_ \(\mathrm{cl}\colon\mathrm{Arm}\to\mathbb{B}\times\mathrm{Pol}\) is given by \[\mathrm{cl}(x)=\Big{(}w_{*}(x),y_{*}(x)\Big{)}.\] The conformal barycenter closure gives us a canonical way to close polygons without changing their edgelengths, as long as the initial directions are stable. We can even interpolate between the open polygon \(x\) and the closed polygon \(y\) by taking \(\sigma(t\,w_{*},x)\), \(t\in[0,1]\), as in Fig. 3. Since the conformal barycenter is only defined implicitly, we cannot give a closed-form expression for \(\mathrm{cl}\). However, we can evaluate \(\mathrm{cl}\) using the algorithm in [7] in \(O(n)\) time using the open-source implementation in [6].
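For readers who want to experiment, here is a naive prototype of \(\operatorname{cl}\) (a sketch of ours, not the certified \(O(n)\) method of [7]). Since directors at the origin are half the boundary points, \(F(w):=\sum_{i}r_{i}\,\sigma(w,x_{i})=2\sum_{i}r_{i}\,V_{\sigma(w,x_{i})}(0)\) vanishes exactly when \(0\) is the conformal barycenter of the shifted configuration, which by equivariance and Proposition 3 happens exactly at \(w=w_{*}(x)\); so a generic root finder applied to \(F\) recovers the closure.

```python
import numpy as np
from scipy.optimize import fsolve

def sigma(w, z):
    # The shift map (2), applied to each row of the (n, d) array z.
    wz = z @ w
    zz = np.sum(z * z, axis=1)
    num = (1 - w @ w) * z - ((1 + zz - 2 * wz)[:, None]) * w
    den = 1 - 2 * wz + (w @ w) * zz
    return num / den[:, None]

def close(x, r):
    # Solve sum_i r_i sigma(w, x_i) = 0 for w_*(x), then shift the directions.
    F = lambda w: r @ sigma(w, x)
    w_star = fsolve(F, np.zeros(x.shape[1]))
    return w_star, sigma(w_star, x)             # cl(x) = (w_*(x), y_*(x))

rng = np.random.default_rng(0)
n, d = 10, 3
r = rng.uniform(0.5, 2.0, n)
x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)
w, y = close(x, r)
print("closure defect:", np.linalg.norm(r @ y))                      # ~ 0
print("edges still unit:", np.allclose(np.linalg.norm(y, axis=1), 1.0))
```

This prototype converges for typical stable inputs but carries no guarantees; the algorithm of [7] is both certified and linear-time.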
Further, we have a closed form for the inverse of \(\mathrm{cl}\), which we discuss next. **Definition 5**.: The _conformal opening map_\(\mathrm{op}\colon\mathbb{B}\times\mathrm{Pol}\to\mathrm{Arm}\) is given by \[\mathrm{op}(w,y)=\sigma(-w,y).\] We have studied the shift map in detail in [7]. In particular, for \(|w|<1\), the shift map is a conformal diffeomorphism \(\mathbb{B}\to\mathbb{B}\) and \(\mathbb{S}\to\mathbb{S}\). We now want to show that \(\mathrm{cl}\) and \(\mathrm{op}\) are inverse maps. This is not true everywhere, but it is true on subsets of full measure, which will be good enough for our purposes. Figure 3: From left to right, we interpolate between the original polygon \(x\) and its conformal closure \(y\) along the path \(\sigma(t\,w_{*}(x),x)\). The edgelength vector \(r=(4,6,3,2)\) remains the same throughout. This construction depends on the hypothesis that the initial \((x,r)\) are a stable pair; if not, we could not guarantee the existence of the conformal barycenter \(w_{*}(x)\). **Definition 6**.: We let \(\operatorname{Arm}^{\times}\coloneqq\{\,x\in\operatorname{Arm}\mid\text{for all }i\neq j \text{: }x_{i}\neq x_{j}\,\,\}\) and \(\operatorname{Pol}^{\times}\coloneqq\operatorname{Pol}\cap\operatorname{Arm}^{\times}\). We note that these are open and dense subsets of Arm and Pol, and hence subsets of full measure in these spaces. Further, if \(x\in\operatorname{Arm}^{\times}\) then \(x\) is stable with respect to \(r\). **Proposition 7**.: _The restrictions of the maps \(\operatorname{cl}\) and \(\operatorname{op}\) to \(\operatorname{Arm}^{\times}\) and \(\mathbb{B}\times\operatorname{Pol}^{\times}\) are maps \(\operatorname{cl}\colon\operatorname{Arm}^{\times}\to\mathbb{B}\times \operatorname{Pol}^{\times}\) and \(\operatorname{op}\colon\mathbb{B}\times\operatorname{Pol}^{\times}\to \operatorname{Arm}^{\times}\) and these maps are inverses of each other._ Proof.: We first observe that for any fixed \(w\in\mathbb{B}\), the Mobius transformation \(\sigma(w,-)\) is a diffeomorphism from \(\mathbb{S}^{d-1}\) to itself. Therefore, \(y_{i}\neq y_{j}\) implies \(\sigma(-w,y_{i})\neq\sigma(-w,y_{j})\) and \(\operatorname{op}\) maps \(\mathbb{B}\times\operatorname{Pol}^{\times}\) into \(\operatorname{Arm}^{\times}\). Similarly, \(x_{i}\neq x_{j}\) implies \(\sigma(w_{*},x_{i})\neq\sigma(w_{*},x_{j})\) and so \(\operatorname{cl}\) maps \(\operatorname{Arm}^{\times}\) into \(\mathbb{B}\times\operatorname{Pol}^{\times}\). We now prove that these maps are inverses of each other. Suppose that \(y\in\operatorname{Pol}^{\times}\). This means that the conformal barycenter \(w_{*}(y)=0\). By equivariance of the conformal barycenter under hyperbolic isometries, we then have \(w_{*}(\sigma(-w,y))=\sigma(-w,w_{*}(y))=\sigma(-w,0)=w\). Using that \(\sigma(-w,-)\) is the inverse of \(\sigma(w,-)\), we can now compute \[\operatorname{cl}(\operatorname{op}(w,y))=\operatorname{cl}( \sigma(-w,y)) =\Big{(}w_{*}(\sigma(-w,y)),\sigma\Big{(}w_{*}(\sigma(-w,y)),\sigma(-w,y) \Big{)}\Big{)}\] \[=\Big{(}w,\sigma(w,\sigma(-w,y))\Big{)}=(w,y).\] Conversely, suppose that \(x\in\operatorname{Arm}^{\times}\). 
Then \[\operatorname{op}(\operatorname{cl}(x))=\operatorname{op}\Big{(}w_{*}(x),\sigma(w_{*}(x),x)\Big{)}=\sigma\Big{(}-w_{*}(x),\sigma(w_{*}(x),x)\Big{)}=x,\] which completes the proof.

Figure 4: We will use the conformal barycenter construction to prove that \(\mathbb{B}\times\operatorname{Pol}^{\times}\) is diffeomorphic to \(\operatorname{Arm}^{\times}\) by the maps \(\operatorname{cl}\) or \(\operatorname{op}\) (Theorem 10). Since \(\mathbb{B}\) is contractible, this shows that \(\operatorname{Arm}^{\times}\) deformation retracts onto \(\operatorname{Pol}^{\times}\). The picture above shows this schematically: the torus represents the product of spheres \(\operatorname{Arm}\) with its submanifold Pol and thin lines indicate the fibers of the conformal barycenter closure which maps \(x\) to \(y\).

We will later show that \(\operatorname{Arm}^{\times}\) and \(\mathbb{B}\times\operatorname{Pol}^{\times}\) are smooth manifolds (Lemma 16) and that \(\operatorname{cl}\) and \(\operatorname{op}\) are diffeomorphisms between them (Theorem 10). In fact, our construction will show that \(\operatorname{Arm}^{\times}\) smoothly deformation retracts onto \(\operatorname{Pol}^{\times}\), as shown in Fig. 4. This will require more work. We start by using the change of variables formula to obtain a general formula for integration over \(\operatorname{Pol}\).

**Proposition 8**.: _Suppose that \(f\colon\operatorname{Pol}\to\mathbb{R}\) and \(\chi\colon\mathbb{B}\to\mathbb{R}\) are integrable functions with \(\int\chi(w)\operatorname{dvol}_{\mathbb{B}}=1\), where \(\operatorname{vol}_{\mathbb{B}}\) is the Lebesgue measure on \(\mathbb{B}\). Then_ \[\int_{\operatorname{Pol}}f(y)\operatorname{dvol}_{\varrho,r}(y)=\int_{\operatorname{Pol}^{\times}}f(y)\operatorname{dvol}_{\varrho,r}(y)=\int_{\operatorname{Arm}^{\times}}f(y_{*}(x))\,K(\operatorname{cl}(x))\operatorname{dvol}_{\varrho}(x),\] _where \(K(w,y)=\frac{\chi(w)}{J\operatorname{op}(w,y)}\), and \(J\operatorname{op}\) is the Jacobian determinant with respect to the metric \(g_{\varrho}\) on \(\operatorname{Arm}\) and the metric \(g_{\mathbb{B}}\times g_{\varrho,r}\) on \(\mathbb{B}\times\operatorname{Pol}\), where \(g_{\mathbb{B}}\) is the Euclidean metric on \(\mathbb{B}\)._

Proof.: Suppose we have an integrable function \(h\colon\mathbb{B}\times\operatorname{Pol}\to\mathbb{R}\) that we want to integrate with respect to the product measure \(\operatorname{vol}_{\mathbb{B}}\times\operatorname{vol}_{\varrho,r}\). We first observe that it suffices to integrate \(h\) over \(\mathbb{B}\times\operatorname{Pol}^{\times}\) as the result will be the same.
Assuming that \(\operatorname{cl}\colon\operatorname{Arm}^{\times}\to\mathbb{B}\times \operatorname{Pol}^{\times}\) is a diffeomorphism (Theorem 10), we can use the change of variables formula to pull back the integral of \(h\) to \(\operatorname{Arm}^{\times}\), writing \[\int_{\mathbb{B}\times\operatorname{Pol}^{\times}}h(w,y)\operatorname{dvol}_ {\mathbb{B}}(w)\operatorname{dvol}_{\varrho,r}(y)=\int_{\operatorname{Arm}^{ \times}}h(\operatorname{cl}(x))\operatorname{Jcl}(x)\operatorname{dvol}_{ \varrho}(x).\] Here \(J\operatorname{cl}(x)\) is the nonzero Jacobian determinant given by \[J\operatorname{cl}(x)=\sqrt{\det(D\operatorname{cl}(x)^{*}D\operatorname{cl} (x))},\] where we need to keep in mind that if \(\operatorname{cl}(x)=(w,y)\), the differential \(D\operatorname{cl}(x)\) is an invertible linear map \(D\operatorname{cl}(x)\colon(T_{x}\operatorname{Arm}^{\times},g_{\varrho}) \to(T_{w}\mathbb{B}\times T_{y}\operatorname{Pol}^{\times},g_{\mathbb{B}} \times g_{\varrho,r})\). The adjoint \(D\operatorname{cl}(x)^{*}\) is defined relative to these inner products. We cannot compute \(J\operatorname{cl}(x)\) directly, but \(\operatorname{cl}\) is a diffeomorphism. So, applying the change of variables formula to the inverse map \(\operatorname{op}\), we know that \[J\operatorname{cl}(x)=\frac{1}{J\operatorname{op}(\operatorname{cl}(x))}, \quad\text{where}\quad J\operatorname{op}(w,y)=\sqrt{\det(D\operatorname{op }(w,y)^{*}D\operatorname{op}(w,y))}\neq 0.\] As above, \(D\operatorname{op}(w,y)^{*}\) is the Riemannian adjoint. This yields \[\int_{\mathbb{B}\times\operatorname{Pol}^{\times}}h(w,y)\operatorname{dvol} _{\mathbb{B}}(w)\operatorname{dvol}_{\varrho,r}(y)=\int_{\operatorname{Arm}^ {\times}}\frac{h(\operatorname{cl}(x))}{J\operatorname{op}(\operatorname{ cl}(x))}\operatorname{dvol}_{\varrho}(x).\] We actually want to integrate the integrable function \(f\colon\operatorname{Pol}\to\mathbb{R}\). So at this point we specialize this formula by assuming that we have chosen some integrable \(\chi\colon\mathbb{B}\to\mathbb{R}\) and set \(h(w,y)=\chi(w)f(y)\). We then get \[\begin{split}\int_{\operatorname{Pol}^{\times}}f(y)\operatorname{ dvol}_{\varrho,r}(y)&=\frac{\int_{\mathbb{B}\times\operatorname{Pol}^{ \times}}\chi(w)f(y)\operatorname{dvol}_{\mathbb{B}}(w)\operatorname{dvol}_{ \varrho,r}(y)}{\int_{\mathbb{B}}\chi(w)\operatorname{dvol}_{\mathbb{B}}(w)} \\ &=\frac{1}{\int_{\mathbb{B}}\chi(w)\operatorname{dvol}_{\mathbb{B} }(w)}\int_{\operatorname{Arm}^{\times}}\frac{h(\operatorname{cl}(x))}{J \operatorname{op}(\operatorname{cl}(x))}\operatorname{dvol}_{\varrho}(x).\end{split} \tag{3}\] Now \(\mathrm{cl}(x)=(w_{*}(x),y_{*}(x))\), so \[\frac{h(\mathrm{cl}(x))}{J\mathrm{op}(\mathrm{cl}(x))}=f(y_{*}(x))\,K(w_{*}(x),y_{ *}(x))\quad\text{where}\quad K(w,y)\coloneqq\frac{\chi(w)}{J\mathrm{op}(w,y)}.\] If we now add our assumption that \(\int_{\mathbb{B}}\chi(w)\,\mathrm{dvol}_{\mathbb{B}}=1\), (3) becomes the statement of the Proposition, completing the proof. We have now expressed our integral over the entire space of closed polygons \(y\in\mathrm{Pol}\) in terms of an integral over the subspace of open polygons \(x\in\mathrm{Arm}^{\times}\). It is natural to ask whether we can extend the right hand integral to all of \(\mathrm{Arm}\) since the complement of \(\mathrm{Arm}^{\times}\) is a set of measure zero. 
We cannot: neither the map \(\mathrm{cl}\) nor the weight function \(K\circ\mathrm{cl}\) is well-defined on all of \(\mathrm{Arm}\setminus\mathrm{Arm}^{\times}\), since this set includes some \(x\) that do not have a conformal barycenter. Of course, we want to compute integrals with respect to the normalized volume (or probability measure) on \(\mathrm{Pol}\) given by \(P_{\varrho,r}=\frac{1}{\mathrm{vol}_{\varrho,r}(\mathrm{Pol})}\mathrm{dvol}_{\varrho,r}\). By the law of large numbers, we may use reweighted sampling to estimate expectations over \(\mathrm{Pol}\) as usual:

**Corollary 9**.: _If \(f\colon\mathrm{Pol}\to\mathbb{R}\) is integrable and if \(x^{(j)}\) is a sequence of independent samples drawn from \(P_{\varrho}\) on \(\mathrm{Arm}\), then_ \[\frac{\sum_{j=1}^{N}f(y_{*}(x^{(j)}))\,K(\mathrm{cl}(x^{(j)}))}{\sum_{j=1}^{N}\,K(\mathrm{cl}(x^{(j)}))}\to\int_{\mathrm{Pol}}f(y)\,\,\mathrm{d}P_{\varrho,r}(y)\quad\text{almost surely as $N\to\infty$}. \tag{4}\]

## 3 Calculating \(J\mathrm{op}(w,y)\)

Our eventual goal is to compute the weight function \(K(w,y)\). We start by proving:

**Theorem 10**.: _For \((w,y)\in\mathbb{B}\times\mathrm{Pol}^{\times}\) we have_ \[J\mathrm{op}(w,y)=\left(\frac{2}{1-|w|^{2}}\right)^{d}\frac{\det\!\left(\sum_{i=1}^{n}r_{i}\left(I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\right)\right)}{\sqrt{\det\!\left(\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\left(I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\right)\right)}}\left(\prod_{i=1}^{n}\frac{1-|w|^{2}}{|w+y_{i}|^{2}}\right)^{d-1}\neq 0. \tag{5}\] _For fixed \(d\), this can be computed in \(O(n)\) time and memory. Since this determinant does not vanish, \(\mathrm{op}\) is a diffeomorphism from \(\mathbb{B}\times\mathrm{Pol}^{\times}\) to \(\mathrm{Arm}^{\times}\) and hence its inverse map \(\mathrm{cl}\) is a diffeomorphism from \(\mathrm{Arm}^{\times}\) to \(\mathbb{B}\times\mathrm{Pol}^{\times}\)._

This theorem is surprising, because the matrix \(D\mathrm{op}(w,y)^{*}\,D\mathrm{op}(w,y)\) is of size \(n\,(d-1)\times n\,(d-1)\), so we would expect its determinant to require \(O(n^{3})\) time and \(O(n^{2})\) memory to compute. However, (5) only contains \(d\times d\) determinants and so it may be evaluated in \(O(n)\) time and memory.
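Taken together, Corollary 9 and Theorem 10 describe a linear-time reweighted sampler: draw from \(P_{\varrho}\), close, evaluate (5), and form a self-normalized weighted mean. The estimator itself is a one-pass accumulation; here is a sketch under the assumption that the values \(f(y_{*}(x^{(j)}))\) and weights \(K(\mathrm{cl}(x^{(j)}))\) have already been computed (all names are ours):

```cpp
#include <vector>

// Self-normalized estimator from (4): the weighted mean of f-values with
// weights K converges almost surely to the P_{rho,r}-expectation of f.
double reweighted_mean(const std::vector<double> &f_values,
                       const std::vector<double> &weights) {
    double num = 0.0, den = 0.0;
    for (std::size_t j = 0; j < f_values.size(); ++j) {
        num += f_values[j] * weights[j];
        den += weights[j];
    }
    return num / den;
}
```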
The proof of this theorem will require us to do some detailed matrix computations. Accordingly, we take a moment to establish a system of coordinates and maps, along with some notation.

**Lemma 11**.: _We may extend \(\operatorname{op}\colon\mathbb{B}\times\operatorname{Pol}^{\times}\subset\mathbb{B}\times\mathbb{S}^{n}\to\operatorname{Arm}^{\times}\subset\mathbb{S}^{n}\) to a smooth map \(\widetilde{\operatorname{op}}\colon\,U\to(\mathbb{R}^{d})^{n}\) where \(U\subset(\mathbb{R}^{d})^{n+1}\) is an open neighborhood of \(\mathbb{B}\times\bar{\mathbb{B}}^{n}\) by extending \(\sigma\) to \(\widetilde{\sigma}\colon\,V\to\mathbb{R}^{d}\), where \(V\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\) is an open neighborhood of \(\mathbb{B}\times\bar{\mathbb{B}}\)._

Proof.: Using (2) we may define the extension \[\widetilde{\sigma}(w,z)\coloneqq\frac{(1-|w|^{2})\,z-(1+|z|^{2}-2\,\langle w,z\rangle)\,w}{1-2\langle w,z\rangle+|w|^{2}\,|z|^{2}}.\] The right hand side is in \(\mathbb{R}^{d}\) and a smooth function defined for all \(w\), \(z\in\mathbb{R}^{d}\) where the denominator does not vanish. Now the denominator is \[1-2\,\langle w,z\rangle+|w|^{2}\,|z|^{2}\geq 1-2\,|w|\,|z|+|w|^{2}\,|z|^{2}=(1-|w|\,|z|)^{2},\] so it does not vanish when \(|w|\,|z|<1\). Thus we may define \[V\coloneqq\bigcup_{0<\lambda<1}(\lambda\,\mathbb{B})\times\left(\tfrac{1}{\lambda}\,\mathbb{B}\right)\supset\mathbb{B}\times\bar{\mathbb{B}}.\] As a union of open sets, the set \(V\) is clearly open. Now \(\operatorname{op}(w,y)=\left(\sigma(-w,y_{1}),\ldots,\sigma(-w,y_{n})\right)\), so we may let \(\widetilde{\operatorname{op}}(w,y)\coloneqq\left(\widetilde{\sigma}(-w,y_{1}),\ldots,\widetilde{\sigma}(-w,y_{n})\right)\) and observe that \(\widetilde{\operatorname{op}}\) is defined on the analogous set \[U\coloneqq\bigcup_{0<\lambda<1}(\lambda\,\mathbb{B})\times\left(\tfrac{1}{\lambda}\,\mathbb{B}\right)^{n}\supset\mathbb{B}\times\bar{\mathbb{B}}^{n}.\qed\]

We will now assume \(w\in\mathbb{B}\) and \(y\in\operatorname{Pol}^{\times}\), define \(x\in\operatorname{Arm}^{\times}\) by \(x\coloneqq\operatorname{op}(w,y)\), and compute \(D\operatorname{op}\) as the restriction of \(D\widetilde{\operatorname{op}}\) to the linear subspace \(T_{(w,y)}(\mathbb{B}\times\operatorname{Pol}^{\times})\subset T_{(w,y)}(\mathbb{R}^{d})^{n+1}\) in the usual coordinates on \((\mathbb{R}^{d})^{n+1}\). Now we have the following commutative diagram:

Our next goal is to factor these maps through \(T_{y}\operatorname{Arm}^{\times}\) and \(T_{y}(\mathbb{R}^{d})^{n}\). To do so, we examine the structure of \(D\widetilde{\operatorname{op}}\) as a block matrix. We define \(d\times d\) matrices \[\tilde{a}_{i}\coloneqq-D_{1}\widetilde{\sigma}(-w,y_{i})\quad\text{and}\quad\tilde{b}_{i}\coloneqq D_{2}\widetilde{\sigma}(-w,y_{i}) \tag{6}\] where \(D_{1}\) and \(D_{2}\) are the derivatives of \(\widetilde{\sigma}(-,-)\) with respect to the first and second (vector) arguments. Then we have \[D\widetilde{\operatorname{op}}(w,y)=\begin{pmatrix}\tilde{a}_{1}&\tilde{b}_{1}&0&\cdots&0\\ \vdots&0&\ddots&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&0\\ \tilde{a}_{n}&0&\cdots&0&\tilde{b}_{n}\end{pmatrix}=\left(\operatorname{vec}(\tilde{a})\quad\text{diag}(\tilde{b})\right). \tag{7}\] Here \(\operatorname{vec}(\tilde{a})\) denotes the \((n\,d)\times d\) matrix that results from the \(d\times d\) matrices \(\tilde{a}_{1},\dots,\tilde{a}_{n}\) by stacking them on top of each other and \(\operatorname{diag}(\tilde{b})\) denotes the \((n\,d)\times(n\,d)\) block diagonal matrix with the \(d\times d\) matrices \(\tilde{b}_{i}\) on the diagonal. Since \(\widetilde{\sigma}(-w,-)\) is a Mobius transformation, its derivative \(D_{2}\widetilde{\sigma}(-w,-)\) is invertible wherever it is defined. Thus the \(\tilde{b}_{i}\) are invertible matrices for \((w,y)\in U\). We let \(\tilde{Z}\coloneqq\operatorname{diag}(\tilde{b})\). This is \(D\widetilde{\operatorname{op}}(w,-)\) evaluated at the point \(y\). Since \(\tilde{Z}\) is block diagonal and the blocks are invertible, \(\tilde{Z}\) is invertible. Since \(\tilde{Z}\) is the derivative of \(\widetilde{\operatorname{op}}(w,-)\) at \(y\), it must map \(T_{y}(\mathbb{R}^{d})^{n}\to T_{x}(\mathbb{R}^{d})^{n}\). Analogously, we let \(Z\coloneqq D\operatorname{op}(w,-)\) at \(y\). It follows that \(Z:T_{y}\text{Arm}^{\times}\to T_{x}\text{Arm}^{\times}\). Further, \(Z\) is the block product of invertible linear maps \(b_{i}\coloneqq D_{2}\sigma(-w,y_{i})\), which map \(T_{y_{i}}\mathbb{S}^{d-1}\) to \(T_{x_{i}}\mathbb{S}^{d-1}\), so \(Z\) is invertible. It is also the restriction of \(\tilde{Z}\) to \(T_{y}\text{Arm}^{\times}\subset T_{y}(\mathbb{R}^{d})^{n}\).
Now we can factor \(D\widetilde{\operatorname{op}}(w,y)=\tilde{Z}\tilde{A}\) and \(D\text{op}(w,y)=Z\,A\), where \(\tilde{A}\coloneqq\tilde{Z}^{-1}D\widetilde{\operatorname{op}}(w,y)\) and \(A\coloneqq Z^{-1}D\text{op}(w,y)\). Note that \(\tilde{A}\) takes the following simple form: \[\tilde{A}=\begin{pmatrix}\tilde{c}_{1}&I_{d}&0&\cdots&0\\ \vdots&0&\ddots&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&0\\ \tilde{c}_{n}&0&\cdots&0&I_{d}\end{pmatrix}\quad\text{with}\quad\tilde{c}_{i}\coloneqq\tilde{b}_{i}^{-1}\,\tilde{a}_{i}. \tag{8}\] We summarize this construction by the commutative diagram. This factorization of \(D\text{op}\) leads us to the following observation:

**Lemma 12**.: \[J\text{op}(w,y)^{2}=\det\bigl{(}D\text{op}(w,y)^{*}D\text{op}(w,y)\bigr{)}=\det(A^{*}Z^{*}Z\,A)=\det(A^{*}A)\,\det(Z^{*}Z).\]

Proof.: The first two equalities are definitions. The last follows from the fact that the spaces \(T_{(w,y)}(\mathbb{B}\times\text{Pol}^{\times})\), \(T_{y}\text{Arm}^{\times}\) and \(T_{x}\text{Arm}^{\times}\) all have the same dimension. Therefore, we could write the linear maps \(A\) and \(Z\) as square matrices, allowing us to reorder before taking determinants.

Now we must compute \(\det(Z^{*}Z)\) and \(\det(A^{*}A)\).

**Lemma 13**.: _We have_ \[\det(Z^{*}Z)=\prod_{i=1}^{n}\biggl{(}\frac{1-|w|^{2}}{|w+y_{i}|^{2}}\biggr{)}^{2\,(d-1)},\] _which cannot vanish because \(|y_{i}|=1\) and \(|w|<1\)._

Proof.: The map \[Z^{*}Z\colon T_{y}\mathrm{Arm}=T_{y_{1}}\mathbb{S}\times\cdots\times T_{y_{n}}\mathbb{S}\to T_{y}\mathrm{Arm}=T_{y_{1}}\mathbb{S}\times\cdots\times T_{y_{n}}\mathbb{S}\] is the direct product of \(b_{i}^{*}b_{i}\colon T_{y_{i}}\mathbb{S}\to T_{y_{i}}\mathbb{S}\), where we recall \(b_{i}\colon T_{y_{i}}\mathbb{S}\to T_{x_{i}}\mathbb{S}\) is the map induced by \(\tilde{b}_{i}\) and where the adjoint is with respect to the Riemannian metric \(\varrho_{i}^{2}\,g_{\mathbb{S}}\) on \(\mathbb{S}\). Thus \[\det(Z^{*}Z)=\prod_{i=1}^{n}\det(b_{i}^{*}b_{i}).\] Note that \(\tilde{b}_{i}\colon(\mathbb{R}^{d},\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}})\to(\mathbb{R}^{d},\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}})\) is a conformal map as it is the derivative of the Mobius transformation \(\sigma(-w,-)\). Its conformal factor \(\lambda_{i}\) with respect to the standard metric on \(\mathbb{R}^{d}\) can be computed easily and is \[\lambda_{i}\coloneqq\frac{1-|w|^{2}}{1+2\,\langle w,y_{i}\rangle+|w|^{2}}=\frac{1-|w|^{2}}{|w+y_{i}|^{2}}\] using the fact that \(|y_{i}|=1\). Since \(\tilde{b}_{i}\) maps \(T_{y_{i}}\mathbb{S}\) into \(T_{x_{i}}\mathbb{S}\), the induced mapping \(b_{i}\) has the same conformal factor with respect to the standard metric \(g_{\mathbb{S}}\). Scaling \(g_{\mathbb{S}}\) by \(\varrho_{i}^{2}\) does not change the conformal factor of \(b_{i}\). The Riemannian adjoint \(b_{i}^{*}\) is also conformal with the same conformal factor \(\lambda_{i}\). Hence \(b_{i}^{*}b_{i}\) has conformal factor \(\lambda_{i}^{2}\). Since \(T_{y_{i}}\mathbb{S}\) is \((d-1)\)-dimensional, we get \(\det(b_{i}^{*}b_{i})=\lambda_{i}^{2\,(d-1)}\), and the result follows.

We now set out to compute \(\det(A^{*}A)\). We will do this in several steps.
**Lemma 14**.: _Consider \(\mathbb{R}^{d}\times(\mathbb{R}^{d})^{n}\) equipped with the product metric of the standard inner product on \(\mathbb{R}^{d}\) and the rescaled inner product_ \[\langle u,v\rangle_{\varrho}=\sum_{i=1}^{n}\varrho_{i}^{2}\,\langle u_{i},v_{i}\rangle\] _on \((\mathbb{R}^{d})^{n}\) and let \(P\) be the orthogonal projector onto the subspace \(T_{w}\mathbb{B}\times T_{y}\mathrm{Pol}^{\times}\). Then_ \[\det(A^{*}A)=\det(P\,\tilde{A}^{*}\tilde{A}\,P+I_{(n+1)\,d}-P),\] _where adjoints are with respect to this inner product._

Proof.: First, \(A^{*}A\) and \(P\,\tilde{A}^{*}\tilde{A}\,P=P^{*}\tilde{A}^{*}\tilde{A}P\) are self-adjoint matrices and may be diagonalized. Let \(e_{1},\ldots,e_{n\,(d-1)}\in T_{w}\mathbb{B}\times T_{y}\mathrm{Pol}^{\times}\) be the orthonormal eigenvectors and \(\lambda_{1},\ldots,\lambda_{n\,(d-1)}\) be the corresponding eigenvalues of \(A^{*}A\). Because \(A\) coincides with \(\tilde{A}\,P\) on \(T_{w}\mathbb{B}\times T_{y}\mathrm{Pol}^{\times}\), these are also eigenvectors and eigenvalues of \(P\,\tilde{A}^{*}\tilde{A}\,P\). We complete this basis to an orthonormal basis for \(\mathbb{R}^{d}\times(\mathbb{R}^{d})^{n}\) by appending some vectors \(e_{n\,(d-1)+1},\ldots,e_{(n+1)\,d}\). These new vectors form an orthonormal basis for the null space of \(P\), which is the image of the orthogonal projector \(I_{(n+1)\,d}-P\), so we may write \[P\tilde{A}^{*}\tilde{A}P+(I_{(n+1)\,d}-P)=\sum_{i=1}^{n(d-1)}\lambda_{i}\,e_{i}\,e_{i}^{T}+\sum_{i=n\,(d-1)+1}^{(n+1)\,d}e_{i}\,e_{i}^{T},\] proving that \(\det(P\tilde{A}^{*}\tilde{A}P+I_{(n+1)\,d}-P)=\prod_{i=1}^{n(d-1)}\lambda_{i}=\det(A^{*}A)\) as required.

### The projector \(P\)

Our next task is to find a suitable representation for \(P\). Note that \(\operatorname{Pol}\) consists of the points in \((\mathbb{R}^{d})^{n}\) where the polygon closes (so \(\sum_{i=1}^{n}r_{i}y_{i}=0\)) and the \(y_{i}\) have unit norm. This means that we can write \(\mathbb{B}\times\operatorname{Pol}\) as the zero set of the map \[\Theta\colon\mathbb{B}\times(\mathbb{R}^{d})^{n}\to\mathbb{R}^{d}\times\mathbb{R}^{n},\quad\Theta(w,y)=\begin{pmatrix}\sum_{i=1}^{n}r_{i}y_{i}\\ \frac{1}{2}(|y_{1}|^{2}-1)\\ \vdots\\ \frac{1}{2}(|y_{n}|^{2}-1)\end{pmatrix}.\] The derivative \(B\coloneqq D\Theta(w,y)\) is a linear mapping \[B\colon T_{w}\mathbb{B}\times T_{y}(\mathbb{R}^{d})^{n}\cong\mathbb{R}^{d}\oplus(\mathbb{R}^{d})^{n}\to\mathbb{R}^{d}\times\mathbb{R}^{n}.\] Since \(y\mapsto\sum_{i=1}^{n}r_{i}y_{i}\) is linear in the \(y_{i}\), it is its own derivative; the derivative of \(\frac{1}{2}(|y_{i}|^{2}-1)\) is easily computed to be \(y_{i}^{T}\) (when \(|y_{i}|=1\)). Therefore, with respect to this decomposition of vector spaces, \(B\) can be written as a block matrix \[B=\begin{pmatrix}0&r_{1}\,I_{d}&\cdots&\cdots&r_{n}\,I_{d}\\ 0&y_{1}^{T}&0&\cdots&0\\ \vdots&0&\ddots&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&0\\ 0&0&\cdots&0&y_{n}^{T}\end{pmatrix}=\begin{pmatrix}0&\mathcal{Q}\\ 0&Y^{T}\end{pmatrix},\] where \(\mathcal{Q}\coloneqq\left(r_{1}\,I_{d}\;\;\cdots\;\;\;r_{n}\,I_{d}\right)\) and \(Y\coloneqq\operatorname{diag}(y_{1},\ldots,y_{n})\) are matrices of size \(d\times(d\,n)\) and \((n\,d)\times n\), respectively. We claim that \(B\) is surjective whenever \(y\in\operatorname{Pol}^{\times}\). We prove this by establishing the following Lemma.

**Lemma 15**.: _Let \((w,y)\in\mathbb{B}\times\operatorname{Pol}^{\times}\) and \(B\coloneqq D\Theta(w,y)\)._
_Then the matrix \(B\,B^{*}\) is invertible, where \(B^{*}\) is the adjoint of \(B\) with respect to the metric \(g_{1}\coloneqq\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}}\times\langle\cdot,\cdot\rangle_{\varrho}\) on \(\mathbb{R}^{d}\oplus\mathbb{R}^{n\,d}\) and with respect to the standard metric \(g_{2}\coloneqq\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}}\times\langle\cdot,\cdot\rangle_{\mathbb{R}^{n}}\) on \(\mathbb{R}^{d}\oplus\mathbb{R}^{n}\)._

Proof.: Recall that we defined the metric \(\langle\cdot,\cdot\rangle_{\varrho}\) in Section 1. Now the definition of adjoint is that for all \(u=(u_{1},u_{2})\in\mathbb{R}^{d}\oplus(\mathbb{R}^{d})^{n}\) and all \(v=(v_{1},v_{2})\in\mathbb{R}^{d}\oplus\mathbb{R}^{n}\) we have \(g_{2}(B\,u,v)=g_{1}(u,B^{*}\,v)\). We compute \[g_{2}(B\,u,v) =\langle\mathcal{Q}\,u_{2},v_{1}\rangle_{\mathbb{R}^{d}}+\langle Y^{\mathsf{T}}u_{2},v_{2}\rangle_{\mathbb{R}^{n}}=\langle u_{2},\mathcal{Q}^{\mathsf{T}}v_{1}+Y\,v_{2}\rangle_{(\mathbb{R}^{d})^{n}}\] \[=\langle u_{2},R^{-2}\,\mathcal{Q}^{\mathsf{T}}v_{1}+R^{-2}\,Y\,v_{2}\rangle_{\varrho}=g_{1}\Big{(}(u_{1},u_{2}),(0,R^{-2}\,\mathcal{Q}^{\mathsf{T}}v_{1}+R^{-2}\,Y\,v_{2})\Big{)},\] where \(R\) is the block-diagonal matrix of size \((n\,d)\times(n\,d)\) with blocks \(\varrho_{i}\,I_{d}\) on the main diagonal. That means the adjoint \(B^{*}\) has the following block structure: \[B^{*}=\begin{pmatrix}0&0\\ R^{-2}\mathcal{Q}^{\mathsf{T}}&R^{-2}\,Y\end{pmatrix}.\] Note that \(R^{-2}\,Y=Y\,\mathrm{diag}(\varrho_{1}^{-2},\ldots,\varrho_{n}^{-2})\) and \(Y^{\mathsf{T}}\,Y=\mathrm{diag}(|y_{1}|^{2},\ldots,|y_{n}|^{2})=I_{n}\), because \(y\in\mathrm{Pol}^{\times}\). Hence we may write \(B\,B^{*}\) as the following block matrix: \[B\,B^{*}=\begin{pmatrix}0&\mathcal{Q}\\ 0&Y^{\mathsf{T}}\end{pmatrix}\begin{pmatrix}0&0\\ R^{-2}\mathcal{Q}^{\mathsf{T}}&R^{-2}Y\end{pmatrix}=\begin{pmatrix}\mathcal{Q}\,R^{-2}\mathcal{Q}^{\mathsf{T}}&\mathcal{Q}\,R^{-2}Y\\ Y^{\mathsf{T}}R^{-2}\mathcal{Q}^{\mathsf{T}}&\mathrm{diag}(\varrho^{-2})\end{pmatrix}.\] Any \(2\times 2\) block matrix whose lower right block is invertible may be written in UDL form \[\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\begin{pmatrix}I&BD^{-1}\\ 0&I\end{pmatrix}\begin{pmatrix}A-BD^{-1}C&0\\ 0&D\end{pmatrix}\begin{pmatrix}I&0\\ D^{-1}C&I\end{pmatrix}, \tag{9}\] where the upper left block of the center matrix is the Schur complement of the lower right block of the original matrix. For our matrix \(B\,B^{*}\), this Schur complement is \[\gamma \coloneqq\mathcal{Q}\,R^{-2}\mathcal{Q}^{\mathsf{T}}-\mathcal{Q}\,R^{-2}Y\,(\mathrm{diag}(\varrho^{-2}))^{-1}Y^{\mathsf{T}}R^{-2}\mathcal{Q}^{\mathsf{T}} \tag{10}\] \[=\mathcal{Q}\,R^{-2}\mathcal{Q}^{\mathsf{T}}-\mathcal{Q}\,R^{-1}Y\,Y^{\mathsf{T}}R^{-1}\mathcal{Q}^{\mathsf{T}}\] \[=\mathcal{Q}\,R^{-1}(I_{n\,d}-Y\,Y^{\mathsf{T}})\,R^{-1}\mathcal{Q}^{\mathsf{T}}\] and we may factorize \(B\,B^{*}\) as follows: \[B\,B^{*}=\begin{pmatrix}I_{d}&\mathcal{Q}\,Y\\ 0&I_{n}\end{pmatrix}\begin{pmatrix}\gamma&0\\ 0&\mathrm{diag}(\varrho^{-2})\end{pmatrix}\begin{pmatrix}I_{d}&0\\ Y^{\mathsf{T}}\mathcal{Q}^{\mathsf{T}}&I_{n}\end{pmatrix}. \tag{11}\] The two outer factors are triangular matrices with ones on the main diagonals. Thus they are invertible. The center matrix is block diagonal and the block \(\mathrm{diag}(\varrho^{-2})\) is invertible by assumption. Hence \(B\,B^{*}\) is invertible if and only if the Schur complement \(\gamma\) is invertible.
Keeping in mind that \(R^{-1}\) and \(I_{n\,d}-Y\,Y^{\mathsf{T}}\) are block-diagonal, and that multiplying by \(\mathcal{Q}\) takes a weighted sum over rows or columns, this matrix can be written as a weighted sum of the diagonal blocks of \(I_{n\,d}-Y\,Y^{\mathsf{T}}\): \[\gamma=\mathcal{Q}\,R^{-1}(I_{n\,d}-Y\,Y^{\mathsf{T}})\,R^{-1}\mathcal{Q}^{\mathsf{T}}=\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\,(I_{d}-y_{i}y_{i}^{\mathsf{T}}).\] We see that \(\gamma\) is symmetric and positive semidefinite. Assume \(\gamma\) is not positive definite. Then there is a unit vector \(V\in\mathbb{S}\) with \[0=\langle V,\gamma\,V\rangle=\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\,(1-\langle y_{i},V\rangle^{2}).\] This can only happen if \(y_{i}\in\{-V,V\}\) for all \(i\in\{\,1,\ldots,n\,\}\). Since we require \(n\geq 3\), this implies that there must be at least one pair of indices \(i\neq j\) with \(y_{i}=y_{j}\). But this contradicts the condition \(y\in\mathrm{Pol}^{\times}\). We note that this is why we introduced \(\mathrm{Pol}^{\times}\) in the first place. Hence \(\gamma\) must be positive definite, showing that \(B\,B^{*}\) is invertible and that \(B\) is surjective.

As a side effect, by the implicit function theorem (or transversality of \(\Theta\) to \(0\)) we have shown the following:

**Lemma 16**.: _The set \(\mathbb{B}\times\mathrm{Pol}^{\times}\) is a smooth manifold with tangent space \(T_{(w,y)}(\mathbb{B}\times\mathrm{Pol}^{\times})=\ker(B)\)._

We are now in a position to accomplish the main goal of this section:

**Proposition 17**.: _The orthoprojector \(P\) onto \(T_{(w,y)}(\mathbb{B}\times\mathrm{Pol}^{\times})\) with respect to the scaled metric \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{d}}\times\langle\cdot,\cdot\rangle_{\varrho}\) is given by_ \[P=\begin{pmatrix}I_{d}&0\\ 0&E\end{pmatrix},\quad\text{ where }\quad E\coloneqq Q-R^{-2}\,Q\,\mathcal{Q}^{\mathsf{T}}\gamma^{-1}\mathcal{Q}\,Q,\quad\text{and}\quad Q\coloneqq I_{n\,d}-Y\,Y^{\mathsf{T}}. \tag{12}\] _Further, \(E\) is the orthogonal projector with respect to \(\langle\cdot,\cdot\rangle_{\varrho}\) from \((\mathbb{R}^{d})^{n}\) onto \(T_{y}\mathrm{Pol}^{\times}\)._

Proof.: Since the tangent space is the kernel of \(B\) and \(B\) is surjective, it follows that \[P=I_{(n+1)\,d}-B^{*}(B\,B^{*})^{-1}\,B.\] Inverting \(B\,B^{*}\) is easy with the factorization from (11): \[(B\,B^{*})^{-1}=\begin{pmatrix}I_{d}&0\\ -Y^{\mathsf{T}}\mathcal{Q}^{\mathsf{T}}&I_{n}\end{pmatrix}\begin{pmatrix}\gamma^{-1}&0\\ 0&\mathrm{diag}(\varrho^{2})\end{pmatrix}\begin{pmatrix}I_{d}&-\mathcal{Q}\,Y\\ 0&I_{n}\end{pmatrix}.\] It is only a matter of some algebra to obtain the expression for \(P\) above. Since \(P\) is an orthogonal projector, it has to satisfy \(P^{*}=P\) and \(P\,P=P\). For \(E\) this implies \[E^{*}=E\quad\text{and}\quad E\,E=E, \tag{13}\] where \(E^{*}\) denotes the adjoint with respect to the metric \(\langle\cdot,\cdot\rangle_{\varrho}\). Thus, \(E\) is also an orthogonal projector.
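Note that \(\gamma\), like the matrix \(\sum_{i}r_{i}(I_{d}-y_{i}y_{i}^{\mathsf{T}})\) that appears below, is a weighted sum of \(n\) rank-\((d-1)\) projectors and so can be assembled in \(O(n)\) time. A minimal sketch for \(d=3\) (all names are ours, not those of [4]):

```cpp
#include <array>
#include <vector>

using Mat3 = std::array<std::array<double, 3>, 3>;

// gamma = sum_i (r_i^2 / rho_i^2) (I_3 - y_i y_i^T), assembled in O(n).
Mat3 gamma_matrix(const std::vector<std::array<double, 3>> &y,
                  const std::vector<double> &r, const std::vector<double> &rho) {
    Mat3 g{};  // zero-initialized
    for (std::size_t i = 0; i < y.size(); ++i) {
        const double c = (r[i] * r[i]) / (rho[i] * rho[i]);
        for (int a = 0; a < 3; ++a)
            for (int b = 0; b < 3; ++b)
                g[a][b] += c * ((a == b ? 1.0 : 0.0) - y[i][a] * y[i][b]);
    }
    return g;
}

// Determinant of a 3x3 matrix by cofactor expansion.
double det3(const Mat3 &m) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}
```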
### Determinant of \(P\,\tilde{A}^{*}\,\tilde{A}\,P+I_{(n+1)\,d}-P\)

Recall that \(\tilde{c}_{i}\coloneqq\tilde{b}_{i}^{-1}\,\tilde{a}_{i}\), where \(\tilde{a}_{i}\) and \(\tilde{b}_{i}\) were defined in (6) and \(\tilde{c}_{i}\) was defined in (8). To compute the determinant of \(P\,\tilde{A}^{*}\,\tilde{A}\,P+I_{(n+1)\,d}-P\), we will eventually need more information about \(\tilde{c}_{i}\). So we start by deriving an explicit formula.

**Proposition 18**.: _When \(y\in\mathrm{Arm}\), each \(\tilde{c}_{i}\) is of the form_ \[\tilde{c}_{i}=\frac{2}{1-|w|^{2}}\,\Big{(}(1+\langle w,y_{i}\rangle)\,I_{d}-(y_{i}+w)\,y_{i}^{\mathsf{T}}\Big{)} \tag{14}\] _and further,_ \[\left(I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\right)\tilde{c}_{i}=\tilde{c}_{i}. \tag{15}\]

Proof.: We start by recalling that \[\widetilde{\sigma}(w,z)\coloneqq\frac{(1-|w|^{2})\,z-(1+|z|^{2}-2\,\langle w,z\rangle)\,w}{1-2\langle w,z\rangle+|w|^{2}\,|z|^{2}}\] while \(\tilde{a}_{i}=-D_{1}\widetilde{\sigma}(-w,y_{i})\) and \(\tilde{b}_{i}=D_{2}\widetilde{\sigma}(-w,y_{i})\) (see (6)). Mechanically differentiating the formula above using the identity \((u/v)^{\prime}=(1/v)\,(u^{\prime}-(u/v)\,v^{\prime})\), one eventually gets \[D_{1}\widetilde{\sigma}(s,z) =\frac{1}{1-2\,\langle s,z\rangle+|s|^{2}\,|z|^{2}}\Big{(}2\,s\,z^{\mathsf{T}}-2\,z\,s^{\mathsf{T}}-(1+|z|^{2}-2\,\langle s,z\rangle)\,I_{d}+2\,\widetilde{\sigma}(s,z)\,(z^{\mathsf{T}}-|z|^{2}\,s^{\mathsf{T}})\Big{)}\] \[D_{2}\widetilde{\sigma}(s,z) =\frac{1}{1-2\,\langle s,z\rangle+|s|^{2}\,|z|^{2}}\,\Big{(}(1-|s|^{2})\,I_{d}-2\,s\,(z^{\mathsf{T}}-s^{\mathsf{T}})+2\,\widetilde{\sigma}(s,z)\,(s^{\mathsf{T}}-|s|^{2}\,z^{\mathsf{T}})\Big{)}.\] For the rest of the proof, we will suppress the index on \(y_{i}\) for clarity, writing \(y\) instead. Substituting \(z=y\) with \(|y|=1\), this simplifies to \[D_{1}\widetilde{\sigma}(s,y) =\frac{2}{1-2\,\langle s,y\rangle+|s|^{2}}\Big{(}s\,y^{\mathsf{T}}-y\,s^{\mathsf{T}}-(1-\langle s,y\rangle)\,I_{d}+\widetilde{\sigma}(s,y)\,(y^{\mathsf{T}}-s^{\mathsf{T}})\Big{)}\] \[D_{2}\widetilde{\sigma}(s,y) =\frac{2}{1-2\,\langle s,y\rangle+|s|^{2}}\,\Bigg{(}\frac{1-|s|^{2}}{2}\,I_{d}-s\,(y^{\mathsf{T}}-s^{\mathsf{T}})+\widetilde{\sigma}(s,y)\,(s^{\mathsf{T}}-|s|^{2}\,y^{\mathsf{T}})\Bigg{)}.\] With the abbreviation \(x\coloneqq\widetilde{\sigma}(-w,y)\), we obtain \[a\coloneqq-D_{1}\widetilde{\sigma}(-w,y) =\frac{2}{1+2\,\langle w,y\rangle+|w|^{2}}\Big{(}w\,y^{\mathsf{T}}-y\,w^{\mathsf{T}}+(1+\langle w,y\rangle)\,I_{d}-x\,(y^{\mathsf{T}}+w^{\mathsf{T}})\Big{)}\] \[b\coloneqq D_{2}\widetilde{\sigma}(-w,y) =\frac{2}{1+2\,\langle w,y\rangle+|w|^{2}}\,\Bigg{(}\frac{1-|w|^{2}}{2}\,I_{d}+w(y^{\mathsf{T}}+w^{\mathsf{T}})-x\,(w^{\mathsf{T}}+|w|^{2}\,y^{\mathsf{T}})\Bigg{)}.\] We claim that when \(|y|=1\), we have \(b^{-1}a=c\), where \[c\coloneqq\frac{2}{1-|w|^{2}}\,\Big{(}(1+\langle w,y\rangle)\,I_{d}-(y+w)\,y^{\mathsf{T}}\Big{)}.\] Observe that \(x=\widetilde{\sigma}(-w,y)\) is in the plane of \(w\), \(y\), and \(0\). Therefore, from the formulas above, we see that \(a\), \(b\), and \(c\) map this plane to itself. In fact, \(a\), \(b\), and \(c\) also preserve the orthogonal complement of this plane. If we let \(e_{1}=y\) and \(e_{2}\) be a unit vector in this plane perpendicular to \(y\), we may complete this to an orthonormal basis for \(\mathbb{R}^{d}\). In this basis, the matrices \(a\), \(b\) and \(c\) are block-diagonal with upper left \(2\times 2\) blocks and lower right \((d-2)\times(d-2)\) blocks. For convenience, we define \(\alpha=2/(1+2\,\langle w,y\rangle+|w|^{2})\).
Then we compute \[a=\begin{pmatrix}a_{11}&a_{12}&0\\ a_{21}&a_{22}&0\\ 0&0&\alpha(1+\langle w,y\rangle)I_{d-2}\end{pmatrix},\ b=\begin{pmatrix}b_{11}&b_{12}&0\\ b_{21}&b_{22}&0\\ 0&0&\alpha\frac{1-|w|^{2}}{2}I_{d-2}\end{pmatrix},\ c=\begin{pmatrix}c_{11}&c_{12}&0\\ c_{21}&c_{22}&0\\ 0&0&\frac{2(1+\langle w,y\rangle)}{1-|w|^{2}}I_{d-2}\end{pmatrix}.\] It suffices to show that \(b\,c=a\), which we may do block-by-block. It is already clear that the product of the lower right blocks of \(b\) and \(c\) is equal to the corresponding block of \(a\). We are left checking the upper left \((2\times 2)\) blocks. If \(w=r\cos(\theta)\,e_{1}+r\sin(\theta)\,e_{2}\), the upper left blocks of \(a\), \(b\), and \(c\) are: \[\begin{pmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{pmatrix} =\begin{pmatrix}\frac{4\,r^{2}\sin^{2}(\theta)\,(r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}&-\frac{4\,r\,\sin(\theta)\,(r\cos(\theta)+1)^{2}}{(r^{2}+2\,r\cos(\theta)+1)^{2}}\\ -\frac{2\,r\,\sin(\theta)\,(r^{2}\cos(2\,\theta)+2\,r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}&\frac{2\,(r\cos(\theta)+1)(r^{2}\cos(2\,\theta)+2\,r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}\end{pmatrix},\] \[\begin{pmatrix}b_{11}&b_{12}\\ b_{21}&b_{22}\end{pmatrix} =\begin{pmatrix}-\frac{(r^{2}-1)\,(r^{2}\cos(2\,\theta)+2\,r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}&\frac{2\,r\,(r^{2}-1)\,\sin(\theta)\,(r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}\\ -\frac{2\,r\,(r^{2}-1)\,\sin(\theta)\,(r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}&-\frac{(r^{2}-1)(r^{2}\cos(2\,\theta)+2\,r\cos(\theta)+1)}{(r^{2}+2\,r\cos(\theta)+1)^{2}}\end{pmatrix},\] \[\begin{pmatrix}c_{11}&c_{12}\\ c_{21}&c_{22}\end{pmatrix} =\begin{pmatrix}0&0\\ \frac{2\,r\sin(\theta)}{r^{2}-1}&-\frac{2(r\cos(\theta)+1)}{r^{2}-1}\end{pmatrix}.\] Now it is easy to check that the product of the upper left blocks of \(b\) and \(c\) is equal to the corresponding block of \(a\), as required for (14). To prove (15), we just observe that in our basis, \[I_{d}-yy^{\mathsf{T}}=\begin{pmatrix}0&0&0\\ 0&1&0\\ 0&0&I_{d-2}\end{pmatrix}.\]
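As a quick consistency check of (14) (ours, not part of the original argument), consider \(w=0\): there \(\alpha=2\) and \(x=\widetilde{\sigma}(0,y)=y\), so the formulas above give \[a=2\,\big{(}I_{d}-y\,y^{\mathsf{T}}\big{)},\qquad b=I_{d},\qquad\text{hence}\qquad c=b^{-1}a=2\,\big{(}I_{d}-y\,y^{\mathsf{T}}\big{)},\] which agrees with (14) evaluated at \(w=0\).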
We are now ready to work on the main goal of this section:

**Lemma 19**.: _For \((w,y)\in\mathbb{B}\times\mathrm{Pol}^{\times}\), we have_ \[\det(P\,\tilde{A}^{*}\,\tilde{A}\,P+I_{(n+1)\,d}-P)=\left(\frac{2}{1-|w|^{2}}\right)^{2\,d}\frac{\det\Bigl{(}\sum_{i=1}^{n}r_{i}\,(I_{d}-y_{i}\,y_{i}^{\mathsf{T}})\Bigr{)}^{2}}{\det\Bigl{(}\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\,(I_{d}-y_{i}y_{i}^{\mathsf{T}})\Bigr{)}}\neq 0.\]

Proof.: We start by recalling from (8) that \[\tilde{A}\coloneqq\tilde{Z}^{-1}\,D\widetilde{\mathrm{op}}(w,y)=\begin{pmatrix}C&I_{nd}\end{pmatrix},\quad\text{where}\quad C\coloneqq\mathrm{vec}(\tilde{c}).\] Thus, we obtain \[\tilde{A}\,P=\begin{pmatrix}C&E\end{pmatrix}\quad\text{and}\quad(\tilde{A}\,P)^{*}=P\,\tilde{A}^{*}=\begin{pmatrix}C^{*}\\ E^{*}\end{pmatrix}.\] This, combined with (13) allows us to compute \[P\,\tilde{A}^{*}\,\tilde{A}\,P+I_{(n+1)\,d}-P=\begin{pmatrix}C^{*}C&C^{*}E\\ E^{*}C&I_{nd}-E+E^{*}E\end{pmatrix}=\begin{pmatrix}C^{*}C&C^{*}E\\ E^{*}C&I_{nd}\end{pmatrix}.\] The determinant can be computed by Schur's formula (9); utilizing (13) once more, we obtain: \[\det\begin{pmatrix}C^{*}C&C^{*}E\\ E^{*}C&I_{nd}\end{pmatrix}=\det(C^{*}C-C^{*}E\,I_{nd}^{-1}\,E^{*}C)\cdot\det(I_{nd})=\det(C^{*}C-C^{*}E\,C).\] Now \(Q=\mathrm{diag}(I_{d}-y_{1}\,y_{1}^{\mathsf{T}},\ldots,I_{d}-y_{n}\,y_{n}^{\mathsf{T}})\) while \(C=\mathrm{vec}(\tilde{c}_{1},\ldots,\tilde{c}_{n})\), so (15) implies that \(Q\,C=C\). Since \(Q\) is symmetric, this implies \(C^{\mathsf{T}}Q=C^{\mathsf{T}}\). Now \(C\) is a map from \(\mathbb{R}^{d}\) with the standard inner product to \((\mathbb{R}^{d})^{n}\) with the inner product \(\langle u,v\rangle_{\varrho}=\sum_{i}\varrho_{i}^{2}\,\langle u_{i},v_{i}\rangle_{\mathbb{R}^{d}}\) (from Section 2). If \(u\in\mathbb{R}^{d}\) and \(v\in(\mathbb{R}^{d})^{n}\), the adjoint \(C^{*}\) is defined by \[\langle C^{*}v,u\rangle_{\mathbb{R}^{d}}=\langle v,Cu\rangle_{\varrho}=\sum_{i=1}^{n}\varrho_{i}^{2}\,\langle v_{i},\tilde{c}_{i}\,u\rangle_{\mathbb{R}^{d}}=\sum_{i=1}^{n}\langle(\varrho_{i}^{2}\,\tilde{c}_{i}^{\mathsf{T}})\,v_{i},u\rangle_{\mathbb{R}^{d}}=\langle C^{\mathsf{T}}R^{2}\,v,u\rangle_{\mathbb{R}^{d}}.\] It follows, substituting the definition of \(E\) from (12) and using \(Q\,C=C\), \(C^{\mathsf{T}}Q=C^{\mathsf{T}}\), and \(C^{*}=C^{\mathsf{T}}R^{2}\), that \[C^{*}C-C^{*}E\,C=C^{*}C-C^{*}Q\,C+C^{*}R^{-2}\,Q\,\mathcal{Q}^{\mathsf{T}}\gamma^{-1}\mathcal{Q}\,Q\,C=(\mathcal{Q}\,C)^{\mathsf{T}}\gamma^{-1}(\mathcal{Q}\,C),\] since \(C^{*}Q\,C=C^{\mathsf{T}}R^{2}\,Q\,C=C^{\mathsf{T}}Q\,R^{2}\,C=C^{*}C\) and \(C^{*}R^{-2}\,Q=C^{\mathsf{T}}Q=C^{\mathsf{T}}\). Now the closure condition \(\sum_{i=1}^{n}r_{i}\,y_{i}=0\) together with (14) yields \[\mathcal{Q}\,C=\sum_{i=1}^{n}r_{i}\,\tilde{c}_{i}=\frac{2}{1-|w|^{2}}\sum_{i=1}^{n}r_{i}\,\big{(}I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\big{)},\] so that \[\det(C^{*}C-C^{*}E\,C)=\frac{\det(\mathcal{Q}\,C)^{2}}{\det(\gamma)}=\left(\frac{2}{1-|w|^{2}}\right)^{2\,d}\frac{\det\Bigl{(}\sum_{i=1}^{n}r_{i}\,(I_{d}-y_{i}\,y_{i}^{\mathsf{T}})\Bigr{)}^{2}}{\det\Bigl{(}\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\,(I_{d}-y_{i}\,y_{i}^{\mathsf{T}})\Bigr{)}}.\] This is nonzero because \(\sum_{i=1}^{n}r_{i}\,(I_{d}-y_{i}\,y_{i}^{\mathsf{T}})\) is positive definite by the same argument we used for \(\gamma\) in the proof of Lemma 15, which completes the proof.

Proof of Theorem 10.: By Lemma 12, \(J\mathrm{op}(w,y)^{2}=\det(A^{*}A)\,\det(Z^{*}Z)\). Combining Lemma 14 and Lemma 19 for \(\det(A^{*}A)\) with Lemma 13 for \(\det(Z^{*}Z)\) and taking square roots yields (5). The sums and the product in (5) can each be accumulated in a single pass over the edges, giving the claimed \(O(n)\) complexity. Since every factor in (5) is nonzero, \(\mathrm{op}\) is a local diffeomorphism; as it is also a bijection by Proposition 7, it is a diffeomorphism, and so is its inverse \(\mathrm{cl}\).

The number \(1-|w|^{2}\) is raised to the power \(n\,(d-1)-d\). For \(n\gg d\) it may become very small even if \(w\) is not very close to the boundary of the disk. This motivates us to choose the weight function \(\chi\colon\mathbb{B}\to\mathbb{R}\) in Proposition 8 as follows: \[\chi(w)\coloneqq\lambda\,2^{d}\,\big{(}1-|w|^{2}\big{)}^{n\,(d-1)-d},\] where \(\lambda\) is a constant chosen so that \(\int_{\mathbb{B}}\chi(w)\,\mathrm{dvol}_{\mathbb{B}}=1\). We obtain the following weights: \[K(w,y)\coloneqq\frac{\chi(w)}{J\mathrm{op}(w,y)}=\lambda\,\frac{\sqrt{\det\!\left(\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\,\big{(}I_{d}-y_{i}\,y_{i}^{\top}\big{)}\right)}}{\det\!\left(\sum_{i=1}^{n}r_{i}\,\big{(}I_{d}-y_{i}\,y_{i}^{\top}\big{)}\right)}\,\Bigg{(}\prod_{i=1}^{n}|w+y_{i}|^{2}\Bigg{)}^{d-1}, \tag{16}\] noting that a direct computation yields \[\lambda=\frac{1}{2^{d}\,\pi^{d/2}}\,\frac{\Gamma((n-1)\,(d-1)+\tfrac{d}{2})}{\Gamma((n-1)\,(d-1))}.\] However, in practice (e.g., when using these weights for sampling as in Corollary 9) it is more convenient to ignore \(\lambda\) entirely, as the factors of \(\lambda\) in the numerator and denominator of (4) cancel.
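In floating point, the weights (16) are cheap to evaluate once \((w,y)=\mathrm{cl}(x)\) is known. The following sketch (ours, for \(d=3\), dropping the constant \(\lambda\) since it cancels in (4)) reuses the hypothetical Mat3, det3, and gamma_matrix helpers sketched earlier; for large \(n\) one would accumulate logarithms instead of the raw product to avoid under- or overflow.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Sampling weight K(w, y) from (16) for d = 3, up to the constant lambda.
// Assumes the Mat3 / det3 / gamma_matrix helpers defined above.
double weight_K(const std::array<double, 3> &w,
                const std::vector<std::array<double, 3>> &y,
                const std::vector<double> &r, const std::vector<double> &rho) {
    Mat3 num{};  // sum_i r_i (I_3 - y_i y_i^T)
    for (std::size_t i = 0; i < y.size(); ++i)
        for (int a = 0; a < 3; ++a)
            for (int b = 0; b < 3; ++b)
                num[a][b] += r[i] * ((a == b ? 1.0 : 0.0) - y[i][a] * y[i][b]);
    double prod = 1.0;  // prod_i |w + y_i|^{2 (d-1)} with d = 3
    for (const auto &yi : y) {
        double s = 0.0;
        for (int k = 0; k < 3; ++k) s += (w[k] + yi[k]) * (w[k] + yi[k]);
        prod *= s * s;
    }
    return std::sqrt(det3(gamma_matrix(y, r, rho))) / det3(num) * prod;
}
```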
## 4 The quotient by \(\mathrm{SO}(d)\)

In many contexts, one is interested in shapes of polygons but not their poses in \(\mathbb{R}^{d}\). Therefore, it is desirable to identify configurations which are related by a rigid motion and study the resulting moduli space of equivalence classes of polygons. This is the point of view taken by mathematicians who study polygons in \(\mathbb{R}^{3}\) via their symplectic structure [20, 23, 24, 28], and by the present authors in our previous papers [3, 8]. We now see how to transfer our work to this moduli space. The diagonal action of the special orthogonal group \(\mathrm{SO}(d)\) on \((\mathbb{R}^{d})^{n}\) is an action by isometries of our metric \(\langle u,v\rangle_{\varrho}=\sum_{i=1}^{n}\varrho_{i}^{2}\,\langle u_{i},v_{i}\rangle\). Since each \(\mathbb{S}\) is invariant under this action, it restricts to an action by isometries on \((\mathrm{Arm},g_{\mathrm{Arm}})\). Since the closure condition \(\sum_{i=1}^{n}r_{i}\,y_{i}=0\) is also \(\mathrm{SO}(d)\)-invariant, the action further restricts to an action by isometries on \((\mathbb{B}\times\mathrm{Pol},\langle\cdot,\cdot\rangle\times g_{\mathrm{Pol}})\). Since \(\mathrm{op}\) is Mobius-equivariant and \(\mathrm{SO}(d)\) is a subgroup of the Mobius group, the diffeomorphism \(\mathrm{op}\colon\mathbb{B}\times\mathrm{Pol}^{\times}\to\mathrm{Arm}^{\times}\) is \(\mathrm{SO}(d)\)-equivariant. These actions are faithful, but they are not free if the \(y_{i}\) lie in some lower-dimensional subspace of \(\mathbb{R}^{d}\). So now we slightly restrict our attention to avoid these troublesome configurations.

**Definition 20**.: We define \(\mathrm{Arm}^{\diamond}\) to be the subset of \(\mathrm{Arm}^{\times}\) where the points \(x_{1},\ldots,x_{n}\) do not lie in any affine hyperplane in \(\mathbb{R}^{d}\). We define \(\mathrm{Pol}^{\diamond}\) to be the subset of \(\mathrm{Pol}^{\times}\) where the \(y_{i}\) do not lie in any linear hyperplane in \(\mathbb{R}^{d}\).

**Lemma 21**.: _The closure map \(\mathrm{cl}\) and the opening map \(\mathrm{op}\) are diffeomorphisms between \(\mathrm{Arm}^{\diamond}\) and \(\mathbb{B}\times\mathrm{Pol}^{\diamond}\). If \(n>d\), then \(\mathrm{Arm}\setminus\mathrm{Arm}^{\diamond}\) and \(\mathrm{Pol}\setminus\mathrm{Pol}^{\diamond}\) are nullsets._

Proof.: Suppose \((w,y)\in\mathbb{B}\times\mathrm{Pol}^{\diamond}\). We claim that \(x=\mathrm{op}(w,y)\in\mathrm{Arm}^{\diamond}\). Suppose not. Then the \(x_{i}\) lie in an affine hyperplane in \(\mathbb{R}^{d}\). Since the \(x_{i}\) also lie in the unit sphere \(\mathbb{S}\), they lie in some \(S^{d-2}\) formed by the intersection of the affine hyperplane with \(\mathbb{S}\). It follows that the \(x_{i}\) and their conformal barycenter \(w\) lie on some unique \(S^{d-1}\subset\mathbb{R}^{d}\), and that this sphere \(S\) intersects the unit sphere at right angles. Now the \(y_{i}\) (and the origin) are the image of the \(x_{i}\) (and \(w\)) under a Mobius transformation. Therefore, they lie in the image of \(S\) under this Mobius transformation. Since this image is either a sphere or a hyperplane, since it meets the unit sphere \(\mathbb{S}\) at right angles, and since it contains the origin, it must be a hyperplane. This contradicts our assumption that \((w,y)\in\mathbb{B}\times\mathrm{Pol}^{\diamond}\). Thus \(\mathrm{op}\) maps \(\mathbb{B}\times\mathrm{Pol}^{\diamond}\) into \(\mathrm{Arm}^{\diamond}\). The argument that \(\mathrm{cl}\) maps \(\mathrm{Arm}^{\diamond}\) into \(\mathbb{B}\times\mathrm{Pol}^{\diamond}\) is quite similar. Suppose \(x\in\mathrm{Arm}^{\diamond}\), but \(\mathrm{cl}(x)\) is not in \(\mathbb{B}\times\mathrm{Pol}^{\diamond}\). Then the \(y_{i}\) all lie in some hyperplane \(H\). Now the opening map \(\mathrm{op}(w,y)\) is a Mobius transformation (of the \(y_{i}\)), so it maps \(H\) to some \(S^{d-1}\) containing \(w\). This sphere also contains the \(x_{i}\).
Since the \(x_{i}\) are also on the unit sphere, they lie in the intersection of two different \((d-1)\)-spheres, which is in turn contained in some affine hyperplane. But this contradicts our assumption that \(x\in\mathrm{Arm}^{\diamond}\). Now \(\mathrm{cl}\) and \(\mathrm{op}\) are diffeomorphisms between the larger sets \(\mathrm{Arm}^{\times}\) and \(\mathbb{B}\times\mathrm{Pol}^{\times}\), so they remain diffeomorphisms on the open subsets \(\mathrm{Arm}^{\diamond}\) and \(\mathbb{B}\times\mathrm{Pol}^{\diamond}\). Now observe that \(\mathrm{Arm}\setminus\mathrm{Arm}^{\diamond}\) is the disjoint union of \(\mathrm{Arm}\setminus\mathrm{Arm}^{\times}\) and \(\mathrm{Arm}^{\times}\setminus\mathrm{Arm}^{\diamond}\). We know that \(\mathrm{Arm}\setminus\mathrm{Arm}^{\times}\) is a nullset (Definition 6). We claim that \(\mathrm{Arm}^{\times}\setminus\mathrm{Arm}^{\diamond}\) is a nullset, too: The probability that the points \(x_{1},\ldots,x_{d}\) span an affine hyperplane in \(\mathbb{R}^{d}\) is one, while the probability that \(x_{d+1}\) lies in the same affine hyperplane is zero. Finally, since the diffeomorphism \(\mathrm{cl}\) maps \(\mathrm{Arm}^{\times}\setminus\mathrm{Arm}^{\diamond}\) to \(\mathrm{Pol}^{\times}\setminus\mathrm{Pol}^{\diamond}\), the latter is also a nullset. It follows as above (again, see Definition 6) that \(\mathrm{Pol}\setminus\mathrm{Pol}^{\diamond}\) is a nullset as well.

Now \(\mathrm{SO}(d)\) acts smoothly, freely, and by isometries on \((\mathrm{Pol}^{\diamond},g_{\varrho,r})\), so this is the total space of a principal bundle \(\mathrm{Pol}^{\diamond}\stackrel{{\pi}}{{\longrightarrow}}\mathrm{Pol}^{\diamond}/\mathrm{SO}(d)\). We will use the notation \(\widehat{\mathrm{Pol}}^{\diamond}\coloneqq\mathrm{Pol}^{\diamond}/\mathrm{SO}(d)\). The quotient space is an open manifold. One natural choice of measure on this space is the pushforward measure \(\pi_{\#}P_{\varrho,r}\) of \(P_{\varrho,r}\) along the quotient map. This has the desirable feature that for any \(\mathrm{SO}(d)\)-invariant integrable \(f\colon\mathrm{Pol}\to\mathbb{R}\) we may unambiguously define \(\hat{f}\colon\widehat{\mathrm{Pol}}\to\mathbb{R}\) such that \(f=\hat{f}\circ\pi\) and get \[\int_{\mathrm{Pol}}f\ \mathrm{d}P_{\varrho,r}=\int_{\widehat{\mathrm{Pol}}}\hat{f}\ \mathrm{d}(\pi_{\#}P_{\varrho,r}). \tag{17}\] However, this is _not_ the usual choice in the literature on polygon spaces. Instead, we let the quotient space \(\widehat{\mathrm{Pol}}^{\diamond}\) inherit the Riemannian quotient metric from \((\mathrm{Pol}^{\diamond},g_{\varrho,r})\) and construct the corresponding Riemannian volume measure \(\widehat{\mathrm{vol}}_{\varrho,r}\). In turn, this volume measure defines a Riemannian probability measure \[\hat{P}_{\varrho,r}\coloneqq\frac{1}{\widehat{\mathrm{vol}}_{\varrho,r}(\widehat{\mathrm{Pol}}^{\diamond})}\widehat{\mathrm{vol}}_{\varrho,r}\] on \(\widehat{\mathrm{Pol}}^{\diamond}\) (where we extend the measure from \(\widehat{\mathrm{Pol}}^{\diamond}\) to \(\widehat{\mathrm{Pol}}\) by zero). The pushforward measure and the metric measure are really different from one another; Fig. 5 illustrates the issue.

**Proposition 22**.: _Suppose that \(\varphi\colon\widehat{\mathrm{Pol}}\to\mathbb{R}\) is an integrable function._
_Then_ \[\int_{\widehat{\mathrm{Pol}}}\varphi(y)\;\mathrm{d}\widehat{\mathrm{vol}}_{\varrho,r}(y)=\int_{\widehat{\mathrm{Pol}}^{\diamond}}\varphi(y)\;\mathrm{d}\widehat{\mathrm{vol}}_{\varrho,r}(y)=\int_{\mathrm{Arm}^{\diamond}}\varphi(\pi(y_{*}(x)))\,\hat{K}(\mathrm{cl}(x))\;\mathrm{d}\mathrm{vol}_{\varrho}(x).\] _Here, if \(\lambda_{1}(y),\ldots,\lambda_{d}(y)\) are the eigenvalues of \(\varSigma(y)\coloneqq\sum_{i=1}^{n}\varrho_{i}^{2}\,y_{i}\,y_{i}^{\mathrm{T}}\), and \(K(w,y)\) are the sampling weights from (16), we define_ \[\hat{K}(w,y)\coloneqq\frac{K(w,y)}{\mathrm{vol}(\mathrm{SO}(d))}\left(\prod_{1\leq k<l\leq d}\frac{\lambda_{k}(y)+\lambda_{l}(y)}{2}\right)^{-1/2}. \tag{18}\]

Proof.: The coarea formula (see [10, p.160] for a formulation in terms of differential forms or [14, Section 3.4.2] for one in terms of Hausdorff and Lebesgue measures) tells us that \[\int_{\mathrm{Pol}^{\diamond}}\varphi(\pi(y))\,J\pi(y)\;\mathrm{d}\mathrm{vol}_{\varrho,r}(y)=\int_{\widehat{\mathrm{Pol}}^{\diamond}}\!\left(\int_{\pi^{-1}(z)}\varphi(\pi(y))\,\mathrm{d}\mathrm{vol}_{\varrho,r}^{\pi^{-1}(z)}(y)\right)\,\mathrm{d}\widehat{\mathrm{vol}}_{\varrho,r}(z),\] where \(J\pi(y)=\sqrt{\det\bigl{(}\mathrm{d}\pi(y)\,\mathrm{d}\pi(y)^{*}\bigr{)}}\). Since \(\mathrm{SO}(d)\) acts freely, each fiber \(\pi^{-1}(z)\) is parametrized by \(\mathrm{SO}(d)\), and the submanifold volume \(\mathrm{d}\mathrm{vol}_{\varrho,r}^{\pi^{-1}(z)}\) on the fiber with respect to the ambient metric \(g_{\varrho,r}\) is also the \(\dim\mathrm{SO}(d)=d(d-1)/2\)-dimensional Hausdorff measure \(\mathcal{H}^{d(d-1)/2}\) with respect to this metric. In the quotient metric, \(\pi\) is a Riemannian submersion, so \(J\pi(y)=1\). Observing that \(\varphi\circ\pi\) is constant on each fiber of \(\pi\), we can simplify the above to \[\int_{\mathrm{Pol}^{\diamond}}\varphi(\pi(y))\;\mathrm{d}\mathrm{vol}_{\varrho,r}(y)=\int_{\widehat{\mathrm{Pol}}^{\diamond}}\varphi(z)\,\mathcal{H}^{d(d-1)/2}(\pi^{-1}(z))\;\mathrm{d}\widehat{\mathrm{vol}}_{\varrho,r}(z).\]

Figure 5: Here the surface of revolution represents \(\mathrm{Pol}^{\diamond}\) and the rotations about the axis represents the action of \(\mathrm{SO}(d)\) on \(\mathrm{Pol}^{\diamond}\) by isometries. The Riemannian quotient space \(\widehat{\mathrm{Pol}}^{\diamond}\) is represented by the meridian curve. We see that the quotient metric (which measures inner products between vectors in the directions orthogonal to the rotations in the metric of \(\mathrm{Pol}\)) is the arclength metric on the meridian as a space curve shown below the surface. The yellow arc \(I\) and orange arc \(J\) on \(\widehat{\mathrm{Pol}}^{\diamond}\) are subsets of equal volume according to the Riemannian probability measure \(\hat{P}_{\varrho,r}\). But they have very different volumes in the pushforward of \(P_{\varrho,r}\) under the quotient map as the corresponding yellow and orange annuli in \(\mathrm{Pol}^{\diamond}\) have very different areas.

An \(\mathrm{SO}(d)\)-invariant function on \(\mathrm{Pol}^{\diamond}\) has a constant value on the fiber over any point in \(\widehat{\mathrm{Pol}}^{\diamond}\). Therefore, it descends to a function on \(\widehat{\mathrm{Pol}}^{\diamond}\) and may be integrated there with respect to either \(\hat{P}_{\varrho,r}\) or the pushforward \(\pi_{\sharp}P_{\varrho,r}\). But we must expect the results to differ.
There is no reason to expect that all \(\operatorname{SO}(d)\) orbits will have the same volume, so we start by fixing some \(y\in\pi^{-1}(z)\) and compute the orbit volume in terms of \(y\). We start by parametrizing the orbit of \(y\) by the smooth map \(f\colon\operatorname{SO}(d)\to\operatorname{Pol}\) given by \(f(Q)=(Q\,y_{1},\dots,Q\,y_{n})=\operatorname{diag}(Q)\,y\). The orbit volume is then computed by the integral \[\int_{\operatorname{SO}(d)}\sqrt{\det(Df(Q)^{*}Df(Q))}\,\operatorname{dvol}(Q).\] The map \(f\) is \(\operatorname{SO}(d)\)-equivariant, i.e., we have \(f(Q\,Q^{\prime})=\operatorname{diag}(Q)\,f(Q^{\prime})\) for all \(Q\), \(Q^{\prime}\in\operatorname{SO}(d)\). Applying the derivative with respect to \(Q^{\prime}\) on both sides, we obtain with the chain rule that \(Df(Q\,Q^{\prime})\,Q=\operatorname{diag}(Q)\,Df(Q^{\prime})\). For \(Q^{\prime}=I_{d}\) this can be rewritten as \(Df(Q)=\operatorname{diag}(Q)\,Df(I_{d})\,Q^{*}\). Thus we have \[\det(Df(Q)^{*}Df(Q)) =\det\Bigl{(}(\operatorname{diag}(Q)\,Df(I_{d})\,Q^{*})^{*}\operatorname{diag}(Q)\,Df(I_{d})\,Q^{*}\Bigr{)}\] \[=\det\Bigl{(}Q^{*}Q\,Df(I_{d})^{*}\operatorname{diag}(Q)^{*}\operatorname{diag}(Q)\,Df(I_{d})\Bigr{)}=\det(Df(I_{d})^{*}Df(I_{d})).\] Here we exploited that \(\operatorname{diag}(Q)\) is an isometry with respect to \(g_{\varrho,r}\) and that \(\operatorname{SO}(d)\) acts on itself isometrically with respect to the Frobenius metric. Therefore, the integrand is constant and it suffices to evaluate it at \(Q=I_{d}\). At \(I_{d}\), the tangent space to \(\operatorname{SO}(d)\) consists of the skew-symmetric matrices. Further, if \(\xi\) and \(\eta\) are \(d\times d\) skew-symmetric matrices, then \[Df(I_{d})\,\xi=\begin{pmatrix}\xi\,y_{1}\\ \,\vdots\\ \xi\,y_{n}\end{pmatrix}\quad\text{and}\quad Df(I_{d})\,\eta=\begin{pmatrix}\eta\,y_{1}\\ \,\vdots\\ \eta\,y_{n}\end{pmatrix}.\] We now observe that (using the cyclic invariance of trace and the skew-symmetry of \(\xi\)), \[\langle\xi,Df(I_{d})^{*}Df(I_{d})\,\eta\rangle_{\operatorname{Frob}}=\langle Df(I_{d})\,\xi,Df(I_{d})\,\eta\rangle_{\varrho}=\sum_{i=1}^{n}\varrho_{i}^{2}\,\langle\xi\,y_{i},\eta\,y_{i}\rangle.\] We now choose \(\eta\) and \(\xi\) in order to calculate entries of the \((d(d-1)/2)\times(d(d-1)/2)\) matrix \(Df(I_{d})^{*}Df(I_{d})\). We first choose an orthonormal basis for \(\mathbb{R}^{d}\) which diagonalizes the symmetric \(d\times d\) matrix \(\varSigma\) and assume the diagonal entries are \(\lambda_{1},\dots,\lambda_{d}\). With respect to this basis for \(\mathbb{R}^{d}\), the \(d\times d\) skew-symmetric matrices have an orthonormal basis given by matrices \[\xi(i,j)_{k,l}=\begin{cases}\frac{1}{\sqrt{2}}&\text{if $(k,l)=(i,j)$,}\\ -\frac{1}{\sqrt{2}}&\text{if $(k,l)=(j,i)$,}\\ 0&\text{otherwise.}\end{cases}\] Note that multiplying by \(\xi(i,j)\) on the right swaps the \(i\)-th and \(j\)-th columns, multiplying the \(j\)-th by \(-1/\sqrt{2}\) and the \(i\)-th by \(+1/\sqrt{2}\). We may then compute \[\left(Df(I_{d})^{*}Df(I_{d})\right)_{(i,j),(k,l)}\coloneqq-\operatorname{tr}\!\left(\xi(i,j)\,\Sigma\,\xi(k,l)\right)=-\operatorname{tr}\!\left(\xi(k,l)\,\xi(i,j)\,\Sigma\right)=\frac{\lambda_{i}+\lambda_{j}}{2}\,\delta_{(i,j),(k,l)}\] as the matrix product \(\xi(k,l)\,\xi(i,j)\) has nonzero diagonal entries if and only if \((i,j)=(k,l)\), in which case it has \(-\frac{1}{2}\) in positions \(k\) and \(l\). Thus \(Df(I_{d})^{*}Df(I_{d})\) is a diagonal matrix with diagonal entries \(\frac{\lambda_{k}+\lambda_{l}}{2}\). It follows that the orbit volume is \[\operatorname{vol}(\operatorname{SO}(d)\,y)=\operatorname{vol}(\operatorname{SO}(d))\,\sqrt{\det(Df(I_{d})^{*}Df(I_{d}))}=\operatorname{vol}(\operatorname{SO}(d))\left(\prod_{1\leq k<l\leq d}\frac{\lambda_{k}+\lambda_{l}}{2}\right)^{1/2}.\qed\]
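We remark that for small \(d\) the product of pairwise eigenvalue sums can be evaluated without an eigensolver, since it is a symmetric function of the \(\lambda_{k}\). For example, for \(d=3\), writing \(e_{1}=\operatorname{tr}(\varSigma)\), \(e_{2}=\tfrac{1}{2}\big{(}\operatorname{tr}(\varSigma)^{2}-\operatorname{tr}(\varSigma^{2})\big{)}\) and \(e_{3}=\det(\varSigma)\), a standard computation (recorded here for convenience; it is not needed for the proofs) gives \[\prod_{1\leq k<l\leq 3}(\lambda_{k}+\lambda_{l})=\prod_{k=1}^{3}(e_{1}-\lambda_{k})=e_{1}^{3}-e_{1}\cdot e_{1}^{2}+e_{2}\,e_{1}-e_{3}=e_{1}\,e_{2}-e_{3},\] obtained by evaluating the characteristic polynomial \(t^{3}-e_{1}t^{2}+e_{2}t-e_{3}\) of \(\varSigma\) at \(t=e_{1}\), using \(e_{1}-\lambda_{k}=\lambda_{l}+\lambda_{m}\) for \(\{k,l,m\}=\{1,2,3\}\).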
As before, we can immediately write down the formula for Monte Carlo sampling:

**Corollary 23**.: _If \(f\colon\widehat{\operatorname{Pol}}\to\mathbb{R}\) is integrable and if \(x^{(j)}\) is a sequence of independent samples drawn from \(P_{\varrho}\) on \(\operatorname{Arm}\), then_ \[\frac{\sum_{j=1}^{N}f(\pi(y_{*}(x^{(j)})))\,\hat{K}(\operatorname{cl}(x^{(j)}))}{\sum_{j=1}^{N}\hat{K}(\operatorname{cl}(x^{(j)}))}\to\int_{\widehat{\operatorname{Pol}}}f\,\operatorname{d}\!\hat{P}_{\varrho,r}\quad\text{almost surely as $N\to\infty$.} \tag{19}\]

We note that as in (16), we may ignore constant factors in the definition of the sampling weights \(\hat{K}\) when using (19), as they will cancel in the numerator and denominator. Therefore, we can ignore \(\operatorname{vol}(\operatorname{SO}(d))\) and the factors of \(2\) in the denominator, effectively using \[\hat{K}(w,y)=\frac{\sqrt{\det\!\left(\sum_{i=1}^{n}r_{i}^{2}/\varrho_{i}^{2}\left(I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\right)\right)}}{\det\!\left(\sum_{i=1}^{n}r_{i}\left(I_{d}-y_{i}\,y_{i}^{\mathsf{T}}\right)\right)}\left(\prod_{i=1}^{n}|w+y_{i}|^{2}\right)^{d-1}\left(\prod_{1\leq k<l\leq d}(\lambda_{k}(y)+\lambda_{l}(y))\right)^{-1/2} \tag{20}\] as sampling weights for \(\hat{P}_{\varrho,r}\) on \(\widehat{\operatorname{Pol}}\). We note that if \(n<d\), the polygon always lies in a lower-dimensional subspace of \(\mathbb{R}^{d}\) and the group \(\operatorname{SO}(d)\) no longer acts freely. One can apply essentially the same techniques as above, and obtain a similar formula containing only the \(\frac{1}{2}\left(\lambda_{i}+\lambda_{j}\right)\) for which \(\lambda_{i}+\lambda_{j}>0\). As this case is not of much relevance to Monte-Carlo sampling, we leave the details to the interested reader.

## 5 Experiments

We now give the results of some sample computations which show our method at work. All of these computations were made using our open-source implementation of the sampling algorithm [4; 5]. We have mentioned above that the symplectic volume obtained by viewing the quotient space \(\widehat{\operatorname{Pol}}\) of polygons in \(\mathbb{R}^{3}\) as the symplectic reduction of \(\operatorname{Arm}\) by the diagonal action of \(\operatorname{SO}(3)\) at the zero fiber (as in [24]) corresponds in our model to setting \(\varrho_{i}=\sqrt{r_{i}}\). In Fig. 6, we test this statement explicitly by computing the distribution of the chord skipping the first three edges of a hexagon with unit sides and a hexagon with sidelengths \((1,1/2,3/2,1,1,1)\) in the \(\hat{P}_{\varrho,r}\) measure with \(\varrho=\sqrt{r}\). The distribution of the length of the chord skipping \(k\) edges in an \(n\)-gon under \(\hat{P}_{\sqrt{r},r}\) in \(\mathbb{R}^{3}\) may be computed by conditioning randomly selected open polygons with \(k\) and \(n-k\) edges on having the same failure to close vector (cf. [31, 34]). These distributions are complicated piecewise-polynomial functions, but they have been known for some time [13, Section 5].
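In our implementation such distributions are estimated exactly as in Corollary 23: sample, close, weight, and histogram. Computing the observable itself is a single pass over the edges, since the chord joining vertex \(0\) to vertex \(k\) has length \(\big{|}\sum_{i=1}^{k}r_{i}\,y_{i}\big{|}\). A sketch in the style of the earlier ones (names ours):

```cpp
#include <array>
#include <cmath>
#include <vector>

// Length of the chord joining vertex 0 to vertex k of the closed polygon
// with edge directions y_i and edgelengths r_i, i.e. |sum_{i=1}^{k} r_i y_i|.
double chord_length(const std::vector<std::array<double, 3>> &y,
                    const std::vector<double> &r, std::size_t k) {
    std::array<double, 3> c{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < k; ++i)
        for (int a = 0; a < 3; ++a) c[a] += r[i] * y[i][a];
    return std::sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
}
```

Weighted histograms of such values, with the weights \(\hat{K}\) of (20), produce the comparisons below.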
Using this conditioning method, we computed the distributions \(f_{\text{eq}}\) and \(f_{\text{neq}}\) of the length of the chord skipping three edges in an equilateral hexagon and a hexagon with edgelengths \(r=(1,1/2,3/2,1,1,1)\) to be \[f_{\text{eq}}(\ell)=\begin{cases}\ell^{2},&\text{if }0\leq\ell\leq 1,\\ \frac{(\ell-3)^{2}}{4},&\text{if }1\leq\ell\leq 3,\\ 0,&\text{otherwise},\end{cases}\qquad\text{and}\qquad f_{\text{neq}}(\ell)= \begin{cases}\frac{4\ell^{2}}{5}&\text{if }0<\ell\leq 1,\\ \frac{2(3-\ell)}{5}&\text{if }1\leq\ell\leq 2,\\ \frac{2(3-\ell)^{2}}{5}&\text{if }2\leq\ell\leq 3,\\ 0,&\text{otherwise}.\end{cases} \tag{21}\]

Figure 6: On the left hand side, we see the theoretical pdf \(f_{\text{eq}}\) for the length of a chord skipping three edges in an equilateral hexagon from the left of (21), plotted with a histogram of 1 million samples from \(\hat{P}_{\sqrt{r},r}\) weighted by the sampling weights in (20). On the right hand side, we see the theoretical pdf \(f_{\text{neq}}\) for the length of the chord skipping the first three edges in a hexagon with sidelengths \(r=(1,1/2,3/2,1,1,1)\) from the right of (21), also plotted with a histogram of 1 million samples from \(\hat{P}_{\sqrt{r},r}\) weighted by the sampling weights in (20). We see that we resolve the peak clearly in both cases. We note that we also tested the results against a variant of the moment polytope sampling method of [3], confirming the result both times.

Next, we illustrate the difference between \(P_{\varrho,r}\) and \(\hat{P}_{\varrho,r}\) for equilateral tetragons in \(\mathbb{R}^{3}\) by considering the distribution of the length of the chord joining vertices separated by two edges. The results are shown in Fig. 7.

Figure 7: A detailed calculation reveals the probability distribution function of the length \(\ell\) of the chord joining vertices separated by two edges of an equilateral tetragon in \(\mathbb{R}^{3}\) in \(P_{\varrho,r}\) to be proportional to \(8\,\ell\,\sqrt{4-\ell^{2}}\,E\left(-(\ell^{2}-4)^{2}/(16\,\ell^{2})\right)\) where \(E\) is the complete elliptic integral of the second kind [11, (19.2.8)]. A histogram of 1 million weighted samples computed using the sampling weights in (16) is plotted along with this function at left. The probability distribution of the same length in \(\hat{P}_{\varrho,r}\) is known to be \(1/2\). At right, we see this function plotted along with a histogram of 1 million weighted samples computed using the sampling weights in (20). We see that in both cases agreement between theory and computation is very good, and note as well that the results are quite different.

Lastly, we present a performance comparison for our algorithm versus the Action-Angle Method (AAM) [3] and the Progressive Action-Angle Method (PAAM) [8] for equilateral \(n\)-gons in \(\mathbb{R}^{3}\) with respect to \(\hat{P}_{\varrho,r}\). We are comparing reweighted sampling with direct sampling, so it would be unfair to measure performance by the number of sampled polygons per second. Instead, we use each sampler to estimate the expected (squared) radius of gyration \[R^{2}(v)=\frac{1}{2n^{2}}\sum_{i,j=1}^{n}|v_{i}-v_{j}|^{2}=\frac{1}{n}\sum_{i=1}^{n}|v_{i}-\bar{v}|^{2},\] where \(v_{1},\ldots,v_{n}\) are the vertex positions of the polygon and \(\bar{v}\) is their mean. We use standard techniques to estimate confidence intervals and stop when the radius of the 99% confidence interval is less than \(0.1\%\) of the sample mean. In Fig. 8 we report the number of samples required as well as timings. For this integrand, CoBarS requires (asymptotically) about 25% more samples than AAM/PAAM. However, CoBarS has time complexity \(O(n)\), while AAM requires \(O(n^{2.5})\) time and PAAM requires \(O(n^{2})\) time. Therefore, the small number of additional samples is quickly amortized as \(n\) increases, with crossover around \(n=50\).

## 6 Conclusion

We have given a new algorithm for sampling configurations of closed polygons with arbitrary prescribed edgelengths in any dimension. Our method can sample the same symplectic volume on \(\widehat{\text{Pol}}\) for \(d=3\) as the Progressive Action-Angle method [8]. However, it is much more general, allowing us to construct samples with arbitrary edgelengths, in other dimensions, and with respect to the measure \(P_{\varrho,r}\) as well as the measure \(\hat{P}_{\varrho,r}\) on the quotient.
We provide an open-source implementation of our algorithm: see [4] for a parallel, header-only implementation in C++; and see [5] for a _Mathematica_ interface. Our algorithm runs in \(O(n)\) time as opposed to the \(O(n^{2})\) complexity of the Progressive Action-Angle method. Though time-per-sample can be misleading for reweighted samplers, our performance comparisons show that the new sampler is faster at estimating means to fixed confidence intervals. We now suggest some avenues for further investigation. Many authors have studied the homology of the space of planar polygons with fixed edgelengths [18; 22]. It would be interesting to combine our sampling algorithm with tools from topological data analysis to find cohomology groups for polygon spaces computationally. This could give new insight into the remaining open questions regarding the cohomology rings of polygon spaces in higher dimensions as well, as in [17]. The volumes of polygon spaces in any dimension can (in principle) be computed explicitly by integrating the reweighting factors of (16) and (20) over Arm. Very little appears to be known about these volumes. In 2d, Kamiyama [21] has given computational results on the volume of the space of polygons with one long edge for \(n=4,5,6\). In 3d, Khoi [25] used Witten's formula to compute symplectic volumes for all polygon spaces. Khoi proved that among all \(n\)-gons with the same total edgelength in \(\mathbb{R}^{3}\), the equilateral \(n\)-gons are the most flexible in the sense that their configuration space has the largest symplectic volume. These methods are deeply symplectic and so tied to the three-dimensional case. But it is very natural to ask: are the equilateral \(n\)-gons the most flexible in \(\mathbb{R}^{d}\)? Our algorithm provides a powerful toolkit for this kind of investigation.
2303.10996
An analysis of $\mathbb{P}$-invariance and dynamical compensation properties from a control perspective
Dynamical compensation (DC) provides robustness to parameter fluctuations. As an example, DC enables control of the functional mass of endocrine or neuronal tissue essential for controlling blood glucose by insulin through a nonlinear feedback loop. Researchers have shown that DC is related to structural unidentifiability and the $\mathbb{P}$-invariance property, and that the $\mathbb{P}$-invariance property is a sufficient and necessary condition for the DC property. In this article, we discuss DC and $\mathbb{P}$-invariance from an adaptive control perspective. An adaptive controller is a self-tuning controller used to compensate for changes in a dynamical system. To design an adaptive controller with the DC property, it is easier to start with a two-dimensional dynamical model. We introduce a simplified system of ordinary differential equations (ODEs) with the DC property and extend it to a general form. The value of the ideal adaptive control lies in developing methods to synthesize DC to variations in multiple parameters. Then we investigate the stability of the system with time-varying input and disturbance signals, with a focus on the system's $\mathbb{P}$-invariance properties. This study provides phase portraits and step-like response graphs to visualize the system's behavior and stability properties.
Akram Ashyani, Yu-Heng Wu, Huan-Wei Hsu, Torbjörn E. M. Nordling
2023-03-20T10:23:43Z
http://arxiv.org/abs/2303.10996v2
An analysis of \(\mathbb{P}\)-invariance and dynamical compensation properties from a control perspective

###### Abstract

Dynamical compensation (DC) provides robustness to parameter fluctuations. As an example, DC enables control of the functional mass of endocrine or neuronal tissue essential for controlling blood glucose by insulin through a nonlinear feedback loop. Researchers have shown that DC is related to structural unidentifiability and the \(\mathbb{P}\)-invariance property, and that the \(\mathbb{P}\)-invariance property is a sufficient and necessary condition for the DC property. In this article, we discuss DC and \(\mathbb{P}\)-invariance from an adaptive control perspective. An adaptive controller is a self-tuning controller used to compensate for changes in a dynamical system. To design an adaptive controller with the DC property, it is easier to start with a two-dimensional dynamical model. We introduce a simplified system of ordinary differential equations (ODEs) with the DC property and extend it to a general form. The value of the ideal adaptive control lies in developing methods to synthesize DC to variations in multiple parameters. Then we investigate the stability of the system with time-varying input and disturbance signals, with a focus on the system's \(\mathbb{P}\)-invariance properties. This study provides phase portraits and step-like response graphs to visualize the system's behavior and stability properties.

**Keywords** Dynamical compensation property; \(\mathbb{P}\)-invariance property; ordinary differential equations; adaptive proportional-integral feedback

**Mathematics Subject Classification 2010** 93B11, 93B52

## 1 Introduction

Dynamical compensation (DC) implies that the output of a system does not depend on a parameter for any input [1]. For instance, in glucose homeostasis controlled by insulin, despite parameter variations, the glucose response remains identical. This definition of the DC property is a sufficient condition and implies that the parameter is structurally unidentifiable [2; 3; 4]. In 2017, a necessary and sufficient condition for the DC property was introduced using equivariances and partial differential equations, denoted as the \(\mathbb{P}\)-invariance property [5]. The \(\mathbb{P}\)-invariance property related to a parameter indicates that changing the parameter does not alter the system's output behavior, which is useful in biological and medical models. This property is especially advantageous: when a change in a parameter has no effect on the output, the system's behavior can be predicted. Robustness, which refers to a system's ability to handle fluctuations, is critical in dynamical systems. Several studies on adaptation and homeostasis have demonstrated the robustness of biological systems, such as the robustness of bacterial chemotaxis [9; 10]. The application of the DC and \(\mathbb{P}\)-invariance properties is also beneficial in epidemiological models [11; 12]. Therefore, the DC property may be included in future robustness research. Karin et al. used the glucose homeostasis model to discuss the robustness and DC property of homeostasis [1]. Several mathematical models based on systems of differential equations have been developed to comprehensively analyze biological observations and identify all possible connections [6; 7; 8]. However, it is often more convenient to work with simpler models with fewer dimensions, as they are easier to interpret and analyze. In this paper, we aim to simplify the original model in Karin et al.
and include another feedback mechanism to derive an extended model in section 2. We check the system's stability in section 3, because the system must be stable before the DC and \(\mathbb{P}\)-invariance properties can be checked. We use the phase portrait approach to verify the system's stability and obtain results, for the preferred stable situations, with which to compare the DC and \(\mathbb{P}\)-invariance properties. Finally, in the numerical simulation in section 4, we consider situations in which the system is stable at the desired equilibrium points and demonstrate the impact of adaptive control and \(\mathbb{P}\)-invariance when the system is perturbed.

## 2 Mathematical model

In our study, as a starting point, we used the hormonal circuit reactions model stated in Karin et al. [1]: \[\frac{dy}{dt} =u_{0}+u(t)-sxy, \tag{1a}\] \[\frac{dx}{dt} =pzy-x, \tag{1b}\] \[\frac{dz}{dt} =z(y-y_{0}), \tag{1c}\] where \(s\) and \(p\) are the feedback gains of \(x\) and \(z\), respectively. The output variable, \(y\), is a regulated variable that forms a feedback loop with \(x\) and \(z\). The regulated variable \(y\) controls the functional mass \(z\) of the tissue which secretes hormone \(x\) in this circuit. The aim is to first simplify the model 1, containing Eqs. 1a-1c, and then control the system with adaptive proportional-integral feedback so that it has the \(\mathbb{P}\)-invariance property. We also compared the differences between the DC and \(\mathbb{P}\)-invariance properties. We simplified the model 1 as \[\frac{dy}{dt} =u_{0}+u(t)-szy, \tag{2a}\] \[\frac{dz}{dt} =z(y-y_{0}), \tag{2b}\] where \(z\) is the feedback state, and \(y\) is the output of the system. Our expectation is that the positive constant \(s\) has the DC property, meaning that the output \(y\) is invariant to changes of the parameter \(s\). Hence we introduce \(\tilde{z}=sz\) and substitute \(z\) in Eq. 2a and Eq. 2b with \(\tilde{z}\), resulting in \[\frac{dy}{dt} =u_{0}+u(t)-\tilde{z}y, \tag{3a}\] \[\frac{d\tilde{z}}{dt} =\tilde{z}(y-y_{0}). \tag{3b}\] The above equations show that the output response \(y\) remains the same when the value of \(s\) changes. By extending our simplified model with a DC property in the parameter \(s\), we created an adaptive proportional-integral feedback model \[\frac{dy}{dt} =by(t)+d(t)+sz(t)\big{(}lr(t)-y(t)\big{)}, \tag{4a}\] \[\frac{dz}{dt} =-cz(t)\big{(}r(t)-y(t)\big{)}. \tag{4b}\] The system can be viewed as an open-loop exponential growth system \(\frac{dy}{dt}=by(t)\), where \(d(t)\) and \(r(t)\) represent the disturbance and reference input, respectively. The error terms are \(r(t)-y(t)\) and \(lr(t)-y(t)\), and the adaptive proportional-integral feedback is \(sz(t)\big{(}lr(t)-y(t)\big{)}\), where \(sz(t)\) is the adaptive proportional-integral gain. In control theory, a reference input refers to an input signal that guides the system response. Typically, the goal is to make the response \(y(t)\) track the reference input \(r(t)\), such that the error term is zero (\(r(t)-y(t)=0\)) at the equilibrium point. Furthermore, since we consider this equation in the context of biological phenomena, all parameters are assumed positive. This implies that \(b\), \(s\), \(l\), and \(c\) are all positive, and for every \(t>0\), all \(y(t)\), \(z(t)\), and \(r(t)\) are positive. The block diagram of the adaptive proportional-integral system is illustrated in Fig. 1.

Figure 1: Block diagram showing the adaptive proportional-integral feedback \(sz(t)\big{(}lr(t)-y(t)\big{)}\), where \(sz(t)\) is the adaptive proportional-integral gain, with the two error terms \(r(t)-y(t)\) and \(lr(t)-y(t)\). The term \(d(t)\) represents the disturbance.
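Before analyzing the model formally, a quick numerical illustration of the intended DC behavior: the sketch below integrates Eqs. 4a and 4b with scipy for two values of \(s\), starting \(z(0)\) at its \(s\)-dependent equilibrium value (derived as \(E_{2}\) in Eq. 6 below) so that only the effect of \(s\) on the output is compared. The constants follow (47) in section 4, but the script itself is our own minimal sketch, not the exact simulation protocol used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

b, c, l, d0, r0 = 0.3, 2.0, 0.7, 0.01, 11.0       # constants from (47)
r = lambda t: r0 + 5.0 * (t > 50.0)               # step-like reference input
d = lambda t: d0                                  # constant disturbance

def rhs(t, x, s):
    y, z = x
    return [b * y + d(t) + s * z * (l * r(t) - y),  # Eq. 4a
            -c * z * (r(t) - y)]                    # Eq. 4b

outputs = []
for s in (0.25, 1.5):
    z0 = (d0 + b * r0) / (s * r0 * (1.0 - l))     # z-coordinate of E2, Eq. 6
    sol = solve_ivp(rhs, (0.0, 200.0), [r0, z0], args=(s,),
                    max_step=0.05, dense_output=True)
    outputs.append(sol.sol(np.linspace(0.0, 200.0, 2001))[0])

print(np.max(np.abs(outputs[0] - outputs[1])))    # ~0: y(t) is invariant to s
```

With the step in \(r(t)\) at \(t=50\), both runs produce the same output trajectory up to integration tolerance, previewing the \(\mathbb{P}\)-invariance in \(s\) established below.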
To verify the DC property of our model, the system should be at an equilibrium point before being perturbed by any input. When a system is at an equilibrium point, its value does not change with time. We then triggered the system with a step-like input \(r(t)\) to plot the response \(y(t)\). We adjust the value of each parameter in Eqs. 4a and 4b to observe how they affect the system. The stability region is discovered by drawing the phase portrait.

## 3 Results

Here, we begin by checking the stability of the system; then we compare the differences between the \(\mathbb{P}\)-invariance and DC properties. Finally, we provide a numerical example to illustrate the result.

### Phase portrait and stability

Our goal is to discover the region of attraction by drawing the phase portrait. By setting the derivative terms in Eqs. 4a and 4b to zero, two equilibrium points can be obtained: \[E_{1}=(y_{1},z_{1})=\big{(}-\frac{d(t)}{b},0\big{)}, \tag{5}\] \[E_{2}=(y_{2},z_{2})=\Big{(}r(t),\frac{d(t)+br(t)}{sr(t)(1-l)}\Big{)}. \tag{6}\] Under the assumption that all parameters are non-negative and the signals \(d(t)\) and \(r(t)\) are positive at some timepoint(s) and non-negative at all other timepoints, we note the following: Since \(y_{1}\) is negative in the equilibrium point \(E_{1}\), it is a biologically infeasible state of the system. If \(0<l\leq 1\), then both \(z_{2}\) and \(y_{2}\) are non-negative, making \(E_{2}\) the equilibrium point of interest. To ensure that \(z_{2}\) remains finite, we first assume \(0<l<1\). The local stability of a system can be analyzed by calculating the eigenvalues of the matrix of partial derivatives at the equilibrium points, known as the Jacobian matrix. The matrix of partial derivatives for system 4 and its eigenvalues are shown below. \[\mathbf{J}(y,z)=\begin{bmatrix}b-sz(t)&s\big{(}lr(t)-y(t)\big{)}\\ cz(t)&-c\big{(}r(t)-y(t)\big{)}\end{bmatrix}, \tag{7}\] \[\lambda(y,z)=\frac{1}{2}\Big{(}b-sz(t)-c\big{(}r(t)-y(t)\big{)} \pm\sqrt{\Big{(}b-sz(t)-c\big{(}r(t)-y(t)\big{)}\Big{)}^{2}-4c\Big{(}z(t) \big{(}sr(t)(1-l)\big{)}-b\big{(}r(t)-y(t)\big{)}\Big{)}}\Big{)}. \tag{8}\] The Jacobian matrix is kept in symbolic form to make it easier to calculate the eigenvalues when analyzing the local stability of each equilibrium point. **1) Local stability of \(E_{1}\):** To investigate the local stability around \(E_{1}\), we computed two eigenvalues. \[\lambda_{1}(y_{1},z_{1})=b,\ \ \ \ \lambda_{2}(y_{1},z_{1})=-c\big{(}d(t)+br(t) \big{)}/b. \tag{9}\] As \(b>0\) and \(-c\big{(}d(t)+br(t)\big{)}/b<0\), this equilibrium point is a saddle point. **2) Local stability of \(E_{2}\):** For the equilibrium point \(E_{2}\) the eigenvalues are \[\lambda_{1}=\frac{\tau+\sqrt{\tau^{2}-4\delta}}{2},\ \ \ \ \lambda_{2}=\frac{\tau-\sqrt{\tau^{2}-4\delta}}{2}, \tag{10}\] where \[\tau=\text{trace}\big{(}J(y_{2},z_{2})\big{)}=\frac{d(t)+blr(t)}{ r(t)(l-1)}, \tag{11}\] \[\delta=\det\big{(}J(y_{2},z_{2})\big{)}=c\big{(}d(t)+br(t)\big{)}. \tag{12}\] Three situations can happen:

1. \(\tau^{2}-4\delta=0\),
2. \(\tau^{2}-4\delta<0\),
3. \(\tau^{2}-4\delta>0\).

In both (1) and (2), stability depends on \(\tau\). Hence, if \(\tau<0\), then \(E_{2}\) is stable.
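The trace and determinant expressions in (11) and (12) are straightforward to confirm symbolically; below is a minimal sympy sketch of our own (treating \(r\) and \(d\) as constant positive symbols for the purpose of the check):

```python
import sympy as sp

y, z, b, c, s, l, r, d = sp.symbols('y z b c s l r d', positive=True)

f = sp.Matrix([b*y + d + s*z*(l*r - y),    # Eq. 4a
               -c*z*(r - y)])              # Eq. 4b
J = f.jacobian(sp.Matrix([y, z]))

E2 = {y: r, z: (d + b*r) / (s*r*(1 - l))}  # equilibrium point (6)
tau = sp.simplify(J.subs(E2).trace())
delta = sp.simplify(J.subs(E2).det())
print(sp.factor(tau))    # equivalent to (d + b*l*r)/(r*(l - 1)), i.e. (11)
print(sp.factor(delta))  # c*(d + b*r), i.e. (12)
```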
Based on the assumption that parameters and variables are positive to be meaningful in biology and \(l<1\), we have \(\tau<0\), which means that \(E_{2}\) is a stable equilibrium point. In situation (3), as \(\delta>0\), it follows that \(|\tau|>\sqrt{\tau^{2}-4\delta}\). Hence, if \(\tau<0\), then \(E_{2}\) is stable. Again, based on the assumption of having meaningful parameters at the equilibrium points and \(l<1\), the equilibrium \(E_{2}\) is stable. These findings demonstrate that as long as \(E_{2}\) is meaningful in biology, it is a globally stable equilibrium point.

### Influence of the adaptive controller on stability

Our aim is to investigate the influence of the adaptive controller term on the stability of the system. Given a system \[\dot{x}=f(x(t),u(t),p),\;y=g(x(t),u(t),p),\;x(0)=\gamma_{p}; \tag{13}\] if there exists an equivariance \[f(\eta_{p}(x),u,p) =(\eta_{p})_{*}(x)f(x,u), \tag{14a}\] \[g(\eta_{p}(x),u,p) =g(x,u), \tag{14b}\] \[\eta_{p}(\gamma) =\gamma_{p}, \tag{14c}\] where \(\eta_{*}\) denotes the Jacobian matrix of the transformation \(\eta\), the system has the \(\mathbb{P}\)-invariance property [5]. By verifying the invariance of the system 4, containing Eqs. 4a-4b, we were able to discover the parameters that lead to the DC property. We also demonstrated the differences between the definitions of \(\mathbb{P}\)-invariance and the DC property. Based on the definition of the \(\mathbb{P}\)-invariance property and its relationship with the DC property, the DC property in the system 4 can be viewed as an adaptive control strategy.

#### 3.2.1 Verification of the \(\mathbb{P}\)-invariance property

We verify that the system is \(\mathbb{P}\)-invariant with respect to variation of \(s\). In order to verify that the system 4 has the \(\mathbb{P}\)-invariance property, we introduced \(x_{1}(t)\) and \(x_{2}(t)\) as two state variables and \(y(t)\) as the output variable of the system. For simplicity, we wrote the system 4 in \(x_{1}\), \(x_{2}\), and \(y\) as \[\dot{x_{1}} =-cx_{1}\big{(}r(t)-x_{2}\big{)}, \tag{15a}\] \[\dot{x_{2}} =bx_{2}+d(t)+sx_{1}\big{(}lr(t)-x_{2}\big{)}, \tag{15b}\] \[y =x_{2}. \tag{15c}\] The notation here is selected to be identical to the one used by [5]. We considered the possible equivariance \(\eta_{p}(x_{1},x_{2})=\big{(}\alpha_{p}(x_{1},x_{2}),\beta_{p}(x_{1},x_{2}) \big{)}\). In this case, the condition \(g\big{(}\eta_{p}(x),u,p\big{)}=g(x,u)\) means \(\beta_{p}(x_{1},x_{2})=x_{2}\). Therefore, we have \(\eta_{p}(x_{1},x_{2})=\big{(}\alpha_{p}(x_{1},x_{2}),x_{2}\big{)}\). Hence, \[(\eta_{p})_{*}(x_{1},x_{2})=\begin{bmatrix}\frac{\partial\alpha_{p}}{\partial x _{1}}(x_{1},x_{2})&\frac{\partial\alpha_{p}}{\partial x_{2}}(x_{1},x_{2})\\ \frac{\partial x_{2}}{\partial x_{1}}&\frac{\partial x_{2}}{\partial x_{2}} \end{bmatrix}=\begin{bmatrix}\frac{\partial\alpha_{p}}{\partial x_{1}}(x_{1},x_{2})&\frac{\partial\alpha_{p}}{\partial x_{2}}(x_{1},x_{2})\\ 0&1\end{bmatrix}. \tag{16}\] As a result of equation 14a, for the parameter \(s\) our aim is to prove \(f(\eta_{s}(x),u,s)=(\eta_{s})_{*}(x)f(x,u)\). It means: \[\begin{bmatrix}-c\alpha_{s}(x_{1},x_{2})\big{(}r(t)-x_{2}\big{)}\\ bx_{2}+d(t)+s\alpha_{s}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}\end{bmatrix}= \begin{bmatrix}\frac{\partial\alpha_{s}}{\partial x_{1}}(x_{1},x_{2})&\frac{ \partial\alpha_{s}}{\partial x_{2}}(x_{1},x_{2})\\ 0&1\end{bmatrix}\begin{bmatrix}-cx_{1}\big{(}r(t)-x_{2}\big{)}\\ bx_{2}+d(t)+x_{1}\big{(}lr(t)-x_{2}\big{)}\end{bmatrix}.
\tag{17}\] Hence: \[-c\alpha_{s}(x_{1},x_{2})\big{(}r(t)-x_{2}\big{)}=\frac{\partial \alpha_{s}(x_{1},x_{2})}{\partial x_{1}}\Big{(}-cx_{1}\big{(}r(t)-x_{2}\big{)}\Big{)} +\frac{\partial\alpha_{s}(x_{1},x_{2})}{\partial x_{2}}\Big{(}bx_{2}+d(t)+x_{1 }\big{(}lr(t)-x_{2}\big{)}\Big{)}, \tag{18}\] \[bx_{2}+d(t)+s\alpha_{s}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}=bx_ {2}+d(t)+x_{1}\big{(}lr(t)-x_{2}\big{)}. \tag{19}\] By comparing the coefficients in Eq. 18 we have \[\frac{\partial\alpha_{s}(x_{1},x_{2})}{\partial x_{1}}=\frac{ \alpha_{s}(x_{1},x_{2})}{x_{1}}, \tag{20}\] \[\frac{\partial\alpha_{s}(x_{1},x_{2})}{\partial x_{2}}=0. \tag{21}\] From Eq. 19 we attained \[s\alpha_{s}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}=x_{1}\big{(}lr(t)-x_{2}\big{)}. \tag{22}\] If \(lr(t)-x_{2}\neq 0\), it means \[\alpha_{s}(x_{1},x_{2})=\frac{x_{1}}{s}. \tag{23}\] Therefore, the equivariance with \(\alpha_{s}(x_{1},x_{2})=x_{1}/s\) achieves the required transformation of the system, and the system is \(\mathbb{P}\)-invariant with respect to the parameter \(s\). Next, we investigated whether the parameter \(b\) has the \(\mathbb{P}\)-invariance property, meaning that changing \(b\) will not influence the behavior of \(y(t)\). As a result of equation 14a, for the parameter \(b\) our aim is to prove \(f(\eta_{b}(x),u,b)=(\eta_{b})_{*}(x)f(x,u)\). It means: \[\begin{bmatrix}-c\alpha_{b}(x_{1},x_{2})\big{(}r(t)-x_{2}\big{)}\\ bx_{2}+d(t)+s\alpha_{b}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}\end{bmatrix}= \begin{bmatrix}\frac{\partial\alpha_{b}}{\partial x_{1}}(x_{1},x_{2})&\frac {\partial\alpha_{b}}{\partial x_{2}}(x_{1},x_{2})\\ 0&1\end{bmatrix}\begin{bmatrix}-cx_{1}\big{(}r(t)-x_{2}\big{)}\\ x_{2}+d(t)+sx_{1}\big{(}lr(t)-x_{2}\big{)}\end{bmatrix}. \tag{24}\] Thus, it is essential to solve \[-c\alpha_{b}(x_{1},x_{2})\big{(}r(t)-x_{2}\big{)}=\frac{\partial \alpha_{b}(x_{1},x_{2})}{\partial x_{1}}\Big{(}-cx_{1}\big{(}r(t)-x_{2}\big{)} \Big{)}+\frac{\partial\alpha_{b}(x_{1},x_{2})}{\partial x_{2}}\Big{(}x_{2}+d( t)+sx_{1}\big{(}lr(t)-x_{2}\big{)}\Big{)}, \tag{25}\] \[bx_{2}+d(t)+s\alpha_{b}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}=x_ {2}+d(t)+sx_{1}\big{(}lr(t)-x_{2}\big{)}. \tag{26}\] By comparing the coefficients in Eq. 25, we have \[\frac{\partial\alpha_{b}(x_{1},x_{2})}{\partial x_{1}}=\frac{ \alpha_{b}(x_{1},x_{2})}{x_{1}}, \tag{27}\] \[\frac{\partial\alpha_{b}(x_{1},x_{2})}{\partial x_{2}}=0. \tag{28}\] From Eq. 26 we have \[bx_{2}+s\alpha_{b}(x_{1},x_{2})\big{(}lr(t)-x_{2}\big{)}=x_{2}+ sx_{1}\big{(}lr(t)-x_{2}\big{)}. \tag{29}\] If \(lr(t)-x_{2}\neq 0\), it means \[\alpha_{b}(x_{1},x_{2})=\frac{x_{2}+sx_{1}\big{(}lr(t)-x_{2}\big{)} -bx_{2}}{s\big{(}lr(t)-x_{2}\big{)}}, \tag{30}\] which yields \[\frac{\partial\alpha_{b}(x_{1},x_{2})}{\partial x_{1}}=1, \tag{31}\] \[\frac{\partial\alpha_{b}(x_{1},x_{2})}{\partial x_{2}}=\frac{(1- b)\big{(}slr(t)\big{)}}{\Big{(}s\big{(}lr(t)-x_{2}\big{)}\Big{)}^{2}}. \tag{32}\] There is no solution for \(\alpha_{b}(x_{1},x_{2})\) that can be obtained from Eq. 30 and satisfies the two conditions in Eqs. 27 and 28. This implies that the system is not \(\mathbb{P}\)-invariant in \(b\). Next, we verify that the system is not \(\mathbb{P}\)-invariant with respect to variation of \(c\). In order to check whether the system 4 has the \(\mathbb{P}\)-invariance property with respect to \(c\), we introduced \(x_{1}(t)\) and \(x_{2}(t)\) as two state variables and \(z(t)\) as the output variable of the system.
For simplicity, we wrote the system 4 in \(x_{1}\), \(x_{2}\), and \(z\) as \[\dot{x_{1}} =bx_{1}+d(t)+sx_{2}\big{(}lr(t)-x_{1}\big{)}, \tag{33a}\] \[\dot{x_{2}} =-cx_{2}\big{(}r(t)-x_{1}\big{)}, \tag{33b}\] \[z =x_{2}. \tag{33c}\] The notation here is selected to be identical to the one used by [5]. We considered the possible equivariance \(\eta_{p}(x_{1},x_{2})=\big{(}\alpha_{p}(x_{1},x_{2}),\beta_{p}(x_{1},x_{2}) \big{)}\). In this case, the condition \(g\big{(}\eta_{p}(x),u,p\big{)}=g(x,u)\) means \(\beta_{p}(x_{1},x_{2})=x_{2}\). Therefore, we have \(\eta_{p}(x_{1},x_{2})=\big{(}\alpha_{p}(x_{1},x_{2}),x_{2}\big{)}\). Hence, \[(\eta_{p})_{*}(x_{1},x_{2})=\begin{bmatrix}\frac{\partial\alpha_{p}}{\partial x_{1}}(x_{1},x_{2})&\frac{\partial\alpha_{p}}{\partial x_{2}}(x_{1},x_ {2})\\ \frac{\partial x_{2}}{\partial x_{1}}&\frac{\partial x_{2}}{\partial x_{2}} \end{bmatrix}=\begin{bmatrix}\frac{\partial\alpha_{p}}{\partial x_{1}}(x_{1},x_ {2})&\frac{\partial\alpha_{p}}{\partial x_{2}}(x_{1},x_{2})\\ 0&1\end{bmatrix}. \tag{34}\] As a result of equation 14a, for the parameter \(c\) our aim is to prove \(f(\eta_{c}(x),u,c)=(\eta_{c})_{*}(x)f(x,u)\). It means: \[\begin{bmatrix}b\alpha_{c}(x_{1},x_{2})+d(t)+sx_{2}\big{(}lr(t)- \alpha_{c}(x_{1},x_{2})\big{)}\\ -cx_{2}\big{(}r(t)-\alpha_{c}(x_{1},x_{2})\big{)}\end{bmatrix}=\\ \begin{bmatrix}\frac{\partial\alpha_{c}}{\partial x_{1}}(x_{1},x_{2})&\frac{ \partial\alpha_{c}}{\partial x_{2}}(x_{1},x_{2})\\ 0&1\end{bmatrix}\begin{bmatrix}bx_{1}+d(t)+sx_{2}\big{(}lr(t)-x_{1})\\ -x_{2}\big{(}r(t)-x_{1}\big{)}\end{bmatrix}. \tag{35}\] Hence: \[b\alpha_{c}(x_{1},x_{2})+d(t)+sx_{2}\big{(}lr(t)-\alpha_{c}(x_{1},x_{2})\big{)}=\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{1}}\Big{(} bx_{1}+d(t)+sx_{2}\big{(}lr(t)-x_{1}\big{)}\Big{)}\] \[+\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{2}}\Big{(}-x _{2}\big{(}r(t)-x_{1}\big{)}\Big{)}, \tag{36}\] \[-cx_{2}\big{(}r(t)-\alpha_{c}(x_{1},x_{2})\big{)}=-x_{2}\big{(}r( t)-x_{1}\big{)}. \tag{37}\] By comparing the coefficients in Eq. 36 we have \[\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{1}}=\frac{b \alpha_{c}(x_{1},x_{2})+d(t)+sx_{2}\big{(}lr(t)-\alpha_{c}(x_{1},x_{2})\big{)} }{bx_{1}+d(t)+sx_{2}\big{(}lr(t)-x_{1}\big{)}}, \tag{38}\] \[\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{2}}=0. \tag{39}\] From Eq. 37 we attained \[cx_{2}\alpha_{c}(x_{1},x_{2})=x_{2}\big{(}(c-1)r(t)+x_{1}\big{)}. \tag{40}\] If \(cx_{2}\neq 0\), it means \[\alpha_{c}(x_{1},x_{2})=\frac{(c-1)r(t)+x_{1}}{c}, \tag{41}\] which yields \[\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{1}}=\frac{1}{c}, \tag{42}\] \[\frac{\partial\alpha_{c}(x_{1},x_{2})}{\partial x_{2}}=0. \tag{43}\] There is no solution for \(\alpha_{c}(x_{1},x_{2})\) that can be obtained from Eq. 41 and satisfies the two conditions in Eqs. 38 and 39. This implies that the system is not \(\mathbb{P}\)-invariant in \(c\).

#### 3.2.2 Verification of the DC property

As a demonstration that the \(\mathbb{P}\)-invariance property is more general than the DC property, we applied the DC property definition by Karin et al. [1] to the system 15, containing Eqs. 15a-15c. By choosing \(v_{1}=sx_{1}\) and \(v_{2}=x_{2}\), for \(s\neq 0\) we have: \[\dot{v}_{1} =-cv_{1}\big{(}r(t)-v_{2}\big{)}, \tag{44a}\] \[\dot{v}_{2} =bv_{2}+d(t)+v_{1}\big{(}lr(t)-v_{2}\big{)}, \tag{44b}\] \[y =v_{2}. \tag{44c}\] Therefore, we can assume \(s=1\), which establishes the DC property with respect to any \(s\neq 0\).
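This normalization of \(s\) is easy to verify symbolically. Below is a minimal sympy sketch of our own (with plain symbols standing in for the time-dependent signals \(r(t)\) and \(d(t)\)): writing system 15 in the variables \(v_{1}=sx_{1}\), \(v_{2}=x_{2}\) of (44) eliminates \(s\) entirely.

```python
import sympy as sp

s, b, c, l, r, d, v1, v2 = sp.symbols('s b c l r d v1 v2', positive=True)

# System (15) with x1 = v1/s and x2 = v2 substituted in:
x1, x2 = v1 / s, v2
dx1 = -c * x1 * (r - x2)                  # Eq. 15a
dx2 = b * x2 + d + s * x1 * (l*r - x2)    # Eq. 15b

dv1 = sp.expand(s * dx1)                  # chain rule: v1' = s * x1'
dv2 = sp.expand(dx2)                      # v2' = x2'
assert s not in dv1.free_symbols and s not in dv2.free_symbols
print(dv1)  # -c*r*v1 + c*v1*v2           : Eq. 44a
print(dv2)  # b*v2 + d + l*r*v1 - v1*v2   : Eq. 44b
```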
By choosing \(v_{1}=x_{1}\) and \(v_{2}=bx_{2}\), for \(b\neq 0\) we have: \[\dot{v}_{1} =-cv_{1}\big{(}r(t)-\frac{1}{b}v_{2}\big{)}, \tag{45a}\] \[\dot{v}_{2} =bv_{2}+bd(t)+bsv_{1}\big{(}lr(t)-\frac{1}{b}v_{2}\big{)}, \tag{45b}\] \[y =\frac{1}{b}v_{2}. \tag{45c}\] Therefore, we cannot assume \(b=1\), which means that for \(b\) we could not find a transformation establishing the DC property. Since the DC-property definition is only a sufficient condition, this alone does not allow us to claim that the system lacks DC with respect to \(b\). However, as the \(\mathbb{P}\)-invariance property is a sufficient and necessary condition for DC, we have proved that the system does not have DC for variation in \(b\). By choosing \(v_{1}=cx_{1}\) and \(v_{2}=x_{2}\), for \(c\neq 0\) we have: \[\dot{v}_{1} =-cv_{1}\big{(}r(t)-v_{2}\big{)}, \tag{46a}\] \[\dot{v}_{2} =bv_{2}+d(t)+\frac{s}{c}v_{1}\big{(}lr(t)-v_{2}\big{)}, \tag{46b}\] \[y =v_{2}. \tag{46c}\] Therefore, we cannot assume \(c=1\), which means that for \(c\) we could not find a transformation establishing the DC property. Since the DC-property definition is only a sufficient condition, this alone does not allow us to claim that the system lacks DC with respect to \(c\). However, as the \(\mathbb{P}\)-invariance property is a sufficient and necessary condition for DC, we have proved that the system does not have DC for variation in \(c\).

## 4 Numerical simulation

In this section, we discuss and exemplify the theoretical results of our research by numerical simulations. We verify the phase portrait, the influence of the adaptive controller, and the \(\mathbb{P}\)-invariance property by using step-like profiles for the input \(r(t)\) and disturbance \(d(t)\). To investigate the \(\mathbb{P}\)-invariance property with respect to the parameters \(s\) and \(b\), the system 4 is first brought to its equilibrium. Next, by perturbing the system with a step-like disturbance \(d(t)\) and changes in \(s\) and \(b\) separately, we check whether it returns to the equilibrium or not. As analyzed in Section 3.1, equilibrium point \(E_{1}\) is always a saddle point, and to have stability at equilibrium point \(E_{2}\), the main condition is \(0<l<1\). Therefore, we chose initial conditions such that all solutions converge to \(E_{2}\), i.e., a stable equilibrium point. \[b=0.3,\;d(0)=0.01,\;c=2,\;r(0)=11,\;l=0.7,\;s=0.25 \tag{47}\] Hence the system 4 is: \[\frac{dy}{dt} =0.3y(t)+0.01+sz(t)\big{(}7.7-y(t)\big{)}, \tag{48a}\] \[\frac{dz}{dt} =-2z(t)\big{(}11-y(t)\big{)}. \tag{48b}\] The local stability of the system can be analyzed by calculating the eigenvalues at each equilibrium point. When the real parts of the eigenvalues are negative, the equilibrium point is locally stable. We simulated the step-like response from time 0 to 400, with initial input \(r(0)=11\) and a single pulse of amplitude 5. We verified the results with different parameters for \(s\), \(b\) and \(c\). The phase portrait for the original parameters in (47) with different values of \(s\), \(b\) and \(c\) is shown in Fig. 2. For \(s=0.25\), the two red dots in Fig. 2a represent the equilibrium points \(E_{1}=(-0.033,0)\) and \(E_{2}=(11.000,4.012)\), with eigenvalues \[\lambda(E_{1})=\{0.300,-22.067\},\;\;\;\;\;\lambda(E_{2})=\{-0.351+2.549i,-0.351-2.549i\}. \tag{49}\] Since \(E_{2}\) is a stable equilibrium point, all trajectories in its region of attraction approach it. If we multiply \(s\) by 6 (\(s=1.5\)), we obtain the equilibrium points \(E_{1}=(-0.033,0)\) and \(E_{2}=(11.000,0.669)\) in Fig. 2b, with eigenvalues \[\lambda(E_{1})=\{0.300,-22.067\},\;\;\;\;\;\lambda(E_{2})=\{-0.351+2.549i,-0.351-2.549i\}.
\tag{50}\] Again, since \(E_{2}\) is a stable equilibrium point, all trajectories in its region of attraction approach it. In Fig. 2c, we choose \(b=0.6\), which is twice as large as the original one, and it alters the equilibrium points to \(E_{1}=(-0.017,0)\) and \(E_{2}=(11.000,8.012)\), with eigenvalues \[\lambda(E_{1})=\{0.6,-22.033\},\;\;\;\;\;\lambda(E_{2})=\{-0.701+3.568i,-0.701- 3.568i\}. \tag{51}\] Since \(E_{2}\) is a stable equilibrium point, all trajectories in its region of attraction approach it. Finally, if we multiply \(c\) by two (\(c=4\)), we get the equilibrium points \(E_{1}=(-0.033,0)\) and \(E_{2}=(11.000,4.012)\) in Fig. 2d, with eigenvalues \[\lambda(E_{1})=\{0.300,-44.133\},\;\;\;\;\;\lambda(E_{2})=\{-0.351+3.622i,-0.351-3.622i\}. \tag{52}\] All trajectories in the region of attraction approach \(E_{2}\) as it is a stable equilibrium point. After verifying the stability of the system, we investigated the \(\mathbb{P}\)-invariance property under different situations. In all of Figs. 3, 4 and 5, \(r(t)\) and \(d(t)\) are the same and show the time-varying step-like profiles of the reference input \(r(t)\) and disturbance \(d(t)\). These inputs were also subject to additional noise from a standard normal distribution. We tested different combinations of the reference \(r(t)\) and disturbance \(d(t)\) to exemplify the \(\mathbb{P}\)-invariance property under different scenarios. Both the input \(r(t)\) and the disturbance \(d(t)\) began with the starting values defined in (47) and remained constant from time 0 until time 50. The input \(r(t)\) changes while the disturbance \(d(t)\) remains constant when time is between 50 and 150. Both the input \(r(t)\) and disturbance \(d(t)\) remain constant between time 150 and 200. In the time interval \((200,300)\), the disturbance \(d(t)\) changes while the input \(r(t)\) remains constant. When time is between \((300,350)\), both the input \(r(t)\) and disturbance \(d(t)\) change. Finally, both converge to new values (\(r=13.75\), \(d=5\)) and remain constant in the time interval \((350,400)\). The purple trajectories in Fig. 2 illustrate the starting positions used in Figs. 3, 4 and 5. As a result, we are in a stable situation at the start; however, it may take some time to reach the equilibrium. In all of Figs. 3, 4 and 5, \(z(t)\) and \(y(t)\) show the responses to the input \(r(t)\) and disturbance \(d(t)\). The green dashed line represents the residual of changes in \(y(t)\), which remains zero when \(s\) changes, but is non-zero when \(b\) or \(c\) changes. This is a consequence of the system having \(\mathbb{P}\)-invariance for parameter \(s\), but not for \(b\) and \(c\).

## 5 Conclusions

Our two-state simplified and extended model based on Karin et al.'s work [1] preserves the DC property when the parameter \(s\) is changed. We have demonstrated this using the \(\mathbb{P}\)-invariance definition by Sontag [5]. With this approach, we have also shown no DC for the parameters \(b\) and \(c\), because the definition of \(\mathbb{P}\)-invariance is both sufficient and necessary. Our example system is an exponential growth system with an adaptive proportional-integral controller. Exponential growth is a common feature of many physical systems, such as the early stages of cell growth or disease spread. We have shown that our adaptive proportional-integral feedback with DC in the control parameter \(s\) can stabilize the system and ensure that the response tracks the reference input despite variation in the control parameter.
The downside of this is that the closed-loop system's behavior cannot be tuned by changing the gain of the controller, as is customary in, e.g., PID controllers. Moreover, we have demonstrated the stability of the system under a variety of conditions and plotted the phase portrait for a representative example. In summary, we have demonstrated an adaptive controller with \(\mathbb{P}\)-invariance in its parameter \(s\). This can be beneficial for designing robust controllers that can handle environmental fluctuations, in particular in synthetic biology, as well as for understanding biological systems during modeling and analysis. **Data availability** All data used is included in this article. **Acknowledgement** The authors gratefully acknowledge valuable comments by Prof. Filippo Menolascina from the University of Edinburgh, UK. **Funding** We would like to thank the Ministry of Science and Technology in Taiwan for their financial support (grants number MOST 105-2218-E-006-016-MY2, 105-2911-I-006-518, 107-2634-F-006-009, 110-2222-E-006-010, and 111-2221-E-006-186). **Fig 2.** Phase portraits with different values of \(s\), \(b\) and \(c\) show that the stable equilibrium \(E_{2}\) has almost the same region of attraction in all cases but the trajectories differ. The region of attraction is determined by \(E_{1}\). (a) The phase portrait for the parameters in (47). (b) The phase portrait for the parameters in (47), except \(s\), which is changed from 0.25 to 1.5. (c) The phase portrait for the parameters in (47), except \(b\), which is changed from 0.3 to 0.6. (d) The phase portrait for the parameters in (47), except \(c\), which is changed from 2 to 4. Figure 3: Visualization of the impact of DC, and the lack thereof, on the output using time-varying step-like changes in the reference input \(r(t)\) and disturbance \(d(t)\) in different combinations. Gaussian noise was added to the constant value of \(r(t)\) and \(d(t)\) during certain periods to ensure excitation. \(y(t)\) and \(z(t)\) show the comparison of the step response when \(s\) is 0.25 and 1.5. As we started \(y(t)\) and \(z(t)\) at a distance from the equilibrium point, it takes some time to converge to the stable situation, resulting in a nonzero residual at first; afterwards, however, the output \(y(t)\) remained identical: the residual (green dashed line) equals zero. A hallmark of the system being \(\mathbb{P}\)-invariant with regard to \(s\). **Fig 4.** Visualization of the impact of DC, and the lack thereof, on the output using time-varying step-like changes in the reference input \(r(t)\) and disturbance \(d(t)\) in different combinations. Gaussian noise was added to the constant value of \(r(t)\) and \(d(t)\) during certain periods to ensure excitation. \(y(t)\) and \(z(t)\) show the comparison of the step response when \(b\) is 0.3 and 0.6. The output \(y(t)\) differs and the residual (green dashed line) is non-zero. A hallmark of the system not being \(\mathbb{P}\)-invariant with regard to \(b\). Figure 5: Visualization of the impact of DC, and the lack thereof, on the output using time-varying step-like changes in the reference input \(r(t)\) and disturbance \(d(t)\) in different combinations. Gaussian noise was added to the constant value of \(r(t)\) and \(d(t)\) during certain periods to ensure excitation. \(y(t)\) and \(z(t)\) show the comparison of the step response when \(c\) is 2 and 4. The output \(y(t)\) differs and the residual (green dashed line) is non-zero. A hallmark of the system not being \(\mathbb{P}\)-invariant with regard to \(c\).
2308.04449
The Disparate Impacts of College Admissions Policies on Asian American Applicants
There is debate over whether Asian American students are admitted to selective colleges and universities at lower rates than white students with similar academic qualifications. However, there have been few empirical investigations of this issue, in large part due to a dearth of data. Here we present the results from analyzing 685,709 applications from Asian American and white students to a subset of selective U.S. institutions over five application cycles, beginning with the 2015-2016 cycle. The dataset does not include admissions decisions, and so we construct a proxy based in part on enrollment choices. Based on this proxy, we estimate the odds that Asian American applicants were admitted to at least one of the schools we consider were 28% lower than the odds for white students with similar test scores, grade-point averages, and extracurricular activities. The gap was particularly pronounced for students of South Asian descent (49% lower odds). We trace this pattern in part to two factors. First, many selective colleges openly give preference to the children of alumni, and we find that white applicants were substantially more likely to have such legacy status than Asian applicants, especially South Asian applicants. Second, after adjusting for observed student characteristics, the institutions we consider appear less likely to admit students from geographic regions with relatively high shares of applicants who are Asian. We hope these results inform ongoing discussions on the equity of college admissions policies.
Joshua Grossman, Sabina Tomkins, Lindsay Page, Sharad Goel
2023-08-03T22:41:54Z
http://arxiv.org/abs/2308.04449v1
# The Disparate Impacts of College Admissions Policies on Asian American Applicants

###### Abstract

There is debate over whether Asian American students are admitted to selective colleges and universities at lower rates than white students with similar academic qualifications. However, there have been few empirical investigations of this issue, in large part due to a dearth of data. Here we present the results from analyzing 685,709 applications from Asian American and white students to a subset of selective U.S. institutions over five application cycles, beginning with the 2015-2016 cycle. The dataset does not include admissions decisions, and so we construct a proxy based in part on enrollment choices. Based on this proxy, we estimate the odds that Asian American applicants were admitted to at least one of the schools we consider were 28% lower than the odds for white students with similar test scores, grade-point averages, and extracurricular activities. The gap was particularly pronounced for students of South Asian descent (49% lower odds). We trace this pattern in part to two factors. First, many selective colleges openly give preference to the children of alumni, and we find that white applicants were substantially more likely to have such legacy status than Asian applicants, especially South Asian applicants. Second, after adjusting for observed student characteristics, the institutions we consider appear less likely to admit students from geographic regions with relatively high shares of applicants who are Asian. We hope these results inform ongoing discussions on the equity of college admissions policies.

## Introduction

Over the last several decades, questions have been raised over whether selective colleges in the U.S. discriminate against Asian American applicants in admissions decisions (Arcidiacono et al., 2022; Chun and Zalokar, 1992; Espenshade and Radford, 2009; Espenshade et al., 2004; Gelman et al., 2019; Long, 2004; Park, 2019; SFFA v. Harvard, 2019; Takagi, 1992). In the 1980s, Brown and Stanford formed committees to audit their own admissions policies and practices (Chun and Zalokar, 1992; Takagi, 1992). Brown found evidence of discrimination in its admissions process; Stanford did not find clear evidence of bias, but could not fully explain its lower acceptance rates of Asian American applicants relative to white students. A 1990 report by the U.S. Department of Education's Office for Civil Rights (OCR) investigated allegations that Harvard capped the number of Asian American students it admitted (Chun and Zalokar, 1992). OCR found no evidence of an Asian quota, but concluded that Asian American applicants were less likely to be admitted than white students with similar academic qualifications. OCR further found that this disparity largely disappeared once recruited athletes and the children of alumni ("legacies") were excluded from its analysis, suggesting the gap in acceptance rates was driven by Harvard's stated preference for admitting students from these two groups (Chetty et al., 2023; Hurwitz, 2011; Park, 2019). Most recently, in a 2023 decision, the Supreme Court ruled that Harvard engaged in unconstitutional racial balancing, holding the Asian American share of admitted students to approximately 20%--though Harvard denied doing so. In the more than 30 years since the OCR investigation, there have been limited third-party, applicant-level empirical analyses of potential discrimination in college admissions decisions against Asian American applicants.
Over this time span, both the demographics of the United States and the educational landscape have changed substantially. Asian American representation among K-12 public school students has more than doubled, increasing from 3% in 1993 to 7% in 2020 (Nowicki, 2022), and the overall admission rate to Harvard has dropped from 18% in 1990 to 5% in 2020 (Fu and Kim, 2020; Lee, 1993). These changes suggest a need to reexamine college admissions policies for potential disparate impacts on Asian American applicants. Here we analyze 685,709 first-year college applications submitted by 292,795 Asian American and white students to a subset of U.S. institutions with relatively low admit rates and relatively high yield rates. All of the applications we consider were submitted via a national postsecondary application platform over five application cycles, from the 2015-2016 cycle to the 2019-2020 cycle.1 We exclude students who attend a high school outside of the United States or who report primary citizenship outside of the United States. Given the complex patterns of immigration and marked heterogeneity in experiences across subgroups, we disaggregate our analysis by three regions of origin self-reported by the Asian American applicants in our dataset: South Asia, East Asia, and Southeast Asia.2 To preserve confidentiality, we focus on broader patterns rather than on individual institutions, and we report aggregate results across the combined set of colleges and universities we consider. In particular, our main outcome of interest is whether applicants were admitted to at least one of these institutions. One limitation of our analysis is that we do not directly observe admissions decisions, and so we infer these decisions based on enrollment choices, as described below. Footnote 1: Each of the institutions we consider receives the majority of first-year applications from students applying via the application platform (Table A1). Footnote 2: Once applicants indicate being “Asian”, they have the option to select one or more of 9 countries of origin; they can additionally indicate being “Other East Asian”, “Other South Asian,” or “Other Southeast Asian”. We classify the listed countries as follows: China, Japan, and Korea as East Asia; India and Pakistan as South Asia; and the Philippines, Vietnam, Cambodia, and Malaysia as Southeast Asia. 3% of Asian applicants select countries that span multiple regions. In these cases, we randomly assign one of the spanned regions. 2% of Asian applicants do not select a country of origin. These students are excluded from the analysis. After excluding students whom we infer to be recruited athletes, we estimate that South Asian applicants had 49% lower odds of admission to the subset of schools we consider than white applicants with comparable test scores, high school grade-point averages, and extracurricular activities. We estimate that both East Asian and Southeast Asian applicants had 17% lower odds of admission to these schools. After additionally adjusting for whether a student applied early to any considered college or university, the student's high school, and whether the student is a legacy applicant, we estimate that Southeast Asian students were accepted at similar rates to white students, and that East Asian students had 10% lower odds of admission than white students. But we estimate that South Asian applicants still had 30% lower odds of acceptance to these institutions than white students after adjusting for all available information in our data.
We note, however, that we do not have access to all materials submitted by and about applicants, such as essays, letters of recommendation, alumni interviews, and admission officer ratings. Finally, we explore how the relative share of Asian American and white enrollees might change at the colleges and universities we consider under various hypothetical admission policies. Under a policy that admits students solely on the basis of standardized test scores and participation in extracurricular activities--and holding fixed the combined number of enrolled Asian American and white students--we estimate that enrollment of South Asian students and East Asian students would increase substantially, while the number of Southeast Asian students would remain approximately the same. Concerns about the disparate impacts of college admissions policies on Asian American students are often entangled with discussions about affirmative action (Antonovics and Sander, 2013; Gelman et al., 2019; Gersen, 2017; Hughes et al., 2016; Karabel, 2005; Kim, 2022; Park et al., 2023; Takagi, 1992; West-Faulcon, 2016). At their core, however, these two issues--affirmative action and differences in the admission rates of similarly qualified white and Asian American students--are conceptually distinct. In particular, during the time period we consider, institutions could have admitted Asian American applicants at rates comparable to similarly qualified white students while still giving preference to applicants from groups underrepresented in higher education.3 Footnote 3: As of 2023, explicit racial preferences in college admissions are no longer legally permissible (Students for Fair Admissions, Inc. v. President and Fellows of Harvard College, 2023). ## Data description Our analysis is based on applications submitted through a national postsecondary application platform. The data we use contain detailed, anonymized information on each student, including race and gender; standardized test scores (ACT and/or SAT); high school grade-point average (GPA); Advanced Placement (AP) exam scores; structured descriptions of their extracurricular activities (e.g., the number of hours they spent participating in various clubs or sports); the location and other characteristics of the high school they attended; whether their parents attended college, and, if so, the colleges they attended; whether they received an application fee waiver (a proxy for financial need); the set of colleges to which they applied via the platform; and whether they applied early action or early decision to any of the institutions we consider (Table A4). If a student took the SAT, we convert their SAT score to an equivalent ACT score to facilitate comparisons between applicants and aid interpretation.4 Although we have quite detailed individual-level data, we do not have access to the full set of application materials, including student essays, letters of recommendation, or intended major. We also do not have access to internal college evaluations, such as interviewer ratings. We approximate admissions decisions by first inferring enrollment decisions. We infer enrollment by observing the school to which a high school counselor sent a student's official high school transcript, information that is collected by the platform. (NB: official transcripts typically are required by colleges to formalize acceptance decisions.) 
We then infer that students were admitted to at least one of the schools we consider if, and only if, they sent a transcript to (i.e., ultimately enrolled in) one of those schools. This inference rests on an assumption that students who were admitted to at least one of the schools we consider ultimately attended one of those schools. While imperfect, three points suggest this process yields results that are suitably accurate for our purposes. First, we assessed the quality of our enrollment inference by matching 5,000 randomly selected applicants to the schools we consider to their true enrollments as reported by the National Student Clearinghouse. We find that the estimated precision of our enrollment inference strategy is 97% with an estimated recall of 91%. We further find that accuracy is comparable across race groups (see the Methods section in the Appendix). Second, the schools we consider have relatively high yield rates, suggesting that admission to these schools is strongly correlated with enrollment. Finally, we find qualitatively similar results with an estimation strategy that holds under the weaker assumption that enrollment is independent of race, conditional on acceptance and other observed student characteristics (see the Estimating Admission Rates section in the Appendix for details). Our study pool comprises 685,709 applications submitted by 292,795 students to the colleges and universities we consider in the 2015-2016 through the 2019-2020 application cycles. We include Asian and white applicants who attended a U.S. high school, excluding students from high schools for which we cannot reliably infer college enrollment (see the Methods section in the Appendix and Table A2). We cannot identify athletic recruits with certainty, but we exclude from our sample students who appear to be athletic recruits based on the timing of their applications and their reported extracurricular activities (see the Methods section in the Appendix). Within our study pool, 36% of applicants self-identify as Asian, with 51%, 15%, and 34% of these students self-identifying as East Asian, Southeast Asian, and South Asian, respectively. Finally, we supplement our data from the platform with public high school data from the Common Core of Data (CCD), private high school data from the Private School Universe Survey (PSS), and rurality data at the ZIP code level from the Economic Research Service of the U.S. Department of Agriculture.

## Results

Among applicants to the colleges and universities we consider, we estimate that 16% of East Asian, 8% of Southeast Asian, and 10% of South Asian students were admitted to at least one of these institutions, compared to 12% of white applicants. While these aggregate admissions rates differ by race and ethnicity, they do not account for differences in qualifications across groups. For example, Asian American applicants had, on average, higher standardized test scores than white applicants (Table A3). As a first step to account for these differences, in Figure 1 we show estimated admissions rates by standardized test score for Asian American applicants and white applicants. We find that Asian American students were admitted at consistently lower rates than white applicants with comparable test scores, with the largest gap for South Asian applicants.
For instance, among applicants with an ACT (or ACT-equivalent) score of 34--placing them in the 99th percentile of test takers--we estimate that 16% of white students were admitted compared to 9% of South Asian students, a relative gap of 43%. Standardized test scores are one factor among many that colleges consider when determining whom to admit. Additional criteria that we are able to observe include high school grade-point average (GPA), participation in extracurricular activities, legacy status, and the state in which each applicant's high school is located. To understand the extent to which these other considerations may explain the observed disparities in admissions rates, we fit a series of nested logistic regression models of the following form: \[\Pr(Y_{i}=1)=\text{logit}^{-1}(\beta_{0}+\beta_{S}\mathbbm{1}_{S}+\beta_{E} \mathbbm{1}_{E}+\beta_{SE}\mathbbm{1}_{SE}+X_{i}\beta_{X}),\] where \(Y_{i}\) is a binary variable indicating whether applicant \(i\) was admitted to any college or university we consider; \(\mathbbm{1}_{S}\), \(\mathbbm{1}_{E}\), and \(\mathbbm{1}_{SE}\) indicate whether the applicant identified as South Asian, East Asian, or Southeast Asian, respectively; and \(X_{i}\) is a vector of additional covariates (e.g., test scores and GPA) that we vary across models, with \(\beta_{X}\) the corresponding vector of coefficients. Our key coefficients of interest are \(\beta_{S}\), \(\beta_{E}\), and \(\beta_{SE}\), which yield estimates of the gap in admissions rates between white applicants and Asian American applicants in the three Asian subgroups that we consider. We find similar results if we fit separate models comparing white applicants to applicants in each Asian subgroup individually (Tables A13-A15).

Figure 1: _Estimated rate of admission to at least one of the selective institutions we consider as a function of standardized test score, for Asian American applicants and white applicants in the study pool. Asian American applicants typically were admitted at lower rates than white applicants with identical test scores, with the largest gap for South Asian students. Among admits in our study pool who report ACT or SAT scores, 93% have ACT (or ACT-equivalent) scores at or above 32. Percentiles are derived from all students who took the ACT in 2018 [ACT, Inc., 2018]. Point sizes are proportional to the number of applicants in each group._

Table 1 shows, for nine models that include different subsets of control variables, the fitted coefficients for each of the three Asian subgroups (see also Tables A5-A12). Coefficients are exponentiated for ease of interpretation as odds ratios. The first model includes only fixed effects for the application season and the subset of colleges (or application "basket") to which the student applied--among the full set of colleges we consider--facilitating comparisons among groups of students who applied in the same year and to the same subset of colleges. The second and third models in Table 1 additionally adjust for measures of academic preparation, including SAT/ACT alone (Model 2) and, additionally, GPA, AP test scores, and SAT II subject test scores (Model 3).
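As a sketch of how a nested specification like this might be fit (the column names, and the use of statsmodels, are our own illustrative choices, not the paper's actual code or data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per applicant; columns are hypothetical stand-ins for the
# covariates described in the text.
apps = pd.read_csv("applications.csv")

# Model 1: race indicators plus season and application-basket fixed effects.
m1 = smf.logit("admitted ~ south_asian + east_asian + southeast_asian"
               " + C(season) + C(basket)", data=apps).fit()

# Model 2: additionally adjusts for the ACT-equivalent test score.
m2 = smf.logit("admitted ~ south_asian + east_asian + southeast_asian"
               " + C(season) + C(basket) + act_equiv", data=apps).fit()

# Exponentiated coefficients are odds ratios, as reported in Table 1.
print(np.exp(m2.params[["south_asian", "east_asian", "southeast_asian"]]))
```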
These academic-preparation models corroborate the visual pattern in Figure 1: we estimate that Asian American students--especially South Asian students--had substantially lower odds of admission than white students with similar test scores and related academic credentials. These disparities largely persist when we progressively adjust for extracurricular activities (Model 4); gender and family characteristics, like whether the student received an application fee waiver (Model 5); and whether the student applied early (Model 6). Next, with Model 7, we account for whether a student is the child of an alum. After adjusting for legacy status--in addition to all of the above-mentioned factors--we see large reductions in the estimated disparities in acceptance rates for all three Asian subgroups we consider. Figure 2 helps explain this result. The top panel of the figure shows estimated admission rates for Asian American applicants and white applicants conditional on legacy status and test scores.\({}^{5,6}\) For a given test score, we estimate that applicants--both white and Asian American--with legacy status were more than twice as likely to gain admission as applicants without legacy status. In the bottom panel of Figure 2, we present the prevalence of legacy status among applicants with an ACT-equivalent test score of 32 or above, mirroring the focus of the upper panel. Here, we observe that white applicants were approximately three times more likely to have legacy status than East Asian and Southeast Asian applicants, and almost six times more likely than South Asian students. Thus, even though estimated acceptance rates conditional on test score and legacy status were similar across race and ethnicity, white students appear to benefit from being substantially more likely to have legacy status.

Footnote 5: In Figure 2, we follow convention and define legacy status to mean an applicant had at least one parent who attended one of the colleges or universities we consider as an undergraduate, and the student applied to the institution(s) that their parent(s) attended. In our regression models, we additionally adjust for other familial connections to the included colleges and universities, like a parent attending graduate school there or having two parents with undergraduate degrees from the same school.

Footnote 6: In prior work examining the effect of legacy status on admission to elite institutions, Hurwitz [2011] found that the magnitude of the legacy effect is larger in models that account for the application components that we do not observe.

In theory, the higher estimated admissions rates that we observe for legacy applicants may stem both from admissions practices that favor the children of alumni and from the potentially greater social capital of legacy students. We note, however, that Model 5 adjusts for whether an applicant had a parent who attended a top-50 institution (based on 2019 U.S. News rankings) not included in the subset of colleges on which we focus, or attended one of the colleges in our subset to which the student did not apply--proxies for having high social capital distinct from legacy status specifically. The change in disparities that we observe moving from Model 5 to Model 7 thus appears attributable to the specific benefits of having legacy status, rather than the more generalized benefits of high social capital. Finally, we examine the relationship between estimated acceptance rates and geography.
For each state, Figure 3 plots the estimated admission rate of high-achieving applicants--those with ACT-equivalent scores of 32 or above--against the fraction of applicants from that state who were Asian American. In computing this proportion, we limit to white applicants and Asian American applicants, and point sizes are proportional to the total number of high-scoring white and Asian American applicants in each state.

Figure 2: _Estimated rate of admission to at least one college or university we consider for white applicants and Asian American applicants with high ACT or SAT scores. Across test scores, we estimate that applicants with a parent who attended one of the selective institutions we consider as an undergraduate are more than twice as likely to be admitted than non-legacy applicants with the same test scores. The bottom panel shows the proportion of applicants with high test scores who have legacy status, disaggregated by race. High-scoring white applicants are three to six times more likely to have legacy status than high-scoring Asian American applicants, suggesting white applicants disproportionately benefit from a boost in admission rates afforded to those with legacy status._

The negatively sloped regression line shows that states with a larger fraction of Asian American applicants tended to have lower estimated admission rates. Further, states with a higher proportion of Asian American applicants tended to have higher average test scores, suggesting the geographic trend is not driven by a gap in academic achievement (Figure A2). This geographic pattern also persists when we exclude applicants from California, and when we disaggregate the data to the level of high school instead of state (Figures A1 and A3). Model 8 in Table 1--which adjusts for location as well as academic and extracurricular performance but not legacy status--shows that these apparent geographic preferences account for much of the admissions gap between white and Asian American applicants. Model 9, the last one we consider, adjusts for all application information available to us, including both legacy status and geography. After adjusting for this rich set of covariates, we see that the estimated admissions gap between Southeast Asian and white applicants largely disappears, though we still find that white students have higher estimated odds of admission than otherwise similar East Asian and South Asian applicants. It is unclear what may account for these remaining disparities, though it bears repeating that admissions officers have access to more complete application materials than do we, including letters of recommendation, essays, and interview assessments. We conclude our analysis by exploring how the relative share of Asian American students at the institutions we consider might change under various hypothetical admissions policies.

Figure 3: _For each U.S. state, overall estimated admission rate to at least one institution among the subset of selective schools we consider for white applicants and Asian applicants with an ACT-equivalent score at or above 32, with the proportion of high-scoring white and Asian applicants who identify as Asian on the horizontal axis. Point sizes are proportional to the number of high-scoring white and Asian applicants from the state who applied to one of the institutions we consider. The red least-squares regression line is weighted by the same count of applicants.
States with a greater share of Asian American applicants have, on average, lower estimated admission rates for high-scoring applicants._

In line with our analysis above, we restrict our attention to white students and Asian American students. Specifically, we hold fixed the combined number of students in these groups (approximately mirroring historical admissions outcomes, as shown in Figure A4), and so any increases in Asian American enrollment necessarily imply decreases in enrollment of white students. Any exercise of this sort is inherently speculative--in part because changes in admissions policies could alter application behavior--but we still believe it is informative to gauge the approximate magnitude of effects. As a baseline, the top row of Figure 4 shows the estimated share of enrollees in our data from the three Asian subgroups of interest. The rest of the figure shows the estimated share of enrollees from these subgroups under eight hypothetical admissions policies that are divided into four categories. In the first category--which we call "top-\(k\)" policies--we imagine admitting the students with the highest ACT-equivalent scores, with ties broken randomly. In the second category, "random above threshold," we consider policies that randomly admit students above an ACT-equivalent score \(t\) such that admitted students have a mean score equal to that of actual enrollees [Sandel, 2020]. For both of these categories we consider two variants: the "ACT" variant selects from the entire applicant pool of the schools we consider, while the "ACT+ECs" variant selects only from applicants with at least as many hours of reported extracurricular (EC) activities over four years of high school as the median of the hours reported by all enrollees. Under all four policies, we estimate the same or larger shares of Asian American students compared to what we observe in the data. Asian American students report, on average, fewer extracurricular hours than white applicants, so the ACT+ECs policy variant admits fewer Asian American applicants than the ACT variant. The final two categories we consider investigate outcomes under hypothetical policies that maintain both the current number of enrollees from each state and the total number of enrollees with legacy status. Specifically, we first divide our historical data into 102 (2 x 51) cells consisting of legacy and non-legacy applicants from each U.S. state and Washington, D.C.; we then in turn apply each of the four policies described above to each of the 102 cells, ensuring for each cell that the number of students enrolled under the hypothetical policies matches the historical enrollment numbers. With these added legacy and geographic constraints, the share of Asian American enrollees is smaller than under the unconstrained analogs, as expected given our results above. But, even with these constraints, the number of Asian American enrollees across policies is still similar to or larger than under the status quo.

## Discussion

Based on a large-scale analysis of applications to a subset of selective U.S. colleges and universities, it appears that Asian American students were less likely to be admitted than white students with comparable academic credentials and extracurricular activities, a disparity that is particularly pronounced for South Asian students. It further appears that much--though not all--of this gap is attributable to admissions practices that favor the children of alumni and apparent geographic preferences.
These disparities likely stem from a complex set of objectives that universities work to balance, and are not necessarily driven by explicit or implicit racial preferences. Nonetheless, our results prompt questions about the equitable design of college admissions policies. In our primary analysis, we excluded applicants who we inferred were recruited athletes, under the assumption that filling sports teams is a hard constraint for many universities, and that doing so involves qualitatively distinct admissions criteria. We note, though, that athletic recruits are disproportionately likely to be white rather than Asian American: in our study pool, white applicants outnumber Asian American applicants by a factor of about two to one, but among inferred recruits, white applicants outnumber Asian American applicants by a factor of four to one. As a result, if we do not proactively exclude recruited athletes from our analysis, we find an even larger gap in the estimated admissions rates between Asian American students and white students with comparable academic credentials (Tables A13-A15). Our results are subject to two key limitations. First, we have imperfect information on college admissions decisions. In our analysis, we infer admissions decisions from enrollment choices, where we assume that students who applied to but did not ultimately attend one of the selective schools we consider were not admitted to any of those schools. This assumption only allows us to approximately reconstruct admissions decisions. However, given the relatively high yield rates of the universities we consider, we believe this assumption is suitably accurate for our analysis. Further, we find qualitatively similar results under an alternative estimation strategy that rests on the weaker assumption that enrollment decisions are independent of race, conditional on acceptance and other observed student characteristics (see the Estimating Admission Rates section in the Appendix).

Figure 4: _Estimated enrollment of Asian American students at the institutions we consider under eight hypothetical admissions policies, with the top panel showing the actually observed demographic composition in our historical data. In all cases, we consider only the subset of Asian American students and white students, and so increases in Asian American enrollment correspond to decreases in the enrollment of white students. In most instances, the hypothetical policies we consider lead to an increase in enrollment of Asian American students, including those that preserve the number of legacy students and the number of enrollees from each state in the historical data._

Finally, our results remain largely the same if we eliminate any one school from our analysis (Tables A13-A15), suggesting the robustness of our results to the exact subset of schools we consider. Second, we do not have access to each student's complete application materials. Specifically, we do not observe a student's intended major, essays, teacher recommendations, transcripts, interview ratings, and admission officer ratings. It is thus possible that students who we observe to have similar academic and extracurricular credentials are in fact different in important ways that are revealed in these other materials.
We note, however, that results made public through litigation suggest that--at least in the case of Harvard's admissions practices--the disparities we identify persist after adjusting for several additional markers of academic and extracurricular excellence, including admission officer ratings of each applicant's academics, extracurriculars, teacher recommendations, and counselor recommendations (cf. Figure 6.1 in [Students for Fair Admissions, Inc. v. President and Fellows of Harvard College, 2017], comparing Model 4 to Model 5; Tables B.7.1 and B.7.2 show the coefficients).7 Further, Kim [2022] finds that Asian American and white college applicants with similar academic credentials receive letters of recommendation that are "broadly similar in content and tone."

Footnote 7: Expert testimony provided in the Harvard case indicates that disparities in admission rates at Harvard are reduced after adjusting for admission officers’ assessments of an applicant’s “personal qualities” and admission officers’ “overall rating” of an applicant. There is worry, however, that assessments of “personal qualities” are more subjective than ratings of academic and extracurricular achievements, are less clearly connected to merit, and may be influenced by implicit or explicit racial biases. Further, “overall ratings” are so closely tied to the final admissions decision that we would expect adjusting for them to mask any disparities [Jung et al., 2018].

Discussions of college admissions practices impacting Asian Americans often revolve around affirmative action. But, as we noted at the start, these issues are conceptually distinct. In theory, one can implement affirmative action policies that maintain the share of students on campus from groups that are underrepresented in higher education while simultaneously admitting Asian American students at the same rate as white students with similar academic and extracurricular credentials. In such a case, we would expect the number of enrolled white students to decrease, not the number of racial minorities. During the time period we examined, affirmative action was widely used for shaping the diversity of college campuses, meaning the scenario described above was an option available to college administrators. Thus, at the very least, our results shed light on past admissions choices and their consequences for Asian American college applicants. Now that affirmative action is legally prohibited, institutions will need to reconsider how applicants are evaluated in order to ensure equitable admissions processes and to maintain diverse campuses. For example, existing decision-making processes that afford preference to the children of alumni appear to not only disadvantage Asian Americans but also other racial minorities (Figure A5). Looking ahead, we hope our findings facilitate ongoing discussions about the design and implementation of equitable admissions policies.
2301.08778
Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning
Split Learning (SL) is a new collaborative learning technique that allows participants, e.g. a client and a server, to train machine learning models without the client sharing raw data. In this setting, the client initially applies its part of the machine learning model on the raw data to generate activation maps and then sends them to the server to continue the training process. Previous works in the field demonstrated that reconstructing activation maps could result in privacy leakage of client data. In addition to that, existing mitigation techniques that overcome the privacy leakage of SL prove to be significantly worse in terms of accuracy. In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data. More precisely, in our approach, the client applies Homomorphic Encryption (HE) on the activation maps before sending them to the server, thus protecting user privacy. This is an important improvement that reduces privacy leakage in comparison to other SL-based works. Finally, our results show that, with the optimum set of parameters, training with HE data in the U-shaped SL setting only reduces accuracy by 2.65% compared to training on plaintext. In addition, raw training data privacy is preserved.
Tanveer Khan, Khoa Nguyen, Antonis Michalas
2023-01-20T19:26:51Z
http://arxiv.org/abs/2301.08778v1
# Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning

###### Abstract

Split Learning (SL) is a new collaborative learning technique that allows participants, e.g. a client and a server, to train machine learning models without the client sharing raw data. In this setting, the client initially applies its part of the machine learning model on the raw data to generate activation maps and then sends them to the server to continue the training process. Previous works in the field demonstrated that reconstructing activation maps could result in privacy leakage of client data. In addition to that, existing mitigation techniques that overcome the privacy leakage of SL prove to be significantly worse in terms of accuracy. In this paper, we improve upon previous works by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data. More precisely, in our approach, the client applies Homomorphic Encryption (HE) on the activation maps before sending them to the server, thus protecting user privacy. This is an important improvement that reduces privacy leakage in comparison to other SL-based works. Finally, our results show that, with the optimum set of parameters, training with HE data in the U-shaped SL setting only reduces accuracy by 2.65% compared to training on plaintext. In addition, raw training data privacy is preserved.

## 1 Introduction

Machine Learning (ML) models have attracted global attention and are used in a plethora of applications such as medical diagnosis, pattern recognition, and credit risk assessment. However, applications and services using ML often breach user privacy. As a result, the need to preserve the confidentiality and privacy of individuals and maintain user trust has gained extra attention. This is not only because of the technological advancements that privacy-preserving machine learning (PPML) can offer, but also due to its potential societal impact (i.e. building fairer, democratic and unbiased societies). Split Learning (SL) and Federated Learning (FL) are two methods for collaboratively training a model on distributed data sources without sharing raw data [7]. In FL, every client runs a copy of the entire model on its data. The server receives updated weights from each client and aggregates them. The SL model divides the neural network into two parts: the client side and the server side [6]. SL is used for training Deep Neural Networks (DNN) among multiple data sources, while mitigating the need to directly share raw labeled data with collaboration parties. The advantages of SL are multifold: _(i)_ it allows users to train ML models without sharing their raw data with a server running part of a DNN model, thus preserving user privacy; _(ii)_ it protects both the client and the server from revealing their parts of the model; and _(iii)_ it reduces the client's computational overhead by not running the entire model (i.e. utilizing a smaller number of layers) [8]. Though SL offers an extra layer of privacy protection by definition, there are no works exploring how it can be combined with popular techniques that promise to preserve user privacy (e.g. encryption). In [1], the authors studied whether SL can handle sensitive time-series data and demonstrated that SL alone is _insufficient_ when performing privacy-preserving training for 1-dimensional (1D) CNN models. More precisely, the authors showed that raw data can be reconstructed from the activation maps of the intermediate split layer.
The authors also employed two mitigation techniques, adding hidden layers and applying differential privacy, to reduce privacy leakage. However, based on the results, none of these techniques can effectively reduce privacy leakage from all channels of the SL activation. Furthermore, both these techniques reduce the joint model's accuracy. In this work, we construct a model that uses Homomorphic Encryption (HE) [2] to mitigate privacy leakage in SL. In our proposed model, the client first encrypts the activation maps and then sends the encrypted activation maps to the server. The encrypted activation maps do _not_ reveal anything about the raw data (i.e. it is _not_ possible to reconstruct the original raw data from the encrypted activation maps).

**Vision**: AI systems have proven to surpass people in recognizing abnormalities such as tumours on X-rays and ultrasound scans [9]. In addition to that, machines can reliably make diagnoses equal to those of human experts. All the evidence indicates that we can now build systems that achieve human expert performance in analyzing medical data - systems allowing humans to send their medical data to a remote AI service and receive an accurate automated diagnosis. An intelligent and efficient AI healthcare system of this type offers great potential, since it can both improve human health and have an important social impact. However, these opportunities come with certain pitfalls, mainly concerning privacy. With this in mind, we have designed a system that analyzes images in a privacy-preserving way. More precisely, we show how encrypted images can be analyzed with high accuracy without leaking information about their actual content. While this is still far from our big dream (namely, automated AI diagnosis), we believe it is an important step that will eventually pave the way towards our long-term goal.

**Contributions**: The main contributions of this paper are the following:

* We construct a protocol in which the client applies HE on the activation maps before sending them to the server - this is an important improvement that reduces privacy leakage compared to [1].
* We constructed the HE version of the U-shaped SL technique. In the encrypted U-shaped SL model, the client encrypts the activation map using HE and sends it to the server. The core advantage of the HE encrypted U-shaped SL over the plaintext U-shaped SL is that the server performs computation over the encrypted activation maps.
* To assess the applicability of our framework, we performed experiments on a heartbeat dataset: the MIT-BIH arrhythmia database [5]. For this dataset, we experimented with activation maps of size 256, for both plaintext and homomorphically encrypted activation maps, and we measured the model's performance in terms of training duration, test accuracy, and communication cost.

## 2 Related Work

The SL approach proposed by Gupta and Raskar [3] offers a number of significant advantages over FL. Similar to FL [10], SL does _not_ share raw data. In addition, it has the benefit of _not_ disclosing the model's architecture and weights. For example, [3] predicted that reconstructing the client's raw data while using SL would be difficult. In addition, the authors of [8] applied the SL model to healthcare applications to protect users' personal data. Vepakomma _et al._ found that SL outperforms FL in terms of accuracy [8]. Initially, SL was believed to be a promising approach for protecting clients' raw data; however, SL provides data privacy only on the grounds that nothing but intermediate activation maps is shared between the parties.
Different studies showed the possibility of privacy leakage in SL. In [7], the authors analyzed the privacy leakage of SL and found a considerable leakage from the split layer in the 2D CNN model. Furthermore, the authors mentioned that it is possible to reduce the distance correlation (a measure of dependence) between the split layer and raw data by slightly scaling the weights of all layers before the split. This type of scaling works well in models with a large number of hidden layers before the split. The work of Abuadbba _et al._ [1] is the first study exploring whether SL can deal with time-series data. It is dedicated to investigating _(i)_ whether SL can achieve the same model accuracy for a 1D CNN model as the non-split version and _(ii)_ whether it can be used to protect privacy in sequential data. According to the results, SL can be applied to such a model without degrading its classification accuracy. As for the second question, the authors showed, by proposing a privacy assessment framework, that it is possible to reconstruct the raw data (personal ECG signals) in the 1D CNN model using SL. They suggested three metrics: visual invertibility, distance correlation, and dynamic time warping. The results showed that directly adopting SL into 1D CNN models for time-series data can result in significant privacy leakage. Two mitigation techniques were employed to limit the potential privacy leakage in SL: _(i)_ increasing the number of layers before the split on the client side and _(ii)_ applying differential privacy to the split layer activation before sending the activation map to the server. However, both techniques suffer from a loss of model accuracy, particularly when differential privacy is used. The strongest differential privacy can increase the dissimilarity between the activation map and the corresponding raw data. However, _it degrades the classification accuracy significantly from 98.9% to 50%._ In [1], during the forward propagation, the client sends the activation map in plaintext to the server, where the server can easily reconstruct the original raw data from the activated vector of the split layer, leading to clear privacy leakage. In our work, we constructed a training protocol where, instead of sending plaintext activation maps, the client first encrypts them using HE and then sends said maps to the server. In this way, the server is unable to reconstruct the original raw data, but can still perform computations on the encrypted activation maps and carry out the training process.

## 3 Architecture

In this section, we first describe the non-split version, or local model, of the 1D CNN used to classify the ECG signal. Then, we discuss the process of splitting this local model into a U-shaped split model. Furthermore, we also describe the involved parties (a client and a server) in the training process of the split model, focusing on their roles and the parameters assigned to them.

### 1D CNN Local Model Architecture

We first implement and successfully reproduce the local model results [1]. This model contains two Conv1D layers and two FC layers. The optimal test accuracy that this model achieves is 98.9%. We implement a simplified version where the model has one less FC layer compared to the model from [1]. Our local model consists of all the layers shown in Figure 1 without any split between the client and the server.
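For concreteness, the following is a minimal PyTorch sketch of this simplified local model. The channel counts and kernel sizes are assumptions (the paper does not list them here), chosen so that the flattened activation map has the size 256 used in the split experiments of section 5.

```python
# A hedged sketch of the simplified local 1D CNN: two Conv1D blocks
# (Conv1D + Leaky ReLU + max pooling) followed by a single linear layer.
import torch
import torch.nn as nn

class Local1DCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),                      # 128 -> 64 timesteps
            nn.Conv1d(8, 8, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),                      # 64 -> 32 timesteps
        )
        self.classifier = nn.Linear(8 * 32, n_classes)

    def forward(self, x):                         # x: [batch, 1, 128] ECG samples
        a = self.features(x).flatten(1)           # activation map: [batch, 256]
        return self.classifier(a)

logits = Local1DCNN()(torch.randn(4, 1, 128))     # batch size 4, as in the paper
```

In the U-shaped split version described next, the convolutional front and the final softmax stay on the client, while the single linear layer runs on the server.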
As can be seen in Figure 1, we limit our model to two Conv1D layers and one linear layer, as we aim to reduce computational costs when HE is applied on activation maps in the model's split version. Reducing the number of FC layers leads to a drop in the accuracy of the model. The best test accuracy we obtained after training our local model for 10 epochs with a batch size of 4 is 92.84%. _Although reducing the number of layers affects the model's accuracy, it is not within our goals to demonstrate how successful our ML model is for this task; instead, our focus is to construct a split model where training and evaluation on encrypted data are comparable to training and evaluation on plaintext data._ In section 5, we detail the results for the non-split version and compare them with the split version.

### U-shaped Split 1D CNN Model

The split learning protocol consists of two parties: the client and the server. We split the local 1D CNN into multiple parts, where each party trains its part(s) and communicates with the others to complete the overall training procedure. More specifically, we construct the U-shaped split 1D CNN in such a way that the first few layers as well as the last layer are on the client side, while the remaining layers are on the server side.

Figure 1: U-shaped Split-Learning

**Actors in the Split Learning Model**: As mentioned earlier, in our split learning setting, we have two involved parties: the client and the server. Each party plays a specific role and has access to certain parameters. More specifically, their roles and accesses are described as follows:

* Client: In the plaintext version, the client holds two Conv1D layers and can access their weights and biases in plaintext. Other layers (Max Pooling layers, Leaky ReLU layers, Softmax layer) do not have weights and biases. Apart from these, in the HE encrypted version, the client is also responsible for generating the context for HE and has access to all context parameters (Polynomial modulus (\(\mathcal{P}\)), Coefficient modulus (\(\mathcal{C}\)), Scaling factor (\(\Delta\)), Public key (pk) and Secret key (sk)). Note that for both training on plaintext and encrypted activation maps, the raw data examples \(\mathbf{x}\)'s and their corresponding labels \(\mathbf{y}\)'s reside on the client side and are never sent to the server during the training process.
* Server: In our model, the computation performed on the server side is limited to only one linear layer. Hence, the server can exclusively access the weights and biases of this linear layer. Regarding the HE context parameters, the server has access to \(\mathcal{P}\), \(\mathcal{C}\), \(\Delta\), and the pk shared by the client, with the exception of the sk. Not holding the sk, the server cannot decrypt the HE encrypted activation maps sent from the client.

The hyperparameters shared between the client and the server are the learning rate (\(\eta\)), batch size (\(n\)), number of batches to be trained (\(N\)), and number of training epochs (\(E\)).

## 4 Split Model Training Protocols

In this section, we first present the protocol for training the U-shaped split 1D CNN on plaintext activation maps, followed by the protocol for training the U-shaped split 1D CNN on encrypted activation maps.

### Training U-shaped Split Learning with Plaintext Activation Maps

We use algorithm 1 and algorithm 2 to train the U-shaped split 1D CNN reported in subsection 3.2. First, the client and server start the socket initialization process and synchronize the hyperparameters \(\eta,n,N,E\).
They also initialize the weights (\(\mathbf{w}^{(i)}\)) and biases (\(\mathbf{b}^{(i)}\)) of their layers according to \(\Phi\). During the forward propagation phase, the client forward-propagates the input \(\mathbf{x}\) until the \(l^{th}\) layer and sends the activation \(\mathbf{a}^{(l)}\) to the server. The server continues to forward propagate and sends the output \(\mathbf{a}^{(L)}\) to the client. Next, the client applies the Softmax function on \(\mathbf{a}^{(L)}\) to get \(\mathbf{\hat{y}}\) and calculates the error \(J=\mathcal{L}(\mathbf{\hat{y}},\mathbf{y})\). The client starts the backward propagation by calculating and sending the gradient of the error w.r.t. \(\mathbf{a}^{(L)}\), i.e. \(\frac{\partial J}{\partial\mathbf{a}^{(L)}}\), to the server. The server continues the backward propagation, calculates \(\frac{\partial J}{\partial\mathbf{a}^{(l)}}\), and sends \(\frac{\partial J}{\partial\mathbf{a}^{(l)}}\) to the client. After receiving the gradients \(\frac{\partial J}{\partial\mathbf{a}^{(l)}}\) from the server, the backward propagation continues to the first hidden layer on the client side. Note that the exchange of information between client and server in these algorithms takes place in plaintext. As can be seen in algorithm 1, the client sends the activation maps \(\mathbf{a}^{(l)}\) to the server in plaintext and receives the output of the linear layer \(\mathbf{a}^{(L)}\) from the server in plaintext. The same applies on the server side: receiving \(\mathbf{a}^{(l)}\) and sending \(\mathbf{a}^{(L)}\) in plaintext, as can be seen in algorithm 2. Abuadbba _et al._ [1] showed that the exchange of plaintext activation maps between client and server using SL reveals important information regarding the client's raw sequential data. Later, in subsection 5.1, we show in detail how passing the forward activation maps from the client to the server in plaintext results in information leakage. To mitigate this privacy leakage, we propose a protocol where the client encrypts the activation maps before sending them to the server, as described in subsection 4.2.
```
Initialization:
  s ← socket initialized with port and address; s.connect
  η, n, N, E ← s.synchronize()
  {w^(i), b^(i)} for i ∈ {0..l} ← initialize using Φ
  {z^(i)}, {a^(i)} for i ∈ {0..l} ← ∅
  {∂J/∂z^(i)}, {∂J/∂a^(i)} for i ∈ {0..l} ← ∅
for e ∈ E do
  for each batch (x, y) generated from D do
    Forward propagation:
      O.zero_grad()
      a^(0) ← x
      for i ← 1 to l do
        z^(i) ← f^(i)(a^(i-1))
        a^(i) ← g^(i)(z^(i))
      end for
      s.send(a^(l))
      s.receive(a^(L))
      ŷ ← Softmax(a^(L))
      J ← L(ŷ, y)
    Backward propagation:
      Compute {∂J/∂ŷ, ∂J/∂a^(L)}
      s.send(∂J/∂a^(L))
      s.receive(∂J/∂a^(l))
      for i ← 1 to l do
        Compute {∂J/∂w^(i), ∂J/∂b^(i)}
        Update w^(i), b^(i)
      end for
  end for
end for
```
**Algorithm 1**: Client Side

### Training U-shaped Split 1D CNN with Encrypted Activation Maps

The protocol for training the U-shaped 1D CNN with a homomorphically encrypted activation map consists of four phases: initialization, forward propagation, classification, and backward propagation. The initialization phase only takes place once at the beginning of the procedure, whereas the other phases continue until the model iterates through all epochs. Each of these phases is described in detail in the following subsections.

**Initialization**: The initialization phase consists of socket initialization, context generation, and random weight loading. The client first establishes a socket connection to the server and synchronizes the four hyperparameters \(\eta,\ n,\ N,E\) with the server, as shown in algorithm 3 and algorithm 4. These parameters must be synchronized on both sides so that both parties train in the same way. Also, the weights on the client and server are initialized with the same set of corresponding weights in the local model to accurately assess and compare the influence of SL on performance. On both the client and the server sides, \(\mathbf{w}^{(i)}\) are initialized using corresponding parts of \(\Phi\). The activation map at layer \(i\) (\(\mathbf{a}^{(i)}\)), the output tensor of a Conv1D layer (\(\mathbf{z}^{(i)}\)), and the gradients are initially set to zero. In this phase, the generated context is a specific object that holds the encryption keys pk and sk of the HE scheme as well as certain additional parameters like \(\mathcal{P}\), \(\mathcal{C}\) and \(\Delta\).
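For illustration, the context-generation step might look as follows with the TenSEAL library used in section 5; the CKKS scheme and the parameter values shown are taken from one of the configurations reported in Table 1, and the variable names are placeholders.

```python
# A sketch of HE context generation, assuming TenSEAL's CKKS scheme.
import tenseal as ts

ctx_pri = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=4096,          # polynomial modulus P
    coeff_mod_bit_sizes=[40, 20, 20],  # coefficient modulus C (bit sizes)
)
ctx_pri.global_scale = 2 ** 21         # scaling factor Delta
ctx_pri.generate_galois_keys()         # needed for vector-matrix products

# The public context carries the same parameters and pk but drops sk,
# so the server cannot decrypt what it receives.
ctx_pub = ctx_pri.copy()
ctx_pub.make_context_public()
payload = ctx_pub.serialize()          # sent to the server over the socket
```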
```
Initialization:
  s ← socket initialized with port and address; s.connect
  η, n, N, E ← s.synchronize()
  {w^(i), b^(i)} for i ∈ {l+1..L} ← initialize using Φ
  {z^(i)} for i ∈ {l+1..L} ← ∅
  {∂J/∂z^(i)} for i ∈ {l+1..L} ← ∅
for e ∈ E do
  for i ← 1 to N do
    Forward propagation:
      O.zero_grad()
      s.receive(a^(l))
      a^(L) ← f^(L)(a^(l))
      s.send(a^(L))
    Backward propagation:
      s.receive(∂J/∂a^(L))
      Compute {∂J/∂w^(L), ∂J/∂b^(L)}
      Update w^(L), b^(L)
      Compute ∂J/∂a^(l)
      s.send(∂J/∂a^(l))
  end for
end for
```
**Algorithm 2**: Server Side

Further information on the HE parameters and how to choose the best-suited parameters can be found in TenSEAL's benchmarks tutorial1. As shown in algorithm 3 and algorithm 4, the context is either public (\(\texttt{ctx}_{\texttt{pub}}\)) or private (\(\texttt{ctx}_{\texttt{pri}}\)), depending on whether it holds the secret key sk. Both \(\texttt{ctx}_{\texttt{pub}}\) and \(\texttt{ctx}_{\texttt{pri}}\) have the same parameters, though \(\texttt{ctx}_{\texttt{pri}}\) holds a sk and \(\texttt{ctx}_{\texttt{pub}}\) does not. The server does not have access to the sk, as the client only shares \(\texttt{ctx}_{\texttt{pub}}\) with the server. After completing the initialization phase, both the client and server proceed to the forward and backward propagation phases.

Footnote 1: [https://bit.ly/3XKPSByN](https://bit.ly/3XKPSByN)

**Forward propagation**: The forward propagation starts on the client side. The client first zeroes out the gradients for the batch of data \((\mathbf{x},\mathbf{y})\). It then begins calculating the activation maps \(\mathbf{a}^{(l)}\) from \(\mathbf{x}\), as can be seen in algorithm 3, where each \(f^{(i)}\) is a Conv1D layer. The Conv1D layer can be described as follows: given a 1D input signal that contains \(C\) channels, where each channel \(\mathbf{x}_{(i)}\) is a 1D array (\(i\in\{1,\ldots,C\}\)), a Conv1D layer produces an output that contains \(C^{\prime}\) channels. The \(j^{th}\) output channel \(\mathbf{y}_{(j)}\), where \(j\in\{1,\ldots,C^{\prime}\}\), can be described as2 \[\mathbf{y}_{(j)}=\mathbf{b}_{(j)}+\sum_{i=1}^{C}\mathbf{w}_{(i)}\star\mathbf{x}_{(i)}, \tag{1}\] where \(\mathbf{w}_{(i)},i\in\{1,\ldots,C\}\) are the weights, \(\mathbf{b}_{(j)}\) are the biases of the Conv1D layer, and \(\star\) is the 1D cross-correlation operation. The \(\star\) operation can be described as \[\mathbf{z}(i)=(\mathbf{w}\star\mathbf{x})(i)=\sum_{j=0}^{m-1}\mathbf{w}(j)\cdot\mathbf{x}(i+j), \tag{2}\] where \(\mathbf{z}(i)\) denotes the \(i^{th}\) element of the output vector \(\mathbf{z}\), and \(i\) starts at \(0\). Here, the size of the 1D weight kernel is \(m\). In algorithm 3, \(g^{(i)}\) can be seen as the combination of Max Pooling and Leaky ReLU functions. The final output activation map of the \(l^{th}\) layer from the client is \(\mathbf{a}^{(l)}\).

Footnote 2: [https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html)
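As a quick illustration of Eq. (2), the snippet below computes the 1D cross-correlation directly in NumPy; PyTorch's Conv1d evaluates the same quantity (plus bias) for every input/output channel pair, as in Eq. (1).

```python
# Direct transcription of Eq. (2): z(i) = sum_j w(j) * x(i + j).
import numpy as np

def cross_correlate_1d(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    m, n = len(w), len(x)
    # Output length is n - m + 1 for a kernel of size m with no padding.
    return np.array([np.dot(w, x[i:i + m]) for i in range(n - m + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])
print(cross_correlate_1d(w, x))  # [-2. -2. -2.]
```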
The client then homomorphically encrypts \(\mathbf{a}^{(l)}\) and sends the encrypted activation maps \(\overline{\mathbf{a}^{(l)}}\) to the server. In algorithm 4, the server receives \(\overline{\mathbf{a}^{(l)}}\) and then performs forward propagation, which is a linear layer evaluated on the HE encrypted data \(\overline{\mathbf{a}^{(l)}}\) as \[\overline{\mathbf{a}^{(L)}}=\overline{\mathbf{a}^{(l)}}\mathbf{w}^{(L)}+\mathbf{b}^{(L)}. \tag{3}\] After that, the server sends \(\overline{\mathbf{a}^{(L)}}\) to the client (algorithm 4). Upon reception, the client decrypts \(\overline{\mathbf{a}^{(L)}}\) to get \(\mathbf{a}^{(L)}\), performs Softmax on \(\mathbf{a}^{(L)}\) to produce the predicted output \(\mathbf{\hat{y}}\), and calculates the loss \(J\), as can be seen in algorithm 3. Having finished the forward propagation, we move on to the backward propagation part of the protocol.

**Backward propagation**: After calculating the loss \(J\), the client starts the backward propagation by initially computing \(\frac{\partial J}{\partial\mathbf{\hat{y}}}\) and then \(\frac{\partial J}{\partial\mathbf{a}^{(L)}}\) and \(\frac{\partial J}{\partial\boldsymbol{w}^{(L)}}\) using the chain rule (algorithm 3). Specifically, the client calculates \[\frac{\partial J}{\partial\mathbf{a}^{(L)}}=\frac{\partial J}{\partial\mathbf{\hat{y}}}\frac{\partial\mathbf{\hat{y}}}{\partial\mathbf{a}^{(L)}},\text{ and} \tag{4}\] \[\frac{\partial J}{\partial\mathbf{w}^{(L)}}=\frac{\partial J}{\partial\mathbf{a}^{(L)}}\frac{\partial\mathbf{a}^{(L)}}{\partial\mathbf{w}^{(L)}}. \tag{5}\] The client then sends \(\frac{\partial J}{\partial\mathbf{a}^{(L)}}\) and \(\frac{\partial J}{\partial\boldsymbol{w}^{(L)}}\) to the server. Upon reception, the server computes \(\frac{\partial J}{\partial\boldsymbol{b}^{(L)}}\) by simply setting \(\frac{\partial J}{\partial\boldsymbol{b}^{(L)}}=\frac{\partial J}{\partial\mathbf{a}^{(L)}}\), based on equation (3). The server then updates the weights and biases of its linear layer according to equation (6): \[\boldsymbol{w}^{(L)}=\boldsymbol{w}^{(L)}-\eta\frac{\partial J}{\partial\boldsymbol{w}^{(L)}},\ \ \ \boldsymbol{b}^{(L)}=\boldsymbol{b}^{(L)}-\eta\frac{\partial J}{\partial\boldsymbol{b}^{(L)}}. \tag{6}\] Next, the server calculates \[\frac{\partial J}{\partial\mathbf{a}^{(l)}}=\frac{\partial J}{\partial\mathbf{a}^{(L)}}\frac{\partial\mathbf{a}^{(L)}}{\partial\mathbf{a}^{(l)}}, \tag{7}\] and sends \(\frac{\partial J}{\partial\mathbf{a}^{(l)}}\) to the client. After receiving \(\frac{\partial J}{\partial\mathbf{a}^{(l)}}\), the client calculates the gradients of \(J\) with respect to the weights and biases of the Conv1D layers using the chain rule, which can generally be described as \[\frac{\partial J}{\partial\boldsymbol{w}^{(i-1)}}=\frac{\partial J}{\partial\boldsymbol{w}^{(i)}}\frac{\partial\boldsymbol{w}^{(i)}}{\partial\boldsymbol{w}^{(i-1)}}, \tag{8}\] \[\frac{\partial J}{\partial\boldsymbol{b}^{(i-1)}}=\frac{\partial J}{\partial\boldsymbol{b}^{(i)}}\frac{\partial\boldsymbol{b}^{(i)}}{\partial\boldsymbol{b}^{(i-1)}}. \tag{9}\] Finally, after calculating the gradients \(\frac{\partial J}{\partial\boldsymbol{w}^{(i)}},\ \frac{\partial J}{\partial\boldsymbol{b}^{(i)}}\), the client updates \(\boldsymbol{w}^{(i)}\) and \(\boldsymbol{b}^{(i)}\) using the Adam optimization algorithm [4].
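For illustration, the encrypted forward step of Eq. (3) might look as follows with TenSEAL. The shapes follow the paper (a 256-dimensional activation map and 5 classes), `ctx_pri` is the private context from the initialization sketch above, and the random tensors are placeholders for the real activations and server weights.

```python
# A hedged sketch of Eq. (3): the server evaluates its linear layer directly
# on the encrypted activation map; only the client can decrypt the result.
import numpy as np
import tenseal as ts

a_l = np.random.randn(256)                     # client's activation map a^(l)
enc_a = ts.ckks_vector(ctx_pri, a_l.tolist())  # encrypted activation map

# Server side: its weights and bias stay in plaintext and are applied
# homomorphically to the ciphertext.
W = np.random.randn(256, 5)
b = np.random.randn(5)
enc_out = enc_a.mm(W.tolist()) + b.tolist()    # encrypted a^(L)

# Client side: decrypt with sk, then apply the softmax to get predictions.
a_L = np.array(enc_out.decrypt())
y_hat = np.exp(a_L) / np.exp(a_L).sum()
```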
Note that in the backward pass, by sending both \(\frac{\partial J}{\partial\mathbf{a}^{(L)}}\) and \(\frac{\partial J}{\partial\boldsymbol{w}^{(L)}}\) to the server, we help the server keep its parameters in plaintext and prevent the multiplicative depth of the HE computation from growing out of bounds; however, this leads to some privacy leakage about the activation maps.

```
Context Initialization:
  ctx_pri ← P, C, Δ, pk, sk
  ctx_pub ← P, C, Δ, pk
  s.send(ctx_pub)
for e ∈ E do
  for each batch (x, y) generated from D do
    Forward propagation:
      O.zero_grad()
      a^(0) ← x
      for i ← 1 to l do
        z^(i) ← f^(i)(a^(i-1))
        a^(i) ← g^(i)(z^(i))
      end for
      ā^(l) ← HE.Enc(pk, a^(l))
      s.send(ā^(l))
      s.receive(ā^(L))
      a^(L) ← HE.Dec(sk, ā^(L))
      ŷ ← Softmax(a^(L))
      J ← L(ŷ, y)
    Backward propagation:
      Compute {∂J/∂ŷ, ∂J/∂a^(L), ∂J/∂w^(L)}
      s.send(∂J/∂a^(L), ∂J/∂w^(L))
      s.receive(∂J/∂a^(l))
      for i ← 1 to l do
        Compute {∂J/∂w^(i), ∂J/∂b^(i)}
        Update w^(i), b^(i)
      end for
  end for
end for
```
**Algorithm 3**: Client Side

## 5 Performance Analysis

We evaluate our method on the MIT-BIH dataset [5].

**MIT-BIH**: We use the pre-processed dataset from [1], which is based on the MIT-BIH arrhythmia (abnormal heart rhythm) database [5]. The processed dataset contains 26,490 heartbeat samples that belong to 5 different types: N (normal beat), L (left bundle branch block), R (right bundle branch block), A (atrial premature contraction), and V (ventricular premature contraction). An example heartbeat of each class is visualized in Figure 2. To train our network, the dataset is split into train and test sets according to [1]. This results in train and test splits that are matrices of size \([13245,1,128]\): each contains 13,245 ECG samples, where each sample has one channel and 128 timesteps.

**Experimental Setup**: All neural networks are trained on a machine with Ubuntu 20.04 LTS, an Intel Core i7-8700 CPU at 3.20GHz, 32GB of RAM, and a GeForce GTX 1070 Ti GPU with 8GB of memory. We write our program in Python version 3.9.7. The neural nets are constructed using the PyTorch library version 1.8.1+cu102. For HE algorithms, we employ the TenSEAL library version 0.3.10. We perform our experiments in the localhost setting. In terms of hyperparameters, we train all networks for 10 epochs with learning rate \(\eta=0.001\) and training batch size \(n=4\).
For the split neural network with HE activation maps, we use the Adam optimizer for the client model and mini-batch gradient descent for the server. We use the GPU for networks trained on plaintext. For the U-shaped SL model on HE activation maps, we train the client model on the GPU and the server model on the CPU.

### Evaluation

In this section, we report the experimental results in terms of accuracy, training duration, and communication throughput. We measure the accuracy of the neural nets on the plaintext test set after the training processes are completed. The 1D CNN models used on the MIT-BIH dataset have two Conv1D layers and one linear layer. The activation maps are the output of the last Conv1D layer. We experiment with activation maps of size \([\text{batch size},256]\) for the MIT-BIH dataset. We denote the 1D CNN model with an activation map sized \([\text{batch size},256]\) as \(M_{1}\).

**Training Locally**: Results when training \(M_{1}\) locally on the MIT-BIH plaintext dataset are shown in Figure 3. The neural network learns quickly and is able to decrease the loss drastically from epochs 1 to 5. From epochs 6-10, the loss begins to plateau. After training for 10 epochs, we test the trained neural network on the test dataset and get 88.06% accuracy. Training the model locally on plaintext takes 4.8 sec per epoch on average.

**U-shaped Split Learning using Plaintext Activation Maps**: Our experiments show that training the U-shaped split model on plaintext (reported in section 3.2) produces the same results in terms of accuracy compared to local training for model \(M_{1}\). This result is similar to the findings of [1]. Even though the authors of [1] only used the vanilla version of the split model, they too found that, compared to training locally, accuracy was not reduced. We will now discuss the training time and communication overhead of the U-shaped split models and compare them to their local versions. For the split version of \(M_{1}\), each training epoch takes 8.56 seconds on average; local training thus takes 43.9% less time. The U-shaped split models take longer to train due to the communication between the client and the server. The communication cost for one epoch of training split \(M_{1}\) is 33.06 Mb.

Figure 3: Results when training locally on the plaintext MIT-BIH dataset with activation maps of size \([\text{batch size},256]\).

Figure 2: Heartbeats from the processed ECG dataset.

**Visual Invertibility**: In the SL model, the activation maps are sent from client to server to continue the training process. A visual representation of the activation maps reveals a high similarity between certain activation maps and the input data from the client, as demonstrated in Figure 4 for the models trained on the MIT-BIH dataset. The figure indicates that, compared to the raw input data from the client (the first row of Figure 4), some activation maps (as plotted in the second row of Figure 4) have exceedingly similar patterns. This phenomenon clearly compromises the privacy of the client's raw data. The authors of [1] quantify the privacy leakage by measuring the correlations between the activation maps and the raw input signal using two metrics: distance correlation and Dynamic Time Warping. This approach allows them to measure whether their solutions for mitigating privacy leakage work. Since our work uses HE, said metrics are unnecessary, as the activation maps are encrypted.
**U-shaped Split 1D CNN with Homomorphically Encrypted Activation Maps**: We train the split neural network \(M_{1}\) on the MIT-BIH dataset using encrypted activation maps according to subsection 4.2. To encrypt the activation maps on the client side (i.e. before sending them to the server), we experiment with five different sets of HE parameter combinations for model \(M_{1}\). Table 1 shows the results in terms of training time, testing accuracy, and communication overhead for the neural networks with different configurations. For the U-shaped SL version on plaintext, we captured all communication between client and server. For training split models on encrypted activation maps, we approximate the communication overhead for one training epoch by averaging the communication over the first ten training batches and multiplying by the total number of training batches. For the \(M_{1}\) model, the best test accuracy was 85.41%, obtained when using the HE parameters with polynomial modulus \(\mathcal{P}=4096\), coefficient modulus \(\mathcal{C}=[40,20,20]\), and scale \(\Delta=2^{21}\). The accuracy drop was 2.65% compared to training the same network on plaintext. This set of parameters achieves higher accuracy compared to the bigger sets of parameters with \(\mathcal{P}=8192\), while requiring much lower training time and communication overhead. The result when using the first set of parameters with \(\mathcal{P}=8192\) is close (85.31%), but with a much longer training time (3.67 times longer) and communication overhead (8.43 times higher). Our experiments show that training on encrypted activation maps can produce promising results, with accuracy dropping by only 2-3% for the best sets of HE parameters. The sets of parameters with \(\mathcal{P}=8192\) achieve the second-highest test accuracy, though incurring the highest communication overhead and the longest training time. The set of parameters with \(\mathcal{P}=4096\) offers a good trade-off, as it produces accuracy on par with \(\mathcal{P}=8192\) while requiring significantly less communication and training time. Experimental results show that the smallest set of HE parameters, \(\mathcal{P}=2048\), \(\mathcal{C}=[18,18,18]\), \(\Delta=2^{16}\), requires the least amount of communication and training time.

## 6 Conclusion

This paper focused on how to train ML models in a privacy-preserving way using a combination of split learning - a promising machine learning method - and homomorphic encryption. We constructed protocols by which a client and a server can collaboratively train a model without revealing significant information about the raw data. As far as we are aware, this is the first time split learning has been used on encrypted data.

Figure 4: Top: client input data. Bottom: one of the output channels from the \(M_{1}\) model's second convolution layer.
2310.11568
The mechanical radius of the proton
We present the first determination of the proton mechanical radius. The result was obtained by employing a novel theoretical approach that connects experimental data of deeply virtual Compton scattering with the spin = 2 interaction that is characteristic of gravity coupling with matter. We find that the proton mechanical radius is significantly smaller than its charge radius, consistent with the latest Lattice QCD computation.
V. D. Burkert, L. Elouadrhiri, F. X. Girod
2023-10-17T20:30:12Z
http://arxiv.org/abs/2310.11568v2
# The mechanical radius of the proton

###### Abstract

We present the first determination of the proton's mechanical radius. The result was obtained by employing a novel theoretical approach, which connects experimental data of deeply virtual Compton scattering with the spin \(J=2\) interaction that is characteristic of gravity coupling with matter. We find that the proton's mechanical radius is significantly smaller than its charge radius, consistent with the latest Lattice QCD computation.

Historically, the proton's size has been studied in electromagnetic interactions using electron beams. The first direct measurement of the proton's finite size through its charge radius was achieved in 1955 by R. Hofstadter using elastic electron-proton scattering [1]. For the very precise latest results on the proton's charge radius, see the 2022 edition of the Review of Particle Physics [2]. In contrast to the electromagnetic properties, the internal mechanical properties of the proton are essentially unknown: although theoretical work on the foundations had already begun in the 1960s [3; 4], the subject remained dormant for over three decades, as no practical way could be devised to experimentally probe these properties. The mechanical properties are related to the proton's interaction with gravity and are encoded in the gravitational form factors (GFFs) of the proton's matrix element of the symmetric energy-momentum tensor (EMT) [5]. The GFFs cannot be measured directly because of our inability to design an experimental setup of matter beams scattered off proton targets involving the exchange of gravitons with the required properties. For a recent colloquial review of the GFFs, see ref. [6]. Theoretical developments near the beginning of the new millennium have shown that the GFFs can be probed indirectly using processes that involve angular momentum \(J=2\) interactions to mimic gravity [7]. This is achieved in various deeply inelastic exclusive processes, among which deeply virtual Compton scattering (DVCS) is the experimentally most accessible one [8; 9]. DVCS allows for the extraction of the internal proton structure expressed in the generalized parton distributions (GPDs) [5; 10] and enables the exploration of its mechanical properties [11], including its mechanical size. The basic process to get access to GPDs is deeply virtual Compton scattering (DVCS), illustrated in Fig. 1. In addition to the recoil proton (\(p^{\prime}\)), a high-energy photon is emitted in the final state: \[\vec{e}+p\to e^{\prime}+p^{\prime}+\gamma\,, \tag{1}\] where the arrow over the initial-state electron (\(e\)) indicates that the electron beam is spin-polarized and scattered off the target proton (\(p\)). It involves the interaction of two spin \(J_{\gamma}=1\) photons with the proton, which mimics the spin \(J_{G}=2\) interaction that is characteristic of gravity interacting with matter. What makes this process measurable is that DVCS involves the electromagnetic coupling constant \(\alpha_{em}\) rather than the many orders of magnitude weaker gravitational coupling. The mean square mechanical radius can be expressed as [12] \[\langle r^{2}\rangle_{\rm mech}=6\frac{D(t=0)}{\int_{-\infty}^{0}D(t)dt},\] where \(D(t)\), the so-called "Druck" term, is the gravitational form factor encoding the shear forces and pressure distribution in the proton.
It is related to the GPD \(H(x,\xi,t)\), with \(x\) being the quark momentum fraction, \(\xi\) the longitudinal momentum fraction transferred to the struck quark, and \(t\) the 4-momentum transfer to the proton. The D-term is the last unknown global property of the proton that, until recently, has remained unconstrained. In our previous work \(D(t)\) was determined in a range in \(-t\), and was used to estimate the distribution of pressure [13] and shear stress [14] inside the proton. In the present work we determine the mechanical radius of the proton employing the form factor \(D^{q}(t)\), where the superscript indicates that it refers to the quark contribution to the proton's mechanical size.

Figure 1: Left: The hypothetical graviton-proton interaction to probe the mechanical properties. Right: The graviton-proton interaction is mimicked with the \(J_{\gamma\gamma}=2\) photon vertices in the leading diagram in DVCS. The integrated quark propagator (shaded ellipse) contains \(J_{\gamma\gamma}=2\) as the leading component.

We briefly summarize the steps involved in this process. The proton's 3-dimensional quark structure is probed in deeply virtual Compton scattering (DVCS), a process where an electron exchanges a (virtual) photon with a quark in the proton that subsequently emits a high-energy real photon. All particles involved in the process, the scattered electron, the emitted high-energy photon, and the recoil proton, are measured in particle detectors in time coincidence. The basic process in the leading-twist approximation is the handbag diagram shown in Fig. 1. The two high-energy photons, each with spin \(J_{\gamma}=1\), couple to the same quark and contain the leading \(J_{\gamma^{*}\gamma}=2\) contribution, equivalent to the coupling of a graviton of spin \(J_{G}=2\) to the proton. As the electromagnetic coupling to quarks is many orders of magnitude stronger than gravity, we can employ the DVCS process to probe the gravitational properties of the proton experimentally.

The DVCS process on the proton is described in leading twist by 3 chiral-even GPDs, of which \(H(x,\xi,t)\) is the most relevant in this study, where \(x\) is the momentum fraction of the struck quark, \(\xi\) is the longitudinal momentum fraction transferred to the struck quark, and \(t\) is the 4-momentum transfer to the proton. At sufficiently high energies, the process factorizes into the coupling of the virtual and real photon to the active quark, and into the non-perturbative part described by the GPDs. For DVCS off the proton, the GPD \(H(x,\xi,t)\) dominates the process, while other contributions are expected to be smaller and in part kinematically suppressed. \(H(x,\xi,t)\) is directly mapped to the gravitational form factors \(D(t)\) and \(M_{2}(t)\) through a sum rule [8] involving its second Mellin moment:

\[\int\mathrm{d}x\,xH(x,\xi,t)\ =\ M_{2}(t)+\xi^{2}D(t), \tag{2}\]

where the GFF \(D(t)\) encodes the distribution of shear forces on the quarks and the pressure distribution in the proton. Ideally, one would determine the integral by measuring \(H(x,\xi,t)\) in the entire \(x\) and \(\xi\) space for different values of \(t\). However, in DVCS experiments such an approach is impractical, as \(H(x,\xi,t)\) is not directly accessible in the full \(x\)-space, but only at the values \(x=\pm\xi\).
We therefore employ a more phenomenological approach and express \(H(x,\xi,t)\) in terms of the Compton Form Factor \(\mathcal{H}(\xi,t)\) through the convolution integral defined as

\[\mathrm{Re}\mathcal{H}(\xi,t)+i\mathrm{Im}\mathcal{H}(\xi,t)=\int_{0}^{1}dx\left[\frac{1}{\xi-x-i\epsilon}-\frac{1}{\xi+x-i\epsilon}\right]H(x,\xi,t), \tag{3}\]

where the real function of 3 parameters \(H(x,\xi,t)\) is replaced with the complex function of 2 parameters, \(\mathrm{Re}\mathcal{H}(\xi,t)\) and \(\mathrm{Im}\mathcal{H}(\xi,t)\). The Compton Form Factors are directly related to the observables we can experimentally determine in DVCS measurements. In order to obtain theoretically sound expressions for the \(\xi\) dependence of \(\mathrm{Re}\mathcal{H}(\xi,t)\) and \(\mathrm{Im}\mathcal{H}(\xi,t)\), we use a recently developed parameterization [15]. This adds some model dependence to the extraction procedure, which we account for in the systematic uncertainties of the fit results. The imaginary and real parts of \(\mathcal{H}(\xi,t)\) are extracted by fitting the parameterization to the experimentally measured beam-spin asymmetry data [16] and the unpolarized cross section data [17]. Both parts are related through a subtracted dispersion relation [18; 19; 20] at fixed \(t\), where \(D(t)\) appears as the subtraction term:

\[\mathrm{Re}\mathcal{H}(\xi,t)\stackrel{{\mathrm{LO}}}{{=}}D(t)+\frac{1}{\pi}\mathcal{P}\int_{0}^{1}dx\left[\frac{1}{\xi-x}-\frac{1}{\xi+x}\right]\mathrm{Im}\mathcal{H}(\xi,t).\]

From the dispersion relation we can then determine \(D(t)\) for each value of \(\xi\). The subtraction term \(D(t)\) is directly related to the gravitational form factor we seek to determine. It encodes the mechanical properties of the proton. In our previous paper we used a multipole-form parameterization for the \(D(t)\) form factor and fit it, together with the parameterization describing the Compton Form Factors, to the data. In Fig. 2 we display the results of the \(D(t)\) form factor extraction and fit it to the multipole form:

\[D(t)\ =\ D\biggl{[}1+\frac{-t}{M^{2}}\biggr{]}^{-\alpha}, \tag{4}\]

where \(D\), \(\alpha\) and \(M^{2}\) are the fit parameters.

Figure 2: The form factor \(D(t)\) as determined in the fit to the DVCS data. The hatched area represents the systematic uncertainties.

Employing the form in (4) we obtain the mean square mechanical radius of the proton as given by the relation

\[\langle r_{p}^{2}\rangle_{\rm mech}=6(\alpha-1)/M^{2}. \tag{5}\]

Note that the radius does not depend on the value of \(D\) but only on \(\alpha\) and \(M^{2}\). For a physical result, i.e. \(\langle r_{p}^{2}\rangle_{\rm mech}>0\), \(D(t)\) must drop faster with \(-t\) than a monopole form, i.e. \(\alpha>1\). The fit yields the parameters:

\[D = -1.46\pm 0.24 \tag{6}\]
\[M^{2} = +1.02\pm 0.13\ {\rm GeV}^{2} \tag{7}\]
\[\alpha = +2.76\pm 0.23 \tag{8}\]

Using eqn. (5), the following result for the mechanical proton radius is obtained:

\[\langle r_{p}^{2}\rangle_{\rm mech} = 0.402\pm 0.072\ {\rm fm}^{2} \tag{9}\]
\[\sqrt{\langle r_{p}^{2}\rangle_{\rm mech}} = 0.634\pm 0.057\ {\rm fm} \tag{10}\]

Within the uncertainties, the fitted value of \(\alpha=2.76\pm 0.23\) is consistent with a tripole behavior of \(D(t)\). For comparison we also quote the proton's charge radius as listed in the 2022 Review of Particle Physics [2]. The resulting mechanical radius of the proton is significantly smaller, by about 25%, than the proton's charge radius in (11).
\[\sqrt{\langle r_{p}^{2}\rangle_{\rm charge}}\ =\ 0.8409\pm 0.0004\ {\rm fm} \tag{11}\]

The large difference in magnitude between the proton's charge radius and its mechanical radius may at first glance be surprising. However, it should be noted that there is an important distinction between the way the charge radius and the mechanical radius are determined. The charge radius is defined as the slope of the elastic electric form factor \(G_{E}^{p}(t)\) at \(t=0\), i.e. it is probed at _large distances_ from the proton's center. The mechanical size is determined in a hard-scattering DVCS process and involves the _short distance_ interactions inside the proton. This difference between the two concepts is reflected in the definition of the mechanical radius, which includes an integration over the entire \(t\)-dependence of \(D(t)\), i.e. it incorporates the entire spatial distribution of pressure and forces in the proton.

The difference between the two concepts becomes even more apparent when comparing the sizes of the proton and of the neutron. The mean square _charge_ radius of the neutron is much smaller in magnitude than that of the proton, and carries a negative sign [2]:

\[\langle r_{n}^{2}\rangle_{\rm charge}=-0.1161\pm 0.0022\ {\rm fm}^{2},\]

where the subscript "\(n\)" denotes the neutron. This result could be interpreted as the neutron's charge radius being much smaller than that of the proton, likely due to the very different charge distribution inside the overall zero-charge neutron. It is then evident that the neutron's charge radius bears no relationship to the neutron's physical size. In contrast, the _mechanical_ size of the neutron is expected to be the same as that of the proton, with only minor differences possible from isospin-breaking effects [12]. This is a consequence of the force and pressure distributions of the quarks being generated by the strong interaction, which is agnostic to the electric charge, the main difference between the proton and the neutron. The _charge_ radius of the proton thus has a fundamentally different meaning from the proton's _mechanical_ radius. The latter is close to what one may characterize as the physical size of the proton.

Our result represents the first experimental determination of the quark mechanical radius of the proton using the DVCS process and its relation to the GFFs. The most recent state-of-the-art Lattice QCD calculation of the quark contributions to the mechanical radius of the proton [21] agrees remarkably well with our result, as shown in Fig. 3. We anticipate that these results will stimulate further discussions on the proper meaning of the proton radius, and on what values should be used as input to model calculations of nuclear properties, especially at high pressure, such as in the cores of neutron stars.

Figure 3: Mechanical radius of the proton's quark content from experiment and from Lattice QCD, in comparison to its charge radius.

It is pertinent to mention that new data on the DVCS-BH beam-spin asymmetry have been taken in experiments that measured the DVCS process with a significantly expanded kinematic scope, employing a beam energy of 10.6 GeV and providing significantly enhanced statistical precision [22]. These concerted efforts hold the promise of substantially diminishing the uncertainties inherent in deriving the mechanical properties of both the proton and the neutron, including their mechanical radii.
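As a quick arithmetic cross-check of Eqs. (5)-(10), the short Python sketch below recomputes the radius from the quoted fit parameters, propagating the errors on \(\alpha\) and \(M^{2}\) under the simplifying assumption that they are uncorrelated (the published analysis has access to the full fit covariance, which we do not use here).

```python
import numpy as np

HBARC = 0.197327  # GeV*fm conversion constant

alpha, sig_alpha = 2.76, 0.23       # Eq. (8)
M2, sig_M2 = 1.02, 0.13             # Eq. (7), GeV^2

# Eq. (5): <r^2>_mech = 6(alpha - 1)/M^2, in GeV^-2
r2 = 6.0 * (alpha - 1.0) / M2

# Error propagation assuming uncorrelated fit parameters (an assumption)
sig_r2 = np.hypot(6.0 / M2 * sig_alpha,
                  6.0 * (alpha - 1.0) / M2**2 * sig_M2)

r2_fm, sig_r2_fm = r2 * HBARC**2, sig_r2 * HBARC**2
print(f"<r^2>_mech = {r2_fm:.3f} +/- {sig_r2_fm:.3f} fm^2")  # ~0.403 +/- 0.074
print(f"radius     = {np.sqrt(r2_fm):.3f} fm")               # ~0.635 fm
```

The output agrees with Eqs. (9) and (10) up to small differences attributable to rounding and the neglected parameter correlations.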
The innovative approach used in this analysis not only advances our understanding of the proton's fundamental characteristics but also opens a new avenue for studying the gravitational structure of nucleons as well as of other hadrons and nuclei, both in their ground and excited states. The study of the gravitational structure of hadrons stands as a pillar within the 2023 Nuclear Science Advisory Committee (NSAC) Long Range Plan, underscoring its fundamental importance in shaping the future of nuclear science.

**Acknowledgement** We are thankful to Yoshitaka Hatta for helpful comments regarding the interpretation of the results. Special thanks go to Joanna Griffin for preparing Fig. 1. The material discussed in this article is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
2308.13135
Nonparametric Additive Value Functions: Interpretable Reinforcement Learning with an Application to Surgical Recovery
We propose a nonparametric additive model for estimating interpretable value functions in reinforcement learning. Learning effective adaptive clinical interventions that rely on digital phenotyping features is a major concern for medical practitioners. With respect to spine surgery, different post-operative recovery recommendations concerning patient mobilization can lead to significant variation in patient recovery. While reinforcement learning has achieved widespread success in domains such as games, recent methods heavily rely on black-box methods, such as neural networks. Unfortunately, these methods hinder the ability to examine the contribution each feature makes in producing the final suggested decision. While such interpretations are easily provided in classical algorithms such as Least Squares Policy Iteration, basic linearity assumptions prevent learning higher-order flexible interactions between features. In this paper, we present a novel method that offers a flexible technique for estimating action-value functions without making explicit parametric assumptions regarding their additive functional form. This nonparametric estimation strategy relies on incorporating local kernel regression and basis expansion to obtain a sparse, additive representation of the action-value function. Under this approach, we are able to locally approximate the action-value function and retrieve the nonlinear, independent contribution of select features as well as joint feature pairs. We validate the proposed approach with a simulation study, and, in an application to spine disease, uncover recovery recommendations that are in line with related clinical knowledge.
Patrick Emedom-Nnamdi, Timothy R. Smith, Jukka-Pekka Onnela, Junwei Lu
2023-08-25T02:05:51Z
http://arxiv.org/abs/2308.13135v1
# Nonparametric Additive Value Functions: Interpretable Reinforcement Learning with an Application to Surgical Recovery

###### Abstract

We propose a nonparametric additive model for estimating interpretable value functions in reinforcement learning. Learning effective adaptive clinical interventions that rely on digital phenotyping features is a major concern for medical practitioners. With respect to spine surgery, different post-operative recovery recommendations concerning patient mobilization can lead to significant variation in patient recovery. While reinforcement learning has achieved widespread success in domains such as games, recent methods heavily rely on black-box methods, such as neural networks. Unfortunately, these methods hinder the ability to examine the contribution each feature makes in producing the final suggested decision. While such interpretations are easily provided in classical algorithms such as Least Squares Policy Iteration, basic linearity assumptions prevent learning higher-order flexible interactions between features. In this paper, we present a novel method that offers a flexible technique for estimating action-value functions without making explicit parametric assumptions regarding their additive functional form. This nonparametric estimation strategy relies on incorporating local kernel regression and basis expansion to obtain a sparse, additive representation of the action-value function. Under this approach, we are able to locally approximate the action-value function and retrieve the nonlinear, independent contribution of select features as well as joint feature pairs. We validate the proposed approach with a simulation study, and, in an application to spine disease, uncover recovery recommendations that are in line with related clinical knowledge.

## 1 Introduction

The design and widespread usage of modern smartphones and wearables have facilitated real-time and consistent access to data concerning human behavior and health (Torous, Staples, and Onnela, 2015; Onnela, 2020). Digital phenotyping data offers a resource for clinicians interested in learning improved guidelines and recommendations for patients recovering from clinical or surgical procedures (Cote et al., 2019; Panda et al., 2020). Currently, a patient's quality of life after a treatment or surgery is largely inferred from in-person follow-up visits and infrequent electronic surveys. These methods of evaluation are severely limited due to their reliance on patient recall and their inability to capture the temporal evolution of a patient's recovery. By combining the utility of digital phenotyping and novel statistical machine learning techniques, clinical practitioners are afforded a new paradigm for discovering improved standards of care from high-quality and temporally-dense data (Panda et al., 2020).

In this paper, we introduce a novel approach in reinforcement learning for estimating recovery strategies and recommendations in studies employing the use of digital phenotyping data. Reinforcement learning is a sub-field of machine learning that focuses on learning sequences of decisions that optimize long-term outcomes from experiential data (Sutton and Barto, 2018).
Within healthcare, reinforcement learning algorithms have been used to discover decision-making strategies for chronic illness treatments (Bothe et al., 2013; Peyser et al., 2014), anesthesia regulation and automation (Sinzinger and Moore, 2011), chemotherapy scheduling and dosage management (Padmanabhan, Meskin, and Haddad, 2015; Ahn and Park, 2011), and sepsis management (Raghu et al., 2017; Peng et al., 2018). As it stands, employing reinforcement learning algorithms for healthcare applications requires (1) a consideration of the process used to estimate the decision-making strategy, or policy \(\pi\), and (2) the ability to carefully examine the intended behavior of the learned policy before deployment in the real world (Gottesman et al., 2019). Within these settings, decision-making policies are commonly represented as a function \(\pi(\mathbf{s})\) of state features \(\mathbf{s}=(s_{1},\ldots,s_{d})^{T}\in\mathbb{R}^{d}\). Accordingly, policies can be estimated using policy gradient or value-based reinforcement learning algorithms (Sutton and Barto, 2018).

Figure 1: An overview of using nonparametric additive models for learning interpretable value functions. Within our setting, real-world data from subjects with select physiological disorders are collected using a smartphone-based digital phenotyping platform. Modalities collected from subject smartphones range from raw sensor data (e.g., GPS, accelerometer, gyroscope, or magnetometer) to usage logs (e.g., anonymized communication and screen-time). Relevant features \(\mathbf{s}=(s_{1},\ldots,s_{d})^{T}\) are summarized from these modalities and are used to frame a corresponding decision-making problem (or MDP; see Section 2.1) of interest. Under the select MDP, we model the value function \(Q^{\pi}(\mathbf{s},a,x)\) as a sum of nonparametric component functions \(g_{a}(x)\) and \(f_{j,a}(\mathbf{s}_{j},x)\:\forall j\). Here we visualize the change in the shape and sparsity patterns of the component function \(f_{j,a}(\cdot,z)\) as the candidate variable \(x\) changes, e.g., \(x\in\{z_{1},z_{2},z_{3},z_{4}\}\). Each additive component function can be estimated and inspected using the kernel-weighted least squares fixed point approximation detailed in Section 3.

In value-based reinforcement learning, policies are determined by selecting the action \(a\) that maximizes the corresponding action-value function \(Q^{\pi}(\mathbf{s},a)\). Under this greedy action-selection strategy, retrieving an optimal policy relies on learning an optimal action-value function, commonly represented using neural network function approximators. As such, current value-based reinforcement learning algorithms serve as black boxes that simply receive a set of data and output a near-optimal policy. Generally, these policies are not only difficult to interpret, but provide minimal indication as to which features in the data (i.e., \(\mathbf{s}_{j}\:\forall j\in[d]\)) contributed to the selected decision (Gottesman et al., 2019). Rather than relying on black box function approximators, we introduce a class of value functions that provides a flexible, nonparametric representation of the action-value function, easily interpretable for both researchers and clinicians.
By allowing for the inspection of a candidate variable \(x\) (i.e., time-varying/-invariant confounders or continuous-valued actions), we construct a generalized framework for modeling action-value functions as a sum of nonparametric, additive component functions, which takes the form

\[Q^{\pi}(\mathbf{s},a,x)=g_{a}(x)+\sum_{j=1}^{d}f_{j,a}(\mathbf{s}_{j},x)+\epsilon. \tag{1.1}\]

Specifically, we extend the input space of \(Q^{\pi}\) and allow for the examination of the marginal effect of a candidate variable \(x\), as well as its joint effect with the state features \(\mathbf{s}_{j}\) under a discretized action space. This framework allows us to explore several representations of each additive component depending on our choice of \(x\). For instance, when \(x\) takes on continuous values over the entire discretized action space of \(a\), we can directly represent \(x\) as the continuous action \(a\) and equate the marginal and joint additive component functions in Equation 1.1 to \(g(a)\) and \(f_{j}(\mathbf{s}_{j},a)\), respectively.

To estimate the component functions, we consider the classical approximate policy iteration algorithm, _Least Squares Policy Iteration_ (LSPI) (Lagoudakis and Parr, 2004), and provide a kernel-hybrid approach for estimating action-value functions without making explicit parametric assumptions regarding their additive functional form. To do so, we relax the traditional linearity assumption imposed in LSPI by leveraging advances in estimating high-dimensional, nonparametric additive regression models (Fan and Jiang, 2005; Ravikumar et al., 2007; Lafferty and Wasserman, 2008). In particular, we propose incorporating the kernel-sieve hybrid regression estimator introduced in Lu et al. (2020) to obtain a sparse additive representation of the action-value function by combining local kernel regression and basis expansion methods such as splines.

To demonstrate the applicability of our methodology, we provide a simulation study, where we examine our method's ability to estimate nonlinear additive functions and compare its performance against modern neural network-based approaches. Furthermore, we directly apply our model to an ongoing digital phenotyping study, where we learn and interpret a decision-making policy that aims to improve pain management and functional recovery in patients recovering from spine surgery by managing patient mobility.

### Related Research

Our work directly contributes to the growing literature on function approximation methods for value-based reinforcement learning. Current state-of-the-art algorithms approximate action-value functions using expressive modeling architectures such as neural networks. By combining the fitted Q-iteration procedure with modern tools such as replay buffers and target networks, these algorithms are able to resolve the pitfalls of classical methods and solve complex, high-dimensional decision-making tasks (Riedmiller, 2005; Antos et al., 2007; Van Hasselt, 2010; Mnih et al., 2013; Mnih et al., 2015). Unfortunately, the powerful flexibility of these approaches comes at the cost of the interpretability that is native to algorithms such as Least Squares Policy Iteration (LSPI). LSPI is a model-free, off-policy approximate policy iteration algorithm that models the action-value function using a parametric linear approximation and finds an approximate function that best satisfies the Bellman equation (i.e., the fixed point solution) (Lagoudakis and Parr, 2004).
While LSPI provides an unbiased estimate of the action-value function, it faces significant challenges when the model is misspecified and when the dimensionality of the feature space is high (Lagoudakis and Parr, 2004; Farahmand et al., 2016). Several modifications to LSPI have been proposed in the reinforcement learning literature. In settings where the feature space is large, several approaches exist for finding sparse solutions under a linear model (Hoffman et al., 2012; Kolter and Ng, 2009; Tziortziotis and Dimitrakakis, 2017; Geist and Scherrer, 2011). Alternatively, Xu, Hu, and Lu (2007) propose a kernel-based LSPI algorithm that operates in an infinite-dimensional Hilbert space and allows for nonlinear feature extraction by selecting appropriate kernel functions. Additionally, Howard and Nakamura (2013) propose a locally-weighted LSPI model that leverages locally-weighted regression to construct a nonlinear, global control policy. In general, these last two approaches avoid the pitfalls of model misspecification and minimize the a priori knowledge needed to model the action-value function. Currently, no approximation method in RL has been introduced that directly allows for nonparametric estimation of the additive contribution of select features and joint feature pairs.

In the supervised learning literature, several approaches exist for estimating nonparametric component functions under high-dimensional feature spaces. These approaches include generalized additive models (GAM) and sparse additive models (SpAM), to which our approach draws parallels (Ravikumar et al., 2007; Hastie, 2017). To bridge these areas of research, we reformulate the policy evaluation step of the classical LSPI algorithm and propose incorporating the kernel-sieve hybrid regression estimator introduced in Lu, Kolar, and Liu (2020). This approach provides a powerful function approximation technique for locally estimating action-value functions using a loss function that combines basis expansion and kernel methods with a hybrid \(\ell_{1}/\ell_{2}\)-group Lasso penalty.

### Organization of the Paper

The remainder of this paper is organized as follows. In Section 2.2, we introduce our generalized, nonparametric model for representing action-value functions. In Section 3, we present our estimation strategy for locally approximating the action-value function by combining basis expansion and kernel methods. In Section 4, we examine the results of a simulation study and highlight our method's performance in estimating the sparse additive components of the action-value function. In Section 5, we present a real-world cohort of patients recovering from a neurological intervention for spine disease as a motivating case study. In Section 6, we apply our method to the digital phenotyping study described in Section 5 and interpret the estimated recovery strategy. In Section 7, we discuss the limitations of our method and propose future extensions to address them.
## 2 Nonparametric Additive Value Functions

### Preliminaries and Notation

We consider a discrete-time, infinite-horizon Markov Decision Process (MDP) defined by the tuple \(\{\mathcal{S},\mathcal{A},\mathcal{P},R,\gamma\}\), where \(\mathcal{S}\) is a \(d\)-dimensional continuous state space, \(\mathcal{A}\) is a set of discrete (i.e., \(\mathcal{A}=\{1,\ldots,k\}\)) or continuous (i.e., \(\mathcal{A}=\mathbb{R}\)) actions, \(\mathcal{P}(\mathbf{s}^{\prime}|\mathbf{s},a)\) is a next-state transition probability kernel that specifies the probability of transitioning from state \(\mathbf{s}\in\mathcal{S}\) to the next state \(\mathbf{s}^{\prime}\in\mathcal{S}\) after taking action \(a\in\mathcal{A}\), \(R:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is a reward function, and \(\gamma\in[0,1]\) is a discount factor for weighting long-term rewards. Within this stochastic environment, the action selection strategy is determined by a deterministic policy, \(\pi:\mathcal{S}\to\mathcal{A}\). To assess the quality of a policy, the expected discounted sum of rewards when starting at state \(\mathbf{s}\) and following policy \(\pi\) can be computed using the value function \(V^{\pi}:\mathcal{S}\to\mathbb{R}\). The value function starting at state \(\mathbf{s}\) is defined as

\[V^{\pi}(\mathbf{s})=\mathbb{E}_{\pi}\left[\sum_{i=0}^{\infty}\gamma^{i}r_{i}\mid\mathbf{s}_{\mathrm{init}}=\mathbf{s}\right]. \tag{2.1}\]

In control problems where we are interested in improving our action selection strategy, it is useful to consider the action-value function \(Q^{\pi}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\). Given a policy \(\pi\), \(Q^{\pi}\) represents the expected discounted sum of rewards after taking action \(a\) in state \(\mathbf{s}\) and following policy \(\pi\) thereafter, i.e.,

\[Q^{\pi}(\mathbf{s},a)=\mathbb{E}_{\pi}\left[\sum_{i=0}^{\infty}\gamma^{i}r_{i}\mid\mathbf{s}_{\mathrm{init}}=\mathbf{s},a_{\mathrm{init}}=a\right]. \tag{2.2}\]

Due to the Markovian property of our MDP, the action-value function (as well as the value function) is a fixed point of the Bellman operator, \(Q^{\pi}=\mathcal{T}_{\pi}Q^{\pi}\), where the operator \(\mathcal{T}_{\pi}\) is defined as

\[(\mathcal{T}_{\pi}Q)(\mathbf{s},a)=R(\mathbf{s},a)+\gamma\int_{\mathcal{S}}Q(\mathbf{s}^{\prime},\pi(\mathbf{s}^{\prime}))d\mathcal{P}(\mathbf{s}^{\prime}|\mathbf{s},a) \tag{2.3}\]

or, equivalently in vector form, as \(\mathcal{T}_{\pi}Q=\mathcal{R}+\gamma\mathcal{P}^{\pi}Q\), where \(\mathcal{R}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}\) is a reward vector and \(\mathcal{P}^{\pi}\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|\times|\mathcal{S}||\mathcal{A}|}\) is the induced transition matrix when following policy \(\pi\) after a next-state transition according to \(\mathcal{P}(\mathbf{s}^{\prime}|\mathbf{s},a)\). For a given MDP, the optimal action-value function is defined as \(Q^{*}(\mathbf{s},a)=\sup_{\pi}Q^{\pi}(\mathbf{s},a)\) for all states and actions \((\mathbf{s},a)\in\mathcal{S}\times\mathcal{A}\). For a given action-value function \(Q\), we define a greedy policy \(\pi\) as \(\pi(\mathbf{s})=\arg\max_{a\in\mathcal{A}}Q(\mathbf{s},a)\) for all \(\mathbf{s}\in\mathcal{S}\). The greedy policy with respect to the optimal action-value function \(Q^{*}\) is then an optimal policy, denoted as \(\pi^{*}\). Hence, obtaining \(Q^{*}\) allows us to arrive at an optimal action selection strategy.
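To make the vector form \(\mathcal{T}_{\pi}Q=\mathcal{R}+\gamma\mathcal{P}^{\pi}Q\) concrete, the following minimal Python sketch (a toy finite MDP of our own construction, not part of the paper) solves for the fixed point \(Q^{\pi}=(I-\gamma\mathcal{P}^{\pi})^{-1}\mathcal{R}\) and extracts the corresponding greedy policy.

```python
import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# P[s, a, s']: next-state transition kernel of a toy MDP (rows normalized)
P = rng.random((n_s, n_a, n_s))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_s, n_a))            # reward vector R(s, a)
pi = np.array([0, 1, 0])              # a deterministic policy s -> a

# Induced transition matrix over state-action pairs:
# P_pi[(s, a), (s', a')] = P(s'|s, a) * 1{a' = pi(s')}
P_pi = np.zeros((n_s * n_a, n_s * n_a))
for s in range(n_s):
    for a in range(n_a):
        for s2 in range(n_s):
            P_pi[s * n_a + a, s2 * n_a + pi[s2]] = P[s, a, s2]

# Fixed point of the Bellman operator: Q = R + gamma * P_pi @ Q
Q = np.linalg.solve(np.eye(n_s * n_a) - gamma * P_pi, R.reshape(-1))
Q = Q.reshape(n_s, n_a)

print("Q^pi:\n", Q.round(3))
print("greedy policy:", Q.argmax(axis=1))   # pi'(s) = argmax_a Q(s, a)
```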
### Generalized Framework

For an arbitrary policy \(\pi\), we introduce a generalized framework for modeling the action-value function \(Q^{\pi}\) as a sum of nonparametric additive component functions. Our approach handles both discrete (i.e., \(\mathcal{A}=\{1,\ldots,k\}\)) and continuous (i.e., \(\mathcal{A}=\mathbb{R}\)) action spaces, while allowing for the incorporation of potentially time-varying or time-invariant variables.

Figure 2: Representation of a nonparametric additive value function \(Q^{\pi}(\mathbf{s},a,x)\) with respect to the candidate variable \(x\) as detailed in (2.4).

First, we present our generalized nonparametric framework for modeling \(Q^{\pi}\) as

\[Q^{\pi}(\mathbf{s},a,x)=g_{a}(x)+\sum_{j=1}^{d}f_{j,a}(\mathbf{s}_{j},x)+\epsilon. \tag{2.4}\]

Under this model, we expand the input space of \(Q^{\pi}\) to include the candidate variable \(x\in\mathbb{R}\), and discretize the action space such that \(a\in\{1,\ldots,k\}\) if \(\mathcal{A}\) is not already discrete. Accordingly, \(g_{a}(\cdot)\) represents the additive marginal effect of \(x\) under action \(a\), and \(f_{j,a}(\cdot,\cdot)\) represents the additive joint effect of interactions between \(x\) and state feature \(\mathbf{s}_{j}\) under action \(a\). Without making assumptions on the functional form of \(g_{a}(\cdot)\) and \(f_{j,a}(\cdot,\cdot)\), our model allows us to carefully examine additive nonlinear relationships that exist among relevant state features, actions, and the variable \(x\).

Second, our choice of \(x\) allows us to explore several unique representations of the additive components in (2.4). For example, \(x\) can represent time-varying or time-invariant confounders (e.g., age, gender, or the number of days since a surgical event) as well as continuous-valued actions \(a\in\mathbb{R}\):

\[x=\left\{\begin{array}{ll}\mathbf{s}_{0},&\text{i.e., a candidate state feature or confounder,}\\ a,&\text{i.e., a continuous action}\end{array}\right.\quad. \tag{2.5}\]

**Example 2.1**.: When \(x=\mathbf{s}_{0}\), the additive functions in (2.4) respectively equate to \(g_{a}(x)=g_{a}(\mathbf{s}_{0})\) and \(f_{j,a}(\mathbf{s}_{j},x)=f_{j,a}(\mathbf{s}_{j},\mathbf{s}_{0})\). Furthermore, we can augment the state space \(S\) using \(x\) to form \(S_{+}=\{x,s_{1},\ldots,s_{d}\}\), and succinctly represent \(Q^{\pi}(\mathbf{s},a,\mathbf{s}_{0})\) as \(Q^{\pi}(\mathbf{s}_{+},a)\), where \(\mathbf{s}_{+}\in S_{+}\) and

\[Q^{\pi}(\mathbf{s}_{+},a)=g_{a}(\mathbf{s}_{0})+\sum_{j=1}^{d}f_{j,a}(\mathbf{s}_{j},\mathbf{s}_{0})+\epsilon. \tag{2.6}\]

Thus, under the discrete action \(a\), \(g_{a}(\mathbf{s}_{0})\) models the nonlinear marginal effect of the confounder or state feature \(\mathbf{s}_{0}\), whereas \(f_{j,a}(\mathbf{s}_{j},\mathbf{s}_{0})\) models the nonlinear interaction between \(\mathbf{s}_{0}\) and state features \(\mathbf{s}_{j}\).

**Example 2.2**.: Similarly, when \(x=a\), the additive functions in (2.4) respectively equate to \(g_{a}(a)=g(a)\) and \(f_{j,a}(\mathbf{s}_{j},a)=f_{j}(\mathbf{s}_{j},a)\). Under this choice of \(x\), we avoid explicit discretization of the action space \(\mathcal{A}\) and directly treat \(a\) as a continuous action.
Thus, for a given state-action pair, \(Q^{\pi}(\mathbf{s},a,a)\) reduces to \(Q^{\pi}(\mathbf{s},a)\), where

\[Q^{\pi}(\mathbf{s},a)=g(a)+\sum_{j=1}^{d}f_{j}(\mathbf{s}_{j},a)+\epsilon, \tag{2.7}\]

and the additive marginal effect of selecting a continuous action \(a\) is modeled as \(g(a)\), while \(f_{j}(\mathbf{s}_{j},a)\) represents the additive effect of selecting action \(a\) under state feature value \(\mathbf{s}_{j}\).

## 3 Kernel Sieve Hybrid - Least Squares Policy Iteration

We introduce a general approach for estimating \(Q^{\pi}(\mathbf{s},a)\) for both discrete and continuous action spaces. This estimation strategy offers us an intuitive way to (1) locally approximate the action-value function as an additive model with independent state features spanned by a B-spline basis expansion, (2) retrieve an estimate of the nonlinear additive components, and (3) obtain a sparse representation of the action-value function by selecting relevant regions of the domain of the component functions.

### Basis Expansion

First, we model the action-value function using a centered B-spline basis expansion of the additive component functions. Let \(\{\psi_{1},\ldots,\psi_{m}\}\) be a set of normalized B-spline basis functions. For each component function, we project \(f_{j,a}\) onto the space spanned by the basis, \(\mathcal{B}_{m}=\text{Span}(\psi_{1},\ldots,\psi_{m})\). Accordingly, \(f_{j,a}(\mathbf{s}_{j},x)=\sum_{\ell=1}^{m}\varphi_{j\ell}(\mathbf{s}_{j})\boldsymbol{\beta}_{j\ell;a}(x)\), where \(\varphi_{j\ell}\) are locally centered B-spline basis functions defined as \(\varphi_{j\ell}(\mathbf{s})=\psi_{\ell}(\mathbf{s})-E[\psi_{\ell}(\mathbf{s}_{j})]\) for the \(j\)-th component function and the \(\ell\)-th basis component.

As we will discuss in Section 3.2, our estimation strategy relies on performing a locally-weighted least-squares minimization of an objective criterion with respect to a fixed value of the variable \(x\). As such, we locally express our model in (2.4) by (i) setting \(x=z\), where \(z\in\mathcal{X}\) is some arbitrary fixed value, and by (ii) using the aforementioned centered B-spline basis expansion:

\[Q^{\pi}(\mathbf{s},a,x=z)\approx\alpha_{a,z}+\sum_{j=1}^{d}\sum_{\ell=1}^{m}\varphi_{j\ell}(\mathbf{s}_{j})\beta_{j\ell;a,z}\quad. \tag{3.1}\]

Under this local model, \(\alpha_{a,z}\in\mathbb{R}\) represents the marginal effect \(g_{a}(x)\) when \(z\) is the fixed value of \(x\). Accordingly, \(\beta_{j\ell;a,z}\in\mathbb{R}\) is the coordinate corresponding to the \(\ell\)-th B-spline basis of the \(j\)-th state feature under \(z\). In Examples 2.1 and 2.2, we observe two choices for representing \(x\) that highlight the generalizability of our model structure. Under these examples, the local additive components in (3.1) can also be re-expressed as follows:

\[\alpha_{a,z}=\left\{\begin{aligned} &\alpha_{a,z}\quad\text{when }x=\mathbf{s}_{0},\\ &\alpha_{z}\quad\text{when }x=a\end{aligned}\right.\qquad\text{and}\quad\beta_{j\ell;a,z}=\left\{\begin{aligned} &\beta_{j\ell;a,z}\quad\text{when }x=\mathbf{s}_{0},\\ &\beta_{j\ell;z}\quad\text{when }x=a\end{aligned}\right.\quad. \tag{3.2}\]

Since the dynamics of the MDP are unknown, our estimation strategy relies on a batch dataset \(\mathcal{D}=\{(\mathbf{s}^{[i]},a^{[i]},r^{[i]},\mathbf{s}^{[i]\prime},x^{[i]})\}_{i=1}^{N}\) of sampled transitions from the MDP of interest, where \(\mathbf{s}^{[i]\prime}\sim P(\cdot|\mathbf{s}^{[i]},a^{[i]})\), and \(x^{[i]}\) is the associated value of the candidate variable \(x\).
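Before assembling the full design matrix, the centered features \(\varphi_{j\ell}\) in (3.1) can be made concrete with a minimal, self-contained sketch that evaluates a cubic B-spline basis via the Cox-de Boor recursion and centers each column by its empirical mean \(\bar{\psi}_{j\ell}\); the knot placement and feature range below are illustrative choices of our own.

```python
import numpy as np

def bspline_basis(x, t, degree):
    """Evaluate all B-spline basis functions psi_l at points x via the
    Cox-de Boor recursion; t is a non-decreasing knot vector and the
    result has shape (len(x), len(t) - degree - 1). Points must lie in
    [t[0], t[-1]) because the degree-0 pieces use half-open knot spans."""
    x = np.asarray(x, dtype=float)
    B = np.array([(t[i] <= x) & (x < t[i + 1]) for i in range(len(t) - 1)],
                 dtype=float).T
    for k in range(1, degree + 1):
        B_new = np.zeros((len(x), len(t) - k - 1))
        for i in range(len(t) - k - 1):
            d1, d2 = t[i + k] - t[i], t[i + k + 1] - t[i + 1]
            left = (x - t[i]) / d1 * B[:, i] if d1 > 0 else 0.0
            right = (t[i + k + 1] - x) / d2 * B[:, i + 1] if d2 > 0 else 0.0
            B_new[:, i] = left + right
        B = B_new
    return B

# Cubic basis on [0, 1] with equally spaced interior knots (our choice)
degree = 3
t = np.concatenate([[0.0] * degree, np.linspace(0, 1, 6), [1.0] * degree])

s_j = np.random.default_rng(1).uniform(0, 1, size=500)  # one state feature
psi = bspline_basis(s_j, t, degree)                     # psi_l(s_j^[i])
phi = psi - psi.mean(axis=0)       # centered features phi_{jl}, Eq. (3.1)
```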
When we consider all observations in the dataset \(\mathcal{D}\), we can equivalently re-express (3.1) in vector form as

\[Q^{\pi}_{\boldsymbol{\beta}_{+}}=\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}, \tag{3.3}\]

where \(\boldsymbol{\beta}_{+}=(\boldsymbol{\beta}_{1+}^{T},\ldots,\boldsymbol{\beta}_{|\mathcal{A}|+}^{T})^{T}\in\mathbb{R}^{(1+dm)|\mathcal{A}|}\), \(\boldsymbol{\beta}_{a+}=(\alpha_{a,z},\boldsymbol{\beta}_{1;a,z}^{T},\ldots,\boldsymbol{\beta}_{d;a,z}^{T})^{T}\in\mathbb{R}^{1+dm}\), and

\[\widetilde{\boldsymbol{\Phi}}=\left(\begin{aligned} &\phi\left(\mathbf{s}^{[1]},a^{[1]}\right)^{T}\\ &\vdots\\ &\phi\left(\mathbf{s}^{[N]},a^{[N]}\right)^{T}\end{aligned}\right)=\left(\begin{aligned} &\varphi_{+}\left(\mathbf{s}^{[1]}\right)^{T}\mathds{1}(a^{[1]}=1)\cdots\varphi_{+}\left(\mathbf{s}^{[1]}\right)^{T}\mathds{1}(a^{[1]}=k)\\ &\vdots\\ &\varphi_{+}\left(\mathbf{s}^{[N]}\right)^{T}\mathds{1}(a^{[N]}=1)\cdots\varphi_{+}\left(\mathbf{s}^{[N]}\right)^{T}\mathds{1}(a^{[N]}=k)\end{aligned}\right) \tag{3.4}\]

such that \(\widetilde{\boldsymbol{\Phi}}\in\mathbb{R}^{N\times(1+dm)|\mathcal{A}|}\), \(\varphi_{+}(\mathbf{s})=\left(1\;\;\varphi_{1}(\mathbf{s}_{1})^{T}\cdots\varphi_{d}(\mathbf{s}_{d})^{T}\right)^{T}\in\mathbb{R}^{1+dm}\), and \(\varphi_{j}(\mathbf{s}_{j})\in\mathbb{R}^{m}\) is a B-spline basis component vector. Note that \(E[\psi_{\ell}(\mathbf{s}_{j})]\) is estimated as \(\bar{\psi}_{j\ell}=N^{-1}\sum_{i=1}^{N}\psi_{\ell}(\mathbf{s}_{j}^{[i]})\) using a sample of \(N\) data points from the dataset \(\mathcal{D}\).

### Kernel-Weighted Least Squares Fixed Point Approximation

We estimate our model parameters \(\boldsymbol{\beta}_{+}\) by minimizing a kernel-weighted version of the classical projected Bellman error (PBE). In LSPI, a simple procedure for estimating a linear action-value function is to force the approximate function to be a fixed point under the projected Bellman operator (i.e., \(\Pi\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}\approx Q_{\boldsymbol{\beta}_{+}}\)). For this condition to hold, the fixed point of the Bellman operator \(\mathcal{T}_{\pi}\) must lie in the space of approximate value functions spanned by the basis functions over all possible state-action pairs, \(\mathcal{C}(\boldsymbol{\Phi})\). By construction, it is known that \(Q_{\boldsymbol{\beta}_{+}}=\boldsymbol{\Phi}\boldsymbol{\beta}_{+}\in\mathcal{C}(\boldsymbol{\Phi})\). However, since there is no guarantee that \(\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}\) (i.e., the result of the Bellman operator) is in \(\mathcal{C}(\boldsymbol{\Phi})\), it first must be projected onto \(\mathcal{C}(\boldsymbol{\Phi})\) using the projection operator \(\Pi\), such that \(\Pi\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}^{\pi}=\boldsymbol{\Phi}\mathbf{u}^{*}\), where \(\mathbf{u}^{*}\) is the solution to the following least-squares problem:

\[\mathbf{u}^{*}=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\|\boldsymbol{\Phi}\mathbf{u}-\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}\|_{2}^{2}=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\|\boldsymbol{\Phi}\mathbf{u}-\mathcal{T}_{\pi}\boldsymbol{\Phi}\boldsymbol{\beta}_{+}\|_{2}^{2}. \tag{3.5}\]
Empirically, \(\mathbf{u}^{*}\) can be estimated using a sample-based feature design matrix \(\widetilde{\boldsymbol{\Phi}}\) constructed from a dataset of \(N\) transitions \(\mathcal{D}\):

\[\mathbf{u}^{*}=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\|\widetilde{\boldsymbol{\Phi}}\mathbf{u}-\widehat{\mathcal{T}}_{\pi}\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}\|_{2}^{2} \tag{3.6}\]
\[=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\sum_{i=1}^{N}\left(\phi(\mathbf{s}^{[i]},a^{[i]})^{T}\mathbf{u}-\left[r^{[i]}+\gamma\phi(\mathbf{s}^{[i]\prime},\pi(\mathbf{s}^{[i]\prime}))^{T}\boldsymbol{\beta}_{+}\right]\right)^{2}, \tag{3.7}\]

where \(\widehat{\mathcal{T}}_{\pi}\) is the empirical Bellman operator \((\widehat{\mathcal{T}}_{\pi}Q_{\beta})(\mathbf{s},a)=r(\mathbf{s},a)+\gamma\,Q_{\beta}(\mathbf{s}^{\prime},\pi(\mathbf{s}^{\prime}))\) defined using a single transition \(\{\mathbf{s},a,r,\mathbf{s}^{\prime}\}\) from \(\mathcal{D}\).

Rather than performing the projection step according to an \(\ell_{2}\)-norm, we propose using a _kernel-weighted_ norm with weights that are centered at a fixed value \(z\) that lies within the domain of the candidate variable \(x\). Let \(K:\mathcal{X}\to\mathbb{R}\) be a symmetric kernel function with bounded support. We denote \(K_{h}(\cdot)=h^{-1}K(\cdot/h)\), where \(h>0\) is the bandwidth.

Figure 3: A step-by-step illustration of kernel-weighted least squares fixed point approximation. First, using observations gathered in the batch dataset \(\mathcal{D}\), we construct a diagonal kernel-weight matrix \(\mathbf{W}_{z}\), where each diagonal weight is a function of the distance between the observed candidate variable \(x^{[i]}\) and the fixed value \(z\). Second, let \(\mathcal{F}\) be \(\mathcal{C}(\boldsymbol{\Phi})\), i.e., the space of approximate value functions. Since applying the Bellman operator \(\mathcal{T}_{\pi}\) to an arbitrary value function \(Q_{\boldsymbol{\beta}_{+}}\) can push the resulting quantity \(\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}^{\pi}\) out of the space \(\mathcal{F}\), we perform a projection step using the constructed kernel-weight matrix \(\mathbf{W}_{z}\) (detailed in Section 3.2). This approach differs from classical least squares fixed point approximation (shown in grey), where an \(\ell_{2}\)-projection operator \(\widehat{\Pi}\) is used. Lastly, we find \(\widehat{\boldsymbol{\beta}}_{+}\) that minimizes the \(\ell_{2}\)-norm between \(Q_{\boldsymbol{\beta}_{+}}\) and \(\widehat{\Pi}_{\mathbf{W}_{z}}\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}^{\pi}\), i.e., the kernel-weighted projected Bellman error.

The solution \(\mathbf{u}_{z}^{*}\) to the kernel-weighted
projection step is estimated as follows:

\[\mathbf{u}_{z}^{*}=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\sum_{i=1}^{N}K_{h}(x^{[i]}-z)\left(\phi(\mathbf{s}^{[i]},a^{[i]})^{T}\mathbf{u}-\Big{[}r^{[i]}+\gamma\phi(\mathbf{s}^{[i]\prime},\pi(\mathbf{s}^{[i]\prime}))^{T}\boldsymbol{\beta}_{+}\Big{]}\right)^{2} \tag{3.8}\]
\[=\operatorname*{argmin}_{\mathbf{u}\in\mathbb{R}^{k}}\left(\widetilde{\boldsymbol{\Phi}}\mathbf{u}-\widehat{\mathcal{T}}_{\pi}\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}\right)^{T}\mathbf{W}_{z}\left(\widetilde{\boldsymbol{\Phi}}\mathbf{u}-\widehat{\mathcal{T}}_{\pi}\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}\right)=(\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\widetilde{\boldsymbol{\Phi}})^{-1}\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\widehat{\mathcal{T}}_{\pi}\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}, \tag{3.9}\]

where \(\mathbf{W}_{z}=\operatorname*{diag}(K_{h}(x^{[1]}-z)\cdots K_{h}(x^{[N]}-z))\in\mathbb{R}^{N\times N}\) is a diagonal kernel-weight matrix. Under this weighted norm, transitions with a candidate variable \(x^{[i]}\) close to \(z\) contribute more to the overall fit of the least squares minimization. Accordingly, the empirical kernel-weighted projection operator is \(\widehat{\Pi}_{\mathbf{W}_{z}}=\widetilde{\boldsymbol{\Phi}}(\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\widetilde{\boldsymbol{\Phi}})^{-1}\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\). Using the projection operator \(\widehat{\Pi}_{\mathbf{W}_{z}}\), we can now directly find \(\boldsymbol{\beta}_{+}\) that minimizes the kernel-weighted empirical PBE, represented as

\[\mathcal{E}_{\mathcal{D}}=\|Q_{\boldsymbol{\beta}_{+}}^{\pi}-\widehat{\Pi}_{\mathbf{W}_{z}}\mathcal{T}_{\pi}Q_{\boldsymbol{\beta}_{+}}^{\pi}\|_{2}^{2}=\|\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}-\widetilde{\boldsymbol{\Phi}}\underbrace{(\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\widetilde{\boldsymbol{\Phi}})^{-1}\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\mathcal{T}_{\pi}\widetilde{\boldsymbol{\Phi}}\boldsymbol{\beta}_{+}}_{\widetilde{g}(\boldsymbol{\beta}_{+})\in\mathbb{R}^{k}}\|_{2}^{2}. \tag{3.10}\]

Since \(\widetilde{\boldsymbol{\Phi}}\widetilde{g}(\boldsymbol{\beta}_{+})\in\mathcal{C}(\widetilde{\boldsymbol{\Phi}})\), minimizing this objective function is equivalent to solving for \(\boldsymbol{\beta}_{+}\) in \(\boldsymbol{\Phi}\boldsymbol{\beta}_{+}=\boldsymbol{\Phi}\widetilde{g}(\boldsymbol{\beta}_{+})\), which can be simplified as

\[\underbrace{\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\big{(}\widetilde{\boldsymbol{\Phi}}-\gamma\widetilde{\boldsymbol{\Phi}}^{\prime}\big{)}}_{\mathbf{A}_{z}}\boldsymbol{\beta}_{+}=\underbrace{\widetilde{\boldsymbol{\Phi}}^{T}\mathbf{W}_{z}\widetilde{R}}_{\mathbf{b}_{z}}, \tag{3.11}\]

where \(\widetilde{\boldsymbol{\Phi}}^{\prime}=\big{(}\phi(\mathbf{s}^{[1]\prime},\pi(\mathbf{s}^{[1]\prime}))^{T}\,\cdots\,\,\phi(\mathbf{s}^{[N]\prime},\pi(\mathbf{s}^{[N]\prime}))^{T}\big{)}^{T}\) and \(\widetilde{R}\) is the vector of observed rewards. Thus, the solution to minimizing the kernel-weighted empirical PBE can be obtained analytically as \(\widehat{\boldsymbol{\beta}}_{+}=\mathbf{A}_{z}^{-1}\mathbf{b}_{z}\). This procedure is summarized in Figure 3.
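A compact sketch of the closed-form solve \(\widehat{\boldsymbol{\beta}}_{+}=\mathbf{A}_{z}^{-1}\mathbf{b}_{z}\) in (3.11) is shown below. The Gaussian kernel, the synthetic data, and the small ridge term added for numerical stability are all our own illustrative choices, not details from the paper.

```python
import numpy as np

def ksh_lstdq_solve(Phi, Phi_next, r, x, z, h, gamma, ridge=1e-8):
    """Closed-form kernel-weighted fixed point solve of Eq. (3.11):
    A_z beta = b_z with A_z = Phi^T W_z (Phi - gamma Phi') and
    b_z = Phi^T W_z r. A Gaussian kernel stands in for K_h (our choice).

    Phi      : (N, p) features phi(s_i, a_i)
    Phi_next : (N, p) features phi(s_i', pi(s_i'))
    r, x     : (N,)   rewards and candidate-variable values
    """
    w = np.exp(-0.5 * ((x - z) / h) ** 2) / h      # diagonal of W_z
    A = Phi.T @ (w[:, None] * (Phi - gamma * Phi_next))
    b = Phi.T @ (w * r)
    # Small ridge term (our addition) keeps A invertible when the
    # kernel weights concentrate on few transitions
    return np.linalg.solve(A + ridge * np.eye(A.shape[0]), b)

# Toy usage with placeholder data
rng = np.random.default_rng(0)
N, p = 200, 6
Phi, Phi_next = rng.normal(size=(N, p)), rng.normal(size=(N, p))
r, x = rng.normal(size=N), rng.uniform(0, 1, size=N)
beta_z = ksh_lstdq_solve(Phi, Phi_next, r, x, z=0.5, h=0.1, gamma=0.9)
```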
### Component-wise Regularization via Group Lasso

Since we are interested in obtaining a sparse representation of the elements in \(\boldsymbol{\beta}_{+}\), we apply a penalty to an estimating equation \(\mathcal{L}_{z}(\boldsymbol{\beta}_{+})\) of (3.11). Because the components of our basis functions are grouped by features, we incorporate a group Lasso penalty that performs group-level variable selection by jointly constraining all coefficients that belong to a given feature. Consequently, the primary objective function for our estimator is

\[\mathcal{L}_{z}(\boldsymbol{\beta}_{+})+\lambda\mathcal{R}(\boldsymbol{\beta}_{+})=-\frac{1}{2}\boldsymbol{\beta}_{+}^{T}\widetilde{\mathbf{A}}_{z}\boldsymbol{\beta}_{+}-\boldsymbol{\beta}_{+}^{T}\widetilde{\mathbf{b}}_{z}+\lambda\sum_{a}^{|\mathcal{A}|}\Big{(}\sqrt{m}\cdot|\alpha_{a}|+\sum_{j\geq 2}\left\|\boldsymbol{\beta}_{j;a}\right\|_{2}\Big{)}, \tag{3.12}\]

where \(\lambda\) is a regularization parameter. Note that the group Lasso penalty \(\mathcal{R}(\boldsymbol{\beta}_{+})\) includes a \(\sqrt{m}\) factor used to appropriately scale the strength of the regularization term \(\lambda\) applied to \(|\alpha_{a}|\) relative to that applied to the coefficients of the B-spline basis functions. This ensures that the grouped coefficients get evenly penalized.

To estimate \(\boldsymbol{\beta}_{+}\) under the objective function (3.12), we use the randomized coordinate descent method for composite functions proposed in Richtarik and Takac (2014). Under this procedure, we (1) randomly select a coordinate \(j\) from \(\{1,\ldots,d\}\) under a fixed action \(a\), and (2) update the current estimate of \(\boldsymbol{\beta}_{j;a}^{(t)}\). We then repeat steps (1) and (2) until convergence to \(\widehat{\boldsymbol{\beta}}_{+}\). Each update in (2) can be written in closed form as

\[U\left(\boldsymbol{\beta}_{j;a}^{(t)}\right)=\mathcal{U}_{\lambda_{j}/\mu}\left(\boldsymbol{\beta}_{j;a}^{(t)}-\mu\nabla_{j;a}\mathcal{L}_{z}(\boldsymbol{\beta}_{+}^{(t)})\right) \tag{3.13}\]
\[=\mathcal{U}_{\lambda_{j}/\mu}\left(\boldsymbol{\beta}_{j;a}^{(t)}-\mu\,\boldsymbol{\Phi}_{j;a}^{T}\mathbf{W}_{z}\left((\boldsymbol{\Phi}-\gamma\boldsymbol{\Phi}^{\prime})\boldsymbol{\beta}_{+}^{(t)}-\widetilde{R}\right)\right), \tag{3.14}\]

where \(\mathcal{U}_{\lambda}\) is a soft-thresholding operator defined as \(\mathcal{U}_{\lambda}(\mathbf{v})=(\mathbf{v}/\|\mathbf{v}\|_{2})\cdot\max\left\{0,\|\mathbf{v}\|_{2}-\lambda\right\}\), \(\mu\) is the step size, and the \(\lambda_{j}\) are regularization parameters, with \(\lambda_{1}=\lambda\sqrt{m}\) and \(\lambda_{j}=\lambda\) for \(j\geq 2\). Details of the estimation procedure of KSH-LSTDQ are provided in Algorithm 1.

```
Input: \(z\) (Fixed value), \(\boldsymbol{\beta}_{+}^{(0)}\) (Initial weights), \(K(\cdot)\) (Kernel function), \(0\leq\gamma<1\) (Discount factor), \(\mu\) (Step size), \(\epsilon\) (Stopping criterion), \(\lambda\) (Regularization parameter), \(\pi\) (Current policy)
Data: Dataset of transitions \(\mathcal{D}=\{(\mathbf{s}^{[i]},a^{[i]},r^{[i]},\mathbf{s}^{[i]\prime},x^{[i]})\}_{i=1}^{N}\)
Initialization: Construct \(\mathbf{W}_{z}\), \(\boldsymbol{\Phi}\) and \(\boldsymbol{\Phi}^{\prime}\)
while \(||\boldsymbol{\beta}_{+}^{(t+1)}-\boldsymbol{\beta}_{+}^{(t)}||\geq\epsilon\) do
  Select \(j\in[d]\) with probability \(1/d\)
  for \(a\in\mathcal{A}\) do
    Update \(\boldsymbol{\beta}_{j,a}^{(t+1)}\leftarrow\mathcal{U}_{\lambda_{j}/\mu}\left(\boldsymbol{\beta}_{j;a}^{(t)}-\mu\,\boldsymbol{\Phi}_{j;a}^{T}\mathbf{W}_{z}\left((\boldsymbol{\Phi}-\gamma\boldsymbol{\Phi}^{\prime})\boldsymbol{\beta}_{+}^{(t)}-\widetilde{R}\right)\right)\)
  end for
end while
```
**Algorithm 1** KSH-LSTDQ (via Randomized Coordinate Descent for Group Lasso)
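The group soft-thresholding update (3.13)-(3.14) at the core of Algorithm 1 can be sketched as follows; the group index map, step size, and toy data are illustrative assumptions of ours.

```python
import numpy as np

def soft_threshold_group(v, lam):
    """U_lambda(v) = (v / ||v||_2) * max(0, ||v||_2 - lambda)."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= lam else (1.0 - lam / norm) * v

def group_update(beta, j, idx, Phi, Phi_next, w, r, gamma, mu, lam):
    """One randomized coordinate-descent step for group j per Eq. (3.14);
    idx maps each group to its coefficient positions (our bookkeeping)."""
    resid = (Phi - gamma * Phi_next) @ beta - r      # Bellman residual
    grad_j = Phi[:, idx[j]].T @ (w * resid)          # group-j gradient
    beta = beta.copy()
    beta[idx[j]] = soft_threshold_group(beta[idx[j]] - mu * grad_j, lam / mu)
    return beta

# Toy invocation: 3 groups of 4 coefficients, uniform kernel weights
rng = np.random.default_rng(0)
Phi, Phi_next = rng.normal(size=(50, 12)), rng.normal(size=(50, 12))
idx = {j: np.arange(4 * j, 4 * (j + 1)) for j in range(3)}
beta = np.zeros(12)
beta = group_update(beta, int(rng.integers(3)), idx, Phi, Phi_next,
                    np.ones(50), rng.normal(size=50), 0.9, 0.01, 0.1)
```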
This algorithm allows us to retrieve an estimate of \(\boldsymbol{\beta}_{+}\) relative to the fixed value \(z\) and, accordingly, approximate the additive functions in (2.4) as

\[\widehat{g}_{a}(z)=\widehat{\alpha}_{a,z}\quad\text{and}\quad\widehat{f}_{j,a}(\mathbf{s}_{j},z)=\sum_{k=1}^{m}\varphi_{jk}(\mathbf{s}_{j})\widehat{\beta}_{jk;a,z}\quad\forall j\geq 2. \tag{3.15}\]

To retrieve nonlinear, smooth estimates of \(g_{a}(\cdot)\) and \(f_{j,a}(\mathbf{s}_{j},\cdot)\), we compute the estimators \(\widehat{\alpha}_{a,z}\) and \(\widehat{f}_{j,a}(\mathbf{s}_{j},z)\) for each value of \(z\) contained within a set \(\mathcal{Z}=\{z_{1},\ldots,z_{M}\}\) that densely covers the domain of \(x\). This procedure amounts to running Algorithm 1 \(M\) times (i.e., once for each element in \(\mathcal{Z}\)).

### Approximate Policy Iteration

The aforementioned estimation strategy is a policy evaluation method for obtaining an approximate representation of the action-value function \(Q^{\pi}\) under a fixed policy \(\pi\). By using policy iteration, we can construct a procedure for estimating \(Q^{*}\) under an improved, or potentially optimal, policy \(\pi^{*}\) (Howard, 1960; Bertsekas, 2011). To perform policy iteration, we begin with an arbitrary policy \(\pi_{0}\), or the behavioral policy \(\pi_{\text{b}}\) used to generate \(\mathcal{D}\). At each iteration \(t\), we evaluate the current policy \(\pi_{t}\) by estimating \(Q^{\pi_{t}}\) according to (3.1) over a grid of local points \(\mathcal{Z}=\{z_{1},\ldots,z_{M}\}\). The policy improvement step follows by using the most recently approximated action-value function \(Q^{\pi_{t}}\) to generate the new greedy policy \(\pi_{t+1}\). Since \(Q^{\pi_{t}}\) is represented using a grid of \(M\) local models (each computed with respect to a fixed value of \(x\)), the action selection strategy and representation of \(\pi_{t+1}\) are closely determined by our choice of \(x\). This process, as detailed in Algorithm 2, repeats until convergence.

**Example 3.1**.: When \(x=\mathbf{s}_{0}\), we represent the greedy policy \(\pi(\mathbf{s})\) using the local model whose value of \(z\) is closest to \(\mathbf{s}_{0}\). In other words, let \(\mathcal{Z}=\{z_{1},\dots,z_{M}\}\) and \(\mathcal{B}\in\mathbb{R}^{M\times(1+(d-1)m)|\mathcal{A}|}\) be a matrix of weights, where each row \(i\) corresponds to the set of model weights estimated under the value \(z_{i}\). The greedy policy is defined as \(\pi(\mathbf{s})=\arg\max_{a}\phi\left(\mathbf{s},a\right)^{T}\mathcal{B}_{i^{*}}\), where \(i^{*}=\arg\min_{i\in\{1,\dots,M\}}|\mathbf{s}_{0}-z_{i}|\).

**Example 3.2**.: When \(x=a\), the greedy policy is represented as the fixed value \(z\) of the local model that maximizes its associated action-value function. Let \(\mathcal{Z}=\{z_{1},\dots,z_{M}\}\) and \(\mathcal{B}\in\mathbb{R}^{M\times(1+(d-1)m)}\) be a matrix of weights, where each row \(i\) corresponds to the set of model weights estimated under the value \(z_{i}\). The greedy policy is defined as \(\pi(\mathbf{s})=z_{i^{*}}\), where \(i^{*}=\arg\max_{i}\phi\left(\mathbf{s},\cdot\right)^{T}\mathcal{B}_{i}\).

## 4 Simulation Study

In this section, we perform a simulation study to examine the key properties of the KSH-LSTDQ and KSH-LSPI algorithms. Specifically, we highlight the KSH-LSTDQ algorithm's performance in estimating the marginal nonlinear additive functions \(g_{a}(x)\) and compare the performance of the KSH-LSPI algorithm against a set of neural network-based approaches.

### Estimating Marginal Components

We consider a multidimensional, continuous-state MDP with binary actions and an additive reward function. For each sampled trajectory, the elements of the initial state vector \(\mathbf{s}^{(0)}\in\mathbb{R}^{d}\) are sampled as \(\mathbf{s}^{(0)}_{i}\sim\text{Unif}(-\frac{1}{2},\frac{1}{2})\;\;\forall i\in[d]\).
At each time step \(t\), we randomly sample an action \(a^{(t)}\in\{0,1\}\) with probability \(\frac{1}{2}\). Accordingly, each next-state transition occurs as \(\mathbf{s}^{(t)}\sim\mathcal{N}(\mathbf{s}^{(t-1)}+\delta_{a},0.1)\), where \(\delta_{a}=0.1\cdot\mathbb{1}(a^{(t)}=0)-0.1\cdot\mathbb{1}(a^{(t)}=1)\). Under this MDP, we construct a reward function

\[r(\mathbf{s},a)=u_{1}(\mathbf{s}_{1},a)+u_{2}(\mathbf{s}_{2},a) \tag{4.1}\]

with reward components that rely only on the state features \(\mathbf{s}_{1}\) and \(\mathbf{s}_{2}\), where

\[u_{1}(\mathbf{s}_{1},a)=(5\mathbf{s}_{1}^{2}+5)\mathbb{1}(a=1)-(2\mathbf{s}_{1}^{3}-5)\mathbb{1}(a=0), \tag{4.2}\]
\[u_{2}(\mathbf{s}_{2},a)=(5\sin(\mathbf{s}_{2}^{2})+5)\mathbb{1}(a=1)+(4\mathbf{s}_{2}-5)\mathbb{1}(a=0), \tag{4.3}\]

and \(u_{j}(\mathbf{s}_{j},a)=0\;\forall j\geq 3\). For an arbitrary policy \(\pi\), the construction of this reward function induces a corresponding action-value function that is additive with respect to each non-zero reward component, specifically

\[Q^{\pi}(\mathbf{s},a)=\mathbb{E}_{\pi}\left[\sum_{i=0}^{\infty}\gamma^{i}r(\mathbf{s}^{(i)},a^{(i)})\mid\mathbf{s}^{(0)}=\mathbf{s},a^{(0)}=a\right]=\sum_{j=1}^{2}\mathbb{E}_{\pi}\left[\sum_{i=0}^{\infty}\gamma^{i}u_{j}(\mathbf{s}_{j}^{(i)},a^{(i)})\mid\mathbf{s}^{(0)}=\mathbf{s},a^{(0)}=a\right]=U_{1}(\mathbf{s}_{1},a)+U_{2}(\mathbf{s}_{2},a).\]

Using \(n\) trajectories sampled from this MDP (represented as a batch dataset \(\mathcal{D}\)), we evaluate the behavioral policy (i.e., \(\pi(\mathbf{s}^{[i]})=a^{[i]}\)) and retrieve the marginal component function \(g_{a}(x)\) of the following nonparametric additive model:

\[Q^{\pi}(\mathbf{s},a)=g_{a}(\mathbf{s}_{i})+\sum_{j\in[d]/i}f_{j,a}(\mathbf{s}_{j},\mathbf{s}_{i})+\epsilon, \tag{4.4}\]

where \(x=\mathbf{s}_{i}\) and \(i\in\{1,2\}\). To measure the performance of our model against a target, we utilize Monte-Carlo (MC) sampling on the MDP to retrieve a direct estimate of \(Q^{\pi_{\text{b}}}(\mathbf{s},a)\), evaluated as \(\widehat{Q}_{\text{MC}}^{\pi_{\text{b}}}(\mathbf{s},a)=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{\ell}\gamma^{j}r_{ij}\), where \(\ell\) is the length of each trajectory and \(n\) is the number of sampled trajectories. Since our action-value function is additive, we can similarly construct MC estimates for the component functions \(U_{1}(\mathbf{s}_{1},a)\) and \(U_{2}(\mathbf{s}_{2},a)\). Lastly, using a pre-specified grid of points \(\mathcal{Z}=\{z_{1},\dots,z_{M}\}\), we repeat Algorithm 1 \(M\) times to obtain smooth estimates of \(g_{a}(\mathbf{s}_{i})\) as described in Section 3.3.
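For concreteness, a minimal generator for trajectories from the simulation MDP in (4.1)-(4.3) might look as follows (our own sketch of the stated dynamics; variable names are illustrative).

```python
import numpy as np

def u1(s1, a):
    # Eq. (4.2)
    return (5 * s1**2 + 5) * (a == 1) - (2 * s1**3 - 5) * (a == 0)

def u2(s2, a):
    # Eq. (4.3)
    return (5 * np.sin(s2**2) + 5) * (a == 1) + (4 * s2 - 5) * (a == 0)

def sample_trajectory(d=5, length=10, rng=None):
    """One trajectory of the simulation MDP described in Section 4.1."""
    rng = rng if rng is not None else np.random.default_rng()
    s = rng.uniform(-0.5, 0.5, size=d)           # initial state
    traj = []
    for _ in range(length):
        a = int(rng.integers(2))                  # action w.p. 1/2 each
        r = u1(s[0], a) + u2(s[1], a)             # additive reward, Eq. (4.1)
        delta = 0.1 if a == 0 else -0.1           # drift delta_a
        s_next = rng.normal(s + delta, 0.1)       # Gaussian next state
        traj.append((s, a, r, s_next))
        s = s_next
    return traj

# Batch dataset D: 1000 trajectories of length 10, as in the study
D = [sample_trajectory(rng=np.random.default_rng(i)) for i in range(1000)]
```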
Figure 4: A comparison of estimated marginal component functions \(\widehat{g}_{a}(\mathbf{s}_{i})\) and MC estimates of \(u_{i}(\mathbf{s}_{i},a)\) as described in Section 4.1. For each action, the solid lines represent the estimate of the marginal component function, \(g_{a}(\mathbf{s}_{i})\), of \(Q^{\pi}(\mathbf{s},a)\) as modeled in Equation 4.4 under a bandwidth of \(h=0.001\) (left) and \(h=0.01\) (right), while the dashed line represents the Monte-Carlo estimate of \(U_{i}(\mathbf{s}_{i},a)\). The observed distribution of the state feature \(\mathbf{s}_{i}\) is displayed using the density in grey.

In Figure 4, we observe the nonlinear marginal component functions of the estimated nonparametric additive model represented in Equation (4.4). In this example, we set the dimensionality of the state space to \(d=5\) and the discount factor to \(\gamma=0.5\). The dataset \(\mathcal{D}\) consisted of 1000 sampled trajectories, each of length \(\ell=10\); the MC estimates were obtained by sampling trajectories of length \(\ell\) 100 times. Figure 4 compares the Monte-Carlo estimate of \(u_{i}(\mathbf{s}_{i},a)\) to the estimated component function \(\widehat{g}_{a}(\mathbf{s}_{i})\) retrieved by the KSH-LSTDQ estimator under a bandwidth of \(h=0.001\) and \(h=0.01\), respectively. As the bandwidth of the kernel function increases, the model produces a smoother component function, since larger weights are assigned to observations further from each local point \(z\). Furthermore, note that towards the boundary of the domain, the value of the component function is pulled towards zero. Since our simulation relies on state transitions sampled from a normal distribution, these regions of the domain generally have fewer observations. As a result, the group Lasso penalty shrinks these sparse regions toward 0. Lastly, while our model is generally able to retrieve the shape of the underlying function in non-sparse regions of the domain, our estimates are also slightly biased for complex functions, as observed for \(\widehat{g}_{a}(\mathbf{s}_{2})\).

### Comparison to Neural Approaches

We evaluate the performance of the KSH-LSPI algorithm against a set of widely-used neural network-based approaches, specifically: neural fitted Q-iteration (NFQ), deep Q-network (DQN), double deep Q-network (DDQN), and conservative Q-learning (CQL) (Mnih et al., 2013; Van Hasselt, 2010; Riedmiller, 2005; Kumar et al., 2020). Each model is trained using a batch dataset of experiences, gathered from a random policy interacting in an MDP with correlated state features and an additive reward function.

Figure 5: Regret analysis comparing the performance of KSH-LSPI models, where the candidate feature \(x\) is independently represented using state features \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\}\), and neural network-based approaches as described in Section 4.2. Within each sub-figure, the dimensionality of the state space and the number of episodes used to generate the batch dataset are varied.

Similar to Equation (4.1), the reward function depends on the first two state features \(\{\mathbf{s}_{1},\mathbf{s}_{2}\}\) and the selected action \(a\). Appendix A.1 provides a detailed description of the MDP and the data generation process. In each experiment, we adjust the dimensionality of the MDP's state space and the number of episodes used to generate the batch dataset. For the KSH-LSPI algorithm, we fit a separate model where the candidate feature \(x\) is represented as one of the first three state features \(\{\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3}\}\). Here, each state feature, denoted as \(\mathbf{s}_{i}\), contributes to the marginal component, \(g_{a}(\mathbf{s}_{i})\), as illustrated in Equation (4.4). We perform policy iteration in accordance with Example 3.1, setting the maximum number of allowed policy iterations to 3. Detailed specifications and architectures of both the KSH-LSPI models and the neural network-based approaches can be found in Appendix A.2. Estimated policies from each approach were evaluated within the MDP used to generate the training batch dataset. Specifically, each policy was rolled out for 10 time steps (i.e., an episode), 1000 times.
A regret analysis was performed where, at the end of each episode, the difference between the optimal reward at each time step and the reward obtained by the current policy was calculated. The average of these differences over all episodes was then computed to obtain the estimated mean regret for each experiment. Figure 5 presents results from the regret analysis, where the dimensionality of the state space and the number of episodes used to generate the batch dataset were varied. Within each experiment, we observe that the KSH-LSPI models with candidate feature \(\mathbf{s}_{2}\) or \(\mathbf{s}_{3}\) perform similarly to the neural network-based approaches when the number of episodes is 100, and worse when the number of episodes used to generate the batch dataset increases to 1000. Conversely, when \(\mathbf{s}_{1}\) (i.e., the feature that accounts for the most variation within the observed rewards) is set as the candidate feature, the KSH-LSPI model outperforms the neural network-based models, and improves further as the number of episodes increases. These results highlight a key sensitivity of the KSH-LSPI model: the appropriate selection of the candidate feature \(x\) largely influences model performance.

## 5 Motivating Case Study

Postoperative recovery is defined as the period of functional improvement that occurs from the end of surgery and hospital discharge to the instance in which normal function has been restored (Bowyer and Royse, 2016). Depending on the type of surgery administered, this period of functional recovery can vary drastically and be accompanied by mild to severe complications. For patients who received corrective surgery for spine disease, postoperative recovery is impacted by the complexity of the diagnosis and the surgical procedure received. Additional barriers to recovery for spine disease patients include stress, pain, cognitive dysfunction, and potential postoperative complications (Wainwright, Immins, and Middleton, 2016). To improve the postoperative recovery and care of spine patients, physicians have employed a multi-pronged approach that focuses on protocols that expedite functional recovery, decrease postoperative complications, and improve subjective patient experience (Elsarrag et al., 2019). As part of this effort, patient mobilization and consistent pain management are heavily suggested (Burgess and Wainwright, 2019). To advance these efforts, physicians require objective measurements of a patient's functional capacity and pain over the course of their recovery (Cote et al., 2019; Panda et al., 2020; Karas et al., 2020; Boaro, Reeder, and Siddi, 2021). With respect to spine patients, such measurements can provide a formal understanding and quantification of mobilization activities that expedite overall patient recovery and minimize the risk of complications.

We consider \(n=67\) neurosurgical spine patients with a median age of 57 years (IQR: 48-65.5) who were enrolled between June 2016 and March 2020 as part of a digital phenotyping study at Brigham and Women's Hospital. Each patient underwent a neurosurgical intervention in relation to their spine disease. For data collection, patients installed the Beiwe application on their smartphones. Beiwe is a high-throughput research platform, developed by the Onnela lab at the Harvard T.H. Chan School of Public Health, for smartphone-based digital phenotyping on iOS and Android devices.
Passive features collected on Beiwe include GPS and accelerometer data in their raw unprocessed form, Bluetooth and WiFi logs, and anonymized phone call and text message logs. Samples were collected from the GPS data stream for 1 minute every 5 minutes, and from the accelerometer data stream for 10 seconds every 10 seconds. Using the raw data sampled from the GPS and accelerometer sensors, a set of behavioral features concerning patient mobility is computed at the daily level (Liu and Onnela, 2021). A subset of these features is presented in Table 1. For active data collection, patients were electronically surveyed once daily at 5PM Eastern standard time to evaluate their current pain level. The prompt of the micro-survey was "Please rate your pain over the last 24 hours on a scale from 0 to 10, where 0 is no pain at all and 10 is the worst pain imaginable." In conjunction with the daily self-reported micro-surveys, these constructed features allow researchers to objectively identify post-operative trends in mobility and pain as they relate to overall functional recovery (Boaro, Reeder, and Siddi, 2021; Cote et al., 2019). To this end, we seek to leverage reinforcement learning to estimate and interpret mobility-based action-value functions that provide recommendations concerning questions such as "What level of mobilization is advisable after surgery?" and "How should these levels be adjusted given a patient's current condition?".

\begin{table} \begin{tabular}{||c c c||} \hline \hline Distance Traveled (km) & Radius of Gyration (km) & Average flight duration (km) \\ \hline Time Spent at Home (hours) & Maximum Diameter (km) & Fraction of the day spent stationary \\ \hline Max. Distance from Home (km) & Num. Significant Places Visited & Time Spent Walking \\ \hline Average flight length (km) & Number of Steps & Average Cadence \\ \hline \hline \end{tabular} \end{table} Table 1: Subset of GPS and accelerometer-based summary statistics of digital phenotyping. Definitions can be found on the _Forest_ GitHub repository (www.github.com/onnela-lab/forest).

Figure 6: Smoothed mobility proportions (with standard errors represented in grey) for the spine disease cohort, centered on the day of surgery. The lighter shaded area corresponds to the first 30 post-operative days. Receiving a neurosurgical intervention is followed by a period of decreased mobility, where patients tend to travel less and stay at home over a longer duration. Individual-level differences in recovery are driven by factors such as the type of surgery received, the specific diagnosis of spine disease, and patient demographics.

The overall goal of these recommendations is to manage a patient's overall pain level and promote improved recovery. Furthermore, by utilizing an interpretable representation of the estimated action-value function, we seek to identify clinical and digital phenotyping features that are important to consider for decision-making.

## 6 Application to Surgical Recovery

Using data collected from the spine disease cohort described in Section 5, we implement nonparametric additive models to estimate action-value functions associated with

1. A **behavioral policy** that aims to mimic decisions commonly taken by patients, and
2. An **improved policy** retrieved from performing approximate policy iteration on the estimated behavioral policy.

In both cases, the estimated decision-making policy aims to suggest the daily number of steps necessary to reduce long-term (\(\gamma\gg 0\)) post-operative pain response.
We explore both discrete and continuous action spaces and provide a practical interpretation of the additive functional components as presented in Equations (2.6) and (2.7), respectively.

### Data Pre-processing

We consider the recovery period of \(n=67\) neurosurgical spine disease patients with a mean post-operative follow-up of 87 days (SD = 51.21 days). Baseline clinical information on this study cohort can be found in Table 2. Digital phenotyping features based on raw GPS and accelerometer data were constructed and summarized on a daily time scale to closely monitor each patient's clinical recovery and/or progression after surgery. These features include passively sampled summary statistics that uniquely describe a patient's daily mobility and activity levels. We construct a simple MDP where each time step \(t\) corresponds to a day since surgery. The state space \(\mathcal{S}\subseteq\mathbb{R}^{d}\) is a multidimensional, continuous state vector that consists of relevant digital phenotyping features and patient-specific demographic information (i.e., age and days since surgery). In total, \(d=9\) features were used in this analysis.1 The action space \(\mathcal{A}\subseteq\mathbb{R}\) represents the number of steps taken per day. For the discrete action model, the action space, \(\mathcal{A}\in\{0,1\}\), is binarized such that 0 represents moving less than the subject-level pre-operative median number of steps taken per day and 1 represents moving above this threshold. The rewards, \(r\in\mathbb{R}\), are chosen to be the negative value of the self-reported pain score, where each score is taken from a numerical rating scale between 0 (i.e., no pain) and 10 (i.e., worst pain imaginable). Lastly, we consider a discount factor \(\gamma\) of 0.5 to examine estimated policies that aim to reduce long-term pain response.

Figure 7: Pre- and post-operative pain responses with time centered on the day of surgery (i.e., blue line) with a fitted local regression (i.e., black line) for a random selection of patients. While surgery corresponds to a sharp decline in self-reported pain, we observe a heterogeneous recovery experience among these four patients.

Under this MDP, we consider up to the first 60 days since surgery for each patient. Patients with a post-operative follow-up period of less than 5 days were excluded. Entries with missing values in either the digital phenotyping features or the daily self-reported pain scores were removed. The batch dataset \(\mathcal{D}\) with \(N=\) 1,409 daily transitions was constructed using data collected from the study cohort and represented using the MDP. All state features were normalized to [0,1] for model fitting.

### Model Fitting

To estimate the action-value function associated with the behavioral policy \(\pi_{b}\), we implement Algorithm 1, where we construct \(\mathbf{\Phi}^{\prime}\) using the observed next-state action contained within each patient-level trajectory in \(\mathcal{D}\). That is, \(\mathbf{\Phi}^{\prime}_{i}=\phi(\mathbf{s}^{\prime[i]},a^{[i+1]})\) for the \(i^{\text{th}}\) observed transition. Accordingly, the action-value function associated with an improved policy \(\pi^{*}\) is estimated by performing approximate policy iteration (as detailed in Algorithm 2) on the action-value function associated with the behavioral policy.
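As an illustration of this pre-processing and of assembling the transition tuples that feed \(\mathbf{\Phi}\) and \(\mathbf{\Phi}^{\prime}\), here is a minimal pandas sketch. The column names (`patient_id`, `day`, `steps`, `pain`, `preop_median_steps`) are hypothetical placeholders, not the study's actual schema.

```python
import pandas as pd

def build_transitions(df: pd.DataFrame, state_cols: list[str]) -> pd.DataFrame:
    """Assemble (s, a, r, s', a') tuples from daily patient records.

    Actions are binarized against each subject's pre-operative median step
    count, and rewards are the negative self-reported pain score."""
    rows = []
    for _, g in df.sort_values("day").groupby("patient_id"):
        s = g[state_cols].to_numpy()   # assumed already normalized to [0, 1]
        a = (g["steps"] > g["preop_median_steps"]).astype(int).to_numpy()
        r = -g["pain"].to_numpy()      # reward = negative pain score
        for i in range(len(g) - 1):    # successive records within a trajectory
            rows.append({"s": s[i], "a": a[i], "r": r[i],
                         "s_next": s[i + 1], "a_next": a[i + 1]})
    return pd.DataFrame(rows)
```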
For both discrete and continuous action versions of the general model (2.4), we use a Gaussian kernel \(K(u)=e^{-\frac{1}{2}u^{2}}\) and a grid of evenly-spaced points \(\mathcal{Z}\) within a [0,1] range for discretization. For the discrete action model, we estimate the marginal effect \(g_{a}(x)\) and the additive joint effects \(f_{j,a}(\mathbf{s}_{j},x)\) for \(j\geq 2\) for each candidate state feature \(x\) in a set \(\mathcal{H}\), where \(x\neq\mathbf{s}_{j}\).

\begin{table} \begin{tabular}{l l} Variable & \(n\) (\%) or Median (25\({}^{\text{th}}\)–75\({}^{\text{th}}\)) \\ \hline \hline **Demographic Data** & \\ Age & 57.0 (48.0–65.5) \\ Female gender & 34 (50.7) \\ \hline **Site of surgery** & \\ Cervical & 19 (28.4) \\ Lumbar & 27 (40.3) \\ Thoracic & 2 (3.0) \\ Multiple & 18 (26.9) \\ \hline **Data Collection** & \\ GPS days of follow-up & 61 (49–61) \\ Accelerometer days of follow-up & 61 (50.5–61) \\ Daily pain survey response rate & 59.4 (42.4–76.9) \\ \hline **Digital Phenotypes** & \\ Number of places visited & 3 (2–5) \\ Time spent at home (hours) & 18.3 (12.9–21.9) \\ Distance traveled (km) & 32.3 (10.8–62.3) \\ Maximum distance from home (km) & 10.6 (4.5–25.5) \\ Radius of gyration (km) & 1.50 (0.18–5.01) \\ Time spent not moving & 21.2 (20.2–22.2) \\ Average cadence & 1.64 (1.55–1.74) \\ Number of steps & 948.6 (356.9–2,005) \\ \hline \hline \end{tabular} \end{table} Table 2: Participant demographic information and digital phenotyping data for the spine disease cohort. Summaries are computed according to the first 60 postoperative days since surgery (including the day of surgery).

To select the hyperparameters of the KSH-LSTDQ estimator (i.e., the degree of the B-spline functions, the number of basis functions, the bandwidth, and the regularization penalty), we partitioned the dataset \(\mathcal{D}\) into training and validation sets according to an \(80\%-20\%\) patient-level split and performed a grid search. Using these partitions, we retrieved the set of hyperparameters that minimized the validation mean squared error between the estimated action-value under the behavioral policy when \(\gamma=0\) and the true immediate rewards, \[\text{MSE}(\mathcal{D}_{\text{Val}})=\frac{1}{|\mathcal{D}_{\text{Val}}|}\sum_{(\mathbf{s},a,r)\sim\mathcal{D}_{\text{Val}}}\Big{(}\widehat{Q}^{\pi_{b}}(\mathbf{s},a)-r\Big{)}^{2}.\] Accordingly, these hyperparameters were used to retrieve the KSH-LSTDQ estimators for MDPs where \(\gamma\) is set to \(0.5\). The set of hyperparameters used for each estimated model is displayed in Appendix B.1.
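The hyperparameter search just described might look like the following sketch. Here `fit_kshlstdq` and `predict_q` are hypothetical stand-ins for the estimator's fitting and evaluation routines, and the grid values are illustrative only.

```python
from itertools import product
import numpy as np

def select_hyperparams(fit_kshlstdq, predict_q, train, val, grid):
    """Grid search scored by the validation MSE between the gamma = 0
    action-value estimate and the observed immediate rewards (Section 6.2)."""
    best_params, best_mse = None, np.inf
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        model = fit_kshlstdq(train, gamma=0.0, **params)
        mse = np.mean([(predict_q(model, s, a) - r) ** 2 for (s, a, r) in val])
        if mse < best_mse:
            best_params, best_mse = params, mse
    return best_params

grid = {"spline_degree": [2, 3], "n_basis": [5, 10],
        "bandwidth": [1e-3, 1e-2], "lasso_penalty": [0.01, 0.1]}
```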
### Results and interpretations

We visualize and interpret the estimated additive component functions of the action-value functions associated with the behavioral and improved policies.

#### 6.3.1 Discrete Action Model

For the discrete action model, we fit a nonparametric action-value function for each candidate state feature in the set \(\mathcal{H}\), which we represent as \(x\). Here, \(\mathcal{H}\) consists of the following features: age, number of days since surgery, time spent at home (hours), and distance traveled (km).

**Marginal State Feature Effects.** In Figure 8, we examine the marginal effect \(\widehat{g}_{a}(x)\) of the estimated action-value function for each candidate state feature \(x\in\mathcal{H}\) under the behavioral policy \(\pi_{b}\) and the improved policy \(\pi^{*}\) constructed using policy iteration. Specifically, \(\widehat{g}_{a}(x)\) returns the estimated marginal change in _long-term_ negative pain response for a given state feature \(x\) under action \(a\). Across each sub-figure in Figure 8(a), moving above a patient's pre-operative baseline number of steps (\(a=1\)) within a given day is associated with a higher negative pain response in comparison to the converse action (\(a=0\)), regardless of the value of \(x\). This observation is in line with clinical research suggesting that movement at or above a patient's pre-operative baseline is associated with improved post-operative functional recovery (Duc et al., 2013; Ozkara et al., 2015; Cote et al., 2019). Within each selected action, the marginal effect shows a nonlinear change in long-term pain response. When \(x=\text{\emph{age}}\), Figure 8(a) suggests a marginal increase in long-term negative pain response for small and large values of age. This observation supports clinical studies suggesting the existence of age-related pain sensitivity that peaks during mid-life (Yezierski, 2012). Additionally, when \(x=\text{\emph{time spent at home}}\), our model suggests that spending more time at home is associated with a nonlinear decrease in negative pain response. Furthermore, when \(x=\text{\emph{distance traveled}}\), we observe that, regardless of the selected action, traveling less than 150 km is associated with a constant effect on negative pain response, whereas traveling beyond 150 km within a given day is associated with an increasing effect. We note that this association is possibly due to survivorship bias present in the model estimates, where a few patients report minimal pain during periods of excessive travel. Lastly, when \(x=\text{\emph{days since surgery}}\), our model examines the impact of mobilization as the number of days since surgery increases. We note that the difference in the marginal effect between the two actions is maximized for days closest to the onset of surgery, suggesting that increased mobilization during early periods of recovery may be associated with decreased pain response. This finding supports current clinical practice suggesting that early mobilization enhances surgical recovery, a cornerstone of post-operative pain management (Wainwright, Immins, and Middleton, 2016; Burgess and Wainwright, 2019).

The differences between the marginal effects associated with the behavioral and improved policies, as shown in Figure 8(b), are subtle. While the underlying trends and ordering of actions are relatively consistent, the estimated effect sizes appear to be smaller for select candidate state features under the improved policy (e.g., \(x=\) age, time spent at home, or days since surgery) compared to those of the behavioral policy.

Figure 8: A comparison of the marginal component function \(\widehat{g}_{a}(x)\) of \(Q^{\pi}(\mathbf{s},a,x)\) estimated under the behavioral policy \(\pi=\pi_{b}\) vs. the improved policy \(\pi=\pi^{*}\). Each sub-figure is associated with a separate nonparametric additive model of \(Q^{\pi}(\mathbf{s},a,x)\), where the state feature representing \(x\) is changed. For each action, the solid lines represent the estimate of the marginal component function \(\widehat{g}_{a}(x)\) over the range of observed values of \(x\), whereas the points represent the value of the associated observed rewards (i.e., negative pain score) over \(x\).

**Joint Effects between State Features.** In Figures 9 and 10, we examine the joint effect \(f_{j,a}(x,\mathbf{s}_{j})\) of the estimated action-value function between select state features \(\mathbf{s}_{j}\) (i.e., age, time spent not moving, average cadence, and maximum distance from home) and candidate state features \(x\in\mathcal{H}\). The value of each joint feature pair corresponds to a nonlinear effect on an estimated smooth surface representing the additive, long-term change in negative pain response. We specifically examine the benefit of selecting a given action over its converse by visualizing the difference between the joint effects under both actions, i.e., \(f_{j,1}-f_{j,0}\).
Differences greater than zero indicate an additive preference for action \(a=1\) over the converse \(a=0\). When examining the joint effect between \(x=\text{\emph{days since surgery}}\) and \(\mathbf{s}_{j}=\text{\emph{age}}\), we observe that regardless of the value of each corresponding feature, moving more than the pre-operative baseline is associated with an increase in negative pain response throughout the domain of the joint component function. Interestingly, this association is more pronounced among younger patients under the improved policy. This observation is consistent with clinical research suggesting a relationship between increased post-operative movement and improved rehabilitation, and its potential modification by factors such as age (Ozkara et al., 2015; Duc et al., 2013; Jaensson, Dahlberg, and Nilsson, 2019). When \(x=\text{\emph{distance traveled}}\) and \(\mathbf{s}_{j}=\text{\emph{average cadence}}\), maintaining a slower average walking cadence over longer distances seems to be associated with moving beyond the pre-operative baseline step count. This association is relatively consistent across both the behavioral and improved policies. However, under the improved policy, a positive association is noticed with faster walking cadences near the upper boundary of total distance traveled. In general, the differential relationship between \(x=\)_distance traveled_ and \(\mathbf{s}_{j}=\)_average cadence_ could be indicative of the shift from automaticity to executive control of locomotion, as seen in the rehabilitation literature (Clark, 2015). This shift may occur as distances increase or as walking becomes more challenging (e.g., due to physical exertion, elevated pain response, or injury), requiring individuals to expend more cognitive effort (i.e., executive control) to manage their gait.

Figure 9: Surface plots representing the difference between the joint component functions \(\widehat{f}_{j,1}(\mathbf{s}_{j},x)\) and \(\widehat{f}_{j,0}(\mathbf{s}_{j},x)\) (i.e., the differential benefit of selecting action \(a=1\) over \(a=0\)) of \(Q^{\pi}(\mathbf{s},a,x)\) estimated under the behavioral policy \(\pi=\pi_{b}\) vs. the improved policy \(\pi=\pi^{*}\).

Figure 10: Contour plots representing the differential benefit of selecting action \(a=1\) over \(a=0\) with respect to joint effects \(\widehat{f}_{j,a}(\mathbf{s}_{j},x)\) under \(Q^{\pi}(\mathbf{s},a,x)\) estimated under the behavioral policy \(\pi=\pi_{b}\) vs. the improved policy \(\pi=\pi^{*}\). Each sub-figure is associated with a separate nonparametric additive model of \(Q^{\pi}(\mathbf{s},a,x)\), where the state feature representing \(x\) is changed.

#### 6.3.2 Continuous Action Model

We estimate a nonparametric action-value function for a continuous action (i.e., number of steps taken) under the behavioral and improved policies.
**Marginal effects.** In Figure 11, we examine the marginal effect \(\widehat{g}(a)\) of the estimated action-value function. Specifically, \(\widehat{g}(a)\) returns the estimated change in _long-term_ negative pain score for a select value of \(a\), the number of steps taken. Similar to the discrete action model, the marginal effect of the continuous action reveals a positive association between the number of steps taken and long-term negative pain response, especially under the behavioral policy. For the behavioral policy (as shown in Figure 11(a)), we observe that the marginal effect is log-shaped and increases with the number of steps taken. On the other hand, for the improved policy (as shown in Figure 11(b)), we observe that the marginal effect is relatively constant across the observed number of steps taken.

**Joint effects between State Features and Actions.** In Figures 12 and 13, we examine the joint effects \(\widehat{f}_{j}(\mathbf{s}_{j},a)\) between select state features \(\mathbf{s}_{j}\) and the continuous action \(a\) of the estimated action-value functions. When \(a=\mathit{step\ count}\) and \(\mathbf{s}_{j}=\mathit{time\ spent\ at\ home}\), we observe that increased time spent at home beyond 15 hours is associated with an increase in negative pain response across observed values of step count under the behavioral policy. This trend changes under the improved policy, where the joint effect is maximized both when step count and time spent at home jointly increase, and when step count is minimized while time spent at home increases. Similar to the discrete action model, we observe a differential change in the joint effect associated with age. Under both the behavioral and improved policies, an increase in step count across age is associated with an increase in negative pain response.

Figure 12: Surface plots representing the joint component functions \(\widehat{f}_{j}(\mathbf{s}_{j},a)\) of \(Q^{\pi}(\mathbf{s},a)\) estimated under the behavioral policy \(\pi=\pi_{b}\) vs. the improved policy \(\pi=\pi^{*}\).

Figure 13: Contour plots representing the joint component functions \(\widehat{f}_{j}(\mathbf{s}_{j},a)\) of \(Q^{\pi}(\mathbf{s},a)\) estimated under the behavioral policy \(\pi=\pi_{b}\) vs. the improved policy \(\pi=\pi^{*}\). Each sub-figure is associated with the same nonparametric additive model, but represents a different state feature \(\mathbf{s}_{j}\) on the y-axis.

## 7 Discussion

A key strength of the proposed approach is its ability to capture the nonlinear additive contribution of each state-action feature represented in the model. Furthermore, by introducing a group Lasso penalty to our primary objective function, we perform component-wise variable selection and retrieve a parsimonious representation of the action-value function. In the simulation study, we evaluate the performance of the proposed estimator and examine its sensitivity to changes in its hyperparameters. Future work aims to delve deeper, examining the estimator's finite sample properties both theoretically and through further simulations. The application of the proposed method to the digital phenotyping spine disease dataset also provides new insights into mobilization behaviors that support post-operative pain management and reaffirms several well-studied clinical findings. In future applications to spine disease recovery, we hope to extend the model by including categorical features such as gender, race, and diagnosis, as well as additional clinical features such as medication use. However, this study is not without limitations.
Outcomes from our model require careful interpretation and should not be deemed significant without comprehensive uncertainty quantification. In future adaptations of the KSH-LSPI model, we hope to formalize our uncertainty concerning the model estimates by incorporating a form of interval estimation. In the offline reinforcement learning setting, uncertainty-based approaches have shown promise by prioritizing risk-averse policies when performing policy improvement (Sonabend-W et al., 2020; O'Donoghue et al., 2017; Ghavamzadeh et al., 2016). This naturally brings to light a limitation concerning our method's approach to policy improvement. After evaluating the current policy using KSH-LSTDQ, our policy improvement step greedily selects actions that maximize the estimated action-value function. Unfortunately, function approximation methods in offline reinforcement learning are prone to providing overly optimistic values for state-action pairs that are unobserved in the training data. Hence, safe policy improvement steps, within the actor-critic framework, that regularize the learned policy toward the behavioral policy are encouraged in offline reinforcement learning, especially in healthcare applications (Wang et al., 2020). Another potential remedy would be to initialize our algorithm using an initial policy that closely reflects behaviors that would be suggested by a clinical expert. Initialization using physician-guided policies helps prevent the algorithm from becoming overly optimistic by selecting best actions that physicians themselves may select (Gottesman et al., 2019).

The push for interpretability in machine learning models, especially within healthcare contexts, is driven by a need for transparency in decision-making processes. Compared to the powerful but often less interpretable neural network methodologies, nonparametric additive models for value functions offer a representation in which decision-making policies can be understood and scrutinized. Such interpretability is essential for potential clinical applications, given the need for clinicians to trust and validate the recommendations derived from these models. In conclusion, the KSH-LSPI model, while having areas that require further refinement, provides a promising framework that aligns with the demand for both efficacy and transparency.
2306.02594
Measuring the X-ray luminosities of DESI groups from eROSITA Final Equatorial-Depth Survey: I. X-ray luminosity -- halo mass scaling relation
We use the eROSITA Final Equatorial-Depth Survey (eFEDS) to measure the rest-frame 0.1-2.4 keV band X-ray luminosities of $\sim$ 600,000 DESI groups using two different algorithms in the overlap region of the two observations. These groups span a large redshift range of $0.0 \le z_g \le 1.0$ and group mass range of $10^{10.76}h^{-1}M_{\odot} \le M_h \le 10^{15.0}h^{-1}M_{\odot}$. (1) Using the blind detection pipeline of eFEDS, we find that 10932 X-ray emission peaks can be cross matched with our groups, $\sim 38 \%$ of which have signal-to-noise ratio $\rm{S}/\rm{N} \geq 3$ in X-ray detection. Compared to the numbers reported in previous studies, this matched sample size is a factor of $\sim 6$ larger. (2) By stacking X-ray maps around groups with similar masses and redshifts, we measure the average X-ray luminosity of groups as a function of halo mass in five redshift bins. We find, in a wide halo mass range, the X-ray luminosity, $L_{\rm X}$, is roughly linearly proportional to $M_{h}$, and is quite independent of the redshift of the groups. (3) We use a Poisson distribution to model the X-ray luminosities obtained using two different algorithms and obtain best-fit $L_{\rm X}=10^{28.46\pm0.03}M_{h}^{1.024\pm0.002}$ and $L_{\rm X}=10^{26.73 \pm 0.04}M_{h}^{1.140 \pm 0.003}$ scaling relations, respectively. The best-fit slopes are flatter than the results previously obtained, but closer to a self-similar prediction.
Yunliang Zheng, Xiaohu Yang, Min He, Shi-Yin Shen, Qingyang Li, Xuejie Li
2023-06-05T04:54:45Z
http://arxiv.org/abs/2306.02594v2
Measuring the X-ray luminosities of DESI groups from eROSITA Final Equatorial-Depth Survey: I. X-ray luminosity - halo mass scaling relation ###### Abstract We use the eROSITA Final Equatorial-Depth Survey (eFEDS) to measure the rest-frame 0.1-2.4 keV band X-ray luminosities of \(\sim\) 600,000 DESI groups using two different algorithms in the overlap region of the two observations. These groups span a large redshift range of \(0.0\leq z_{\rm g}\leq 1.0\) and group mass range of \(10^{10.76}h^{-1}M_{\odot}\leq M_{h}\leq 10^{15.0}h^{-1}M_{\odot}\). (1) Using the blind detection pipeline of eFEDS, we find that 10932 X-ray emission peaks can be cross matched with our groups, \(\sim 38\%\) of which have signal-to-noise ratio \({\rm S/N}\geq 3\) in X-ray detection. Compared to the numbers reported in previous studies, this matched sample size is a factor of \(\sim 6\) larger. (2) By stacking X-ray maps around groups with similar masses and redshifts, we measure the average X-ray luminosity of groups as a function of halo mass in five redshift bins. We find, in a wide halo mass range, the X-ray luminosity, \(L_{\rm X}\), is roughly linearly proportional to \(M_{h}\), and is quite independent of the redshift of the groups. (3) We use a Poisson distribution to model the X-ray luminosities obtained using two different algorithms and obtain best-fit \(L_{\rm X}=10^{28.46\pm 0.03}M_{h}^{1.024\pm 0.002}\) and \(L_{\rm X}=10^{26.73\pm 0.04}M_{h}^{1.140\pm 0.003}\) scaling relations, respectively. The best-fit slopes are flatter than the results previously obtained, but closer to a self-similar prediction. keywords: galaxies:groups:general - galaxies:clusters:general - X-rays:galaxies:clusters - dark matter ## 1 Introduction A galaxy group1 is a concentration of galaxies assumed to be embedded within an extended dark matter halo, providing cosmological probes of the spatial distribution and growth history of large-scale structure. The relatively high density makes galaxy groups ideal sites for studying the formation and evolution of galaxies within the framework of the hierarchical paradigm. However, from an observational point of view, the membership of group systems is not easy to determine because dark matter halos cannot be observed directly. Therefore, numerous group-finding algorithms have been developed to identify galaxy groups from either photometric or spectroscopic surveys: e.g., Yang et al. (2005) and Einasto et al. (2007) from the 2-degree Field Galaxy Redshift Survey; Weinmann et al. (2006), Yang et al. (2007, 2012), Tempel et al. (2014, 2017), Munoz-Cuartas & Muller (2012), and Rodriguez & Merchan (2020) from the Sloan Digital Sky Survey; Lu et al. (2016) and Lim et al. (2017) from the Two Micron All Sky Survey; Robotham et al. (2011) from the Galaxy and Mass Assembly Survey; Wang et al. (2020) from the zCOSMOS Survey; Yang et al. (2021, hereafter Y21) from the DESI Legacy Image Surveys (LS). These group catalogs provide group systems that have relatively reliable membership determination, which is important for studying galaxy evolution driven by the environment (e.g., Peng et al. 2010, 2012; Wetzel et al. 2012; Wang et al. 2018; Liu et al. 2019; Davies et al. 2019). Galaxy interactions such as mergers and close encounters are crucial mechanisms for the transformation of the galaxy population in the group environment (e.g., Ellison et al. 2008, 2013; Patton et al. 2016; Pearson et al. 2019; Feng et al. 2019, 2020).
Footnote 1: In this paper, we refer to a system of galaxies as a group regardless of its mass and richness (i.e., rich clusters or groups with a single galaxy member).

Besides the galaxy members, another known baryonic component retained within group systems is the intragroup medium (IGM), which is a diffuse gas hot enough to emit X-rays mainly through bremsstrahlung. This hot IGM interacts with the gas within the infalling galaxies, leading to the removal of the cold gas that fuels star formation activity. The IGM can also alter the properties of galaxies that are already in the groups. The density, temperature, and entropy profiles of the IGM might decode the entire thermal history of that group. Albeit with these merits, the X-ray detection of groups typically has a low efficiency. Unlike a typical massive galaxy group, whose X-ray emission can extend up to several Mpc, less massive groups often show lower and flatter X-ray surface brightness (e.g., Mulchaey, 2000; Santos et al., 2008; Rasia et al., 2013; Lovisari et al., 2017; Yuan & Han, 2020). In order to pursue the gas properties in lower mass groups, large-area and deep X-ray observations are always in great demand. Apart from this, an alternative way to enhance the detection limit is to use prior information about the position and size of each group that can be obtained from, e.g., optical observations. Since the first X-ray all-sky survey performed with the ROSAT telescope, the X-ray properties of optically selected groups have been extensively studied (e.g., Donahue et al., 2001; Mulchaey et al., 2003; Brough et al., 2006; Dai et al., 2007; Popesso et al., 2007; Shen et al., 2008; Wang et al., 2014; Zheng et al., 2022). Although a number of subsequent X-ray surveys reaching fluxes about three dex fainter than RASS have been achieved, the sky coverage of most of them is smaller than \(\sim 100\) deg\({}^{2}\), and only a small number of group systems have been analysed (Rasmussen et al., 2006; Andreon & Moretti, 2011; Hicks et al., 2013; Pearson et al., 2017). Based on these X-ray observations, a number of X-ray luminosity vs. halo mass relations were obtained. However, these relations have not yet converged, especially at the low mass end, due to insufficient observations (Lovisari et al., 2021, and references therein). Recently, eROSITA offers the next major step forward for studying the \(0.2-10\) keV X-ray properties of group systems that can be identified from galaxy surveys with large sky coverage. eROSITA will scan the entire sky eight times, using an array of seven aligned telescope modules (TMs). Before the complete all-sky survey, the eROSITA Final Equatorial-Depth Survey (eFEDS) was designed to test the capability of eROSITA. This field overlaps with a variety of deep optical/NIR surveys such as the HSC Wide Area Survey (Aihara et al., 2018), KiDS-VIKING (Kuijken et al., 2019), the DESI Legacy Imaging Surveys (Dey et al., 2019), and so on. In addition, eFEDS also overlaps the region of the XMM-ATLAS survey (Ranalli et al., 2015), which provides a useful dataset for comparison and for testing the reliability of the results obtained by eFEDS. This data set, combined with the group catalog recently constructed by Y21 from the DESI LS observations within the redshift range \(0.0<z<1.0\), will thus provide us a unique opportunity to measure the X-ray luminosity around galaxy groups that span both large redshift and halo mass ranges. It will enable us to better constrain the X-ray luminosity vs. halo mass scaling relation in a much larger redshift and halo mass range.
This paper is organized as follows. In Section 2, we describe the data used in this work. In Section 3, we perform the X-ray luminosity measurements for the DESI groups and test the reliability of our measurements by comparing to existing X-ray group catalogs. We investigate the scaling relation between the X-ray luminosity and group mass in Section 4. Finally, we draw our conclusions in Section 5. Throughout this paper, we assume a flat \(\Lambda\) cold dark matter cosmology with parameters \(\Omega_{\rm m}=0.315\) and \(H_{0}=100h\) km s\({}^{-1}\) Mpc\({}^{-1}\) with \(h=0.7\). If not specified otherwise, the X-ray luminosities \(L_{\rm X}\) and fluxes \(f_{\rm X}\) are given in the \(0.1-2.4\) keV band.

## 2 The DESI group catalog

The group catalog used in this work is taken from Y21, which extended the halo-based group finder developed by Yang et al. (2005) and applied it to the DESI Legacy Imaging Surveys. Every galaxy within the photometric redshift range \(0.0<z<1.0\) and with \(z\)-band magnitude brighter than \(m_{z}=21\) mag was assigned to a unique group. The area within \(|b|\leq 25^{\circ}\) has been removed from this catalog to avoid regions of higher stellar density. The redshift of each galaxy is taken from the random-forest-algorithm-based photometric redshift estimation of the _Photometric Redshifts for the Legacy Surveys_ (PRLS, Zhou et al., 2021), with a typical redshift error of \(\sigma_{z}/(1+z)\sim 0.02\). To ensure that the redshift information is as accurate as possible, a small fraction of the redshifts have been replaced by the spectroscopic redshifts available to date (see more details in Yang et al., 2021). This group catalog has been further updated based on the galaxy catalog of DR9, containing \(\sim 100\) million groups with \(\sim 120\) million galaxy members having five-band photometry (\(g\), \(r\), \(z\), \(W1\), \(W2\)), with a sky coverage of \(\sim 18200\) deg\({}^{2}\). The sky coverage of eFEDS is \(\sim 140\) deg\({}^{2}\), only \(\sim 100\) deg\({}^{2}\) of which overlaps with the footprint of the DESI galaxies, as shown in Figure 1. In total, there are \(\sim 600,000\) DESI groups that overlap with eFEDS and can be used to perform the X-ray luminosity measurements. In Y21, the group dark matter halos are defined as having an overdensity of 180 times the background density of the universe. The halo mass (\(M_{h}\)) of each group has been estimated based on abundance matching between the total group luminosity and halo mass assuming a Planck18 cosmology (Planck Collaboration et al., 2020). The \(M_{h}\) has an uncertainty of \(\sim 0.2\) dex at the high mass end (\(M_{h}\gtrsim 10^{14}h^{-1}M_{\odot}\)), increasing to \(\sim 0.4\) dex at \(M_{h}\sim 10^{12.3}h^{-1}M_{\odot}\) and then decreasing to \(\sim 0.3\) dex at \(M_{h}\sim 10^{11}h^{-1}M_{\odot}\). The angular virial radius, \(\theta_{180}\), is calculated using \[\theta_{180}=\left(\frac{M_{h}}{\frac{4\pi}{3}\cdot 180\Omega_{\rm m}\cdot \frac{3H_{0}^{2}}{8\pi G}}\right)^{1/3}\cdot D_{\rm c}^{-1}, \tag{1}\] where \(D_{\rm c}\) is the comoving distance of that group.
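As a quick numerical check of Equation (1), the following sketch evaluates \(\theta_{180}\) in the paper's units. The critical-density constant is the standard \(3H_{0}^{2}/(8\pi G)=2.775\times 10^{11}\,h^{2}M_{\odot}\,{\rm Mpc}^{-3}\), and the example values are illustrative only.

```python
import numpy as np

OMEGA_M = 0.315
RHO_CRIT = 2.775e11  # 3 H_0^2 / (8 pi G) in (h^-1 Msun) per (h^-1 Mpc)^3

def theta_180(m_h, d_c):
    """Angular virial radius in radians (Eq. 1).

    m_h : halo mass in h^-1 Msun
    d_c : comoving distance in h^-1 Mpc"""
    r_180 = (m_h / (4.0 * np.pi / 3.0 * 180.0 * OMEGA_M * RHO_CRIT)) ** (1.0 / 3.0)
    return r_180 / d_c  # small-angle approximation

# e.g. a 1e14 h^-1 Msun group at D_c = 1000 h^-1 Mpc subtends ~4 arcmin:
print(np.degrees(theta_180(1e14, 1000.0)) * 60.0)
```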
## 3 X-ray detection

We use the public eROSITA data from the eFEDS field. The eFEDS field is divided into four sections, each of which has a separate event list. In this section, we reduce the data with the eROSITA Science Analysis Software System (eSASS). Following Brunner et al. (2021), we apply the astrometric corrections to the observation attitude and then recalculate the event coordinates using the eSASS tasks evatt and radec2xy. Next, we convert the event list into an image with the evtool command and generate the corresponding exposure map with the expmap command. In this work, the imaging analysis is performed in the soft X-ray band (\(0.2-2.3\) keV) with an average exposure time of \(\sim 1.2\) ks after correcting for telescope vignetting across most of the field2.

Footnote 2: As pointed out by Brunner et al. (2021), some events could not be used due to an unrecognized malfunction of the camera electronics, resulting in a reduced exposure depth in the affected areas (see figure 1 in Brunner et al., 2021). Such events do not exist in the calibrated event files.

Because the group positions and halo radii have already been determined, we use this information when analysing their X-ray properties. The algorithm we use to measure the X-ray luminosity of each DESI group is similar to those of Shen et al. (2008, hereafter S08), Wang et al. (2014, hereafter W14), and Zheng et al. (2022), but with a set of improvements. In Section 3.1, we determine the X-ray center for each DESI group. In Section 3.2, we perform the stacks for groups without blind-detected centers in different \(M_{h}\) and \(z_{\rm g}\) bins. In Sections 3.3 and 3.4, we obtain the source count rate and X-ray luminosity for each DESI group using different algorithms. In Section 3.5, we compare our results with previous studies.

Figure 1: The distribution of the DESI groups overlaid on the footprint of eFEDS (enclosed by the red lines). The dots represent the DESI groups with blind-detected X-ray centers, color-coded by their \(z_{\rm g}\). The radius of each dot corresponds to the \(M_{h}\) of the galaxy group. The contours show the galactic hydrogen column density, \(N_{\rm H}\), along the line of sight to each point given by the HEALPIX resampling of the Leiden/Argentine/Bonn Survey of Galactic HI (Kalberla et al., 2005).

Figure 2: An example eFEDS image (left panel) for a blind-detected source (blue solid circle) overplotted with the probable DESI groups that might host it. The red solid circles show the regions within a distance of \(R_{180}\) from the BGG of each candidate group. In this example, we regard the most massive one (group #2906) as hosting that X-ray emission. The neighboring X-ray emission (blue dashed circles) is also considered as part of the extended X-ray emission from the same group, while the others (grey dashed circles) are regarded as contaminants. The red dashed circle represents the region within a distance of \(R_{180}\) but re-centered on that X-ray emission. The right panel shows the corresponding DESI image. In both panels, the BGG and satellites of group #2906 are marked by magenta filled and green open circles, respectively.

### Determine the X-ray center for each DESI group

Based on the eFEDS maps, one can detect the possible X-ray peaks that are emitted from various kinds of X-ray sources. Brunner et al. (2021) present a primary catalog of 27910 X-ray sources detected in the \(0.2-2.3\) keV band with detection likelihood \(\mathcal{L}_{\rm det}\geq 6\) and a supplementary catalog of 4774 X-ray sources detected in the same band but with detection likelihood of \(5\leq\mathcal{L}_{\rm det}<6\).
Almost all of the blind-detected targets are point-like sources with extent likelihood \(\mathcal{L}_{\rm ext}=0\), while only 542 sources with extent likelihood \(\mathcal{L}_{\rm ext}\geq 6\) are treated as X-ray emission from massive galaxy groups (Liu et al., 2022). However, Bulbul et al. (2021) pointed out that high redshift galaxy clusters or nearby groups hosting bright AGNs might be misclassified as point sources by the pipeline due to the sizeable point-spread function of eROSITA, and selected from the primary catalog a sample of 346 X-ray sources with extent likelihood \(\mathcal{L}_{\rm ext}=0\) that are indeed galaxy groups with masses of \(10^{13}-4.5\times 10^{14}M_{\odot}\) in disguise. Both studies apply a multi-component matched filter (MCMF) cluster confirmation tool (Klein et al., 2018, 2019) to determine the redshifts of 888 X-ray clusters in total. This implies that more faint X-ray point sources might be emitted from smaller or more distant galaxy groups. In order to remove the signals from contaminants, we need to mask out all of the background or foreground sources when calculating the count rates for each group. Salvato et al. (2021) have presented the identification of the counterparts to the \(\mathcal{L}_{\rm ext}=0\) sources in the primary catalog and classify them into 'secure galactic', 'likely galactic', 'secure extragalactic', and 'likely extragalactic' sources. Since most of the point-like sources belong to the last two cases, we regard all of the targets in the supplementary catalog as extragalactic sources. We then cross-match the extragalactic sources to the SDSS DR16 quasar catalog (Lyke et al., 2020) within a tolerance of 10 arcsec and identify \(\sim 2300\) background quasars. Besides the 888 X-ray groups identified by Liu et al. (2022) and Bulbul et al. (2021), the remaining galactic sources and quasars are contaminants that are not associated with any group systems. For the remaining extragalactic X-ray sources, we match each of them to the DESI groups within a maximum separation of \(0.3R_{180}\) from the BGG of each DESI group and with \(|z-z_{\rm MCMF}|\leq 0.05\) (if it has a redshift assigned by Liu et al. (2022) or Bulbul et al. (2021)). Owing to the fact that most of the extragalactic X-ray sources have numerous DESI groups matched, we regard the most massive one as the host of that X-ray emission for simplicity. If no group is matched, the X-ray source might be emitted from targets with \(z\gtrsim 1\). For the groups with numerous X-ray sources matched, we regard the one closest to the BGG as the X-ray center of that group and the matched sources within \(0.3R_{180}\) from the X-ray center as parts of the extended X-ray emission, while the others beyond \(0.3R_{180}\) are re-matched to the second most massive group satisfying the aforementioned criteria. This iterative process goes on until there is no further change in the group matching. In the left panel of Figure 2, we show an example image of the DESI groups matched to an X-ray source (blue dashed circle); there are two groups that might host that X-ray emitter. According to our criteria, this emitter is more likely to be the X-ray center of the more massive candidate. Finally, 10932 DESI groups host at least one blind-detected source. For most of the other DESI groups, we assign the position of their BGGs as their X-ray centers. In the next section, we will check whether the position of the BGG is close to a peak in X-ray emission.
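A simplified, single-pass sketch of this matching logic is given below; the paper's actual procedure additionally re-matches leftover sources to the next most massive candidate until convergence, and the record field names here are hypothetical.

```python
import numpy as np

def match_xray_to_groups(sources, groups, frac=0.3, dz=0.05):
    """Assign each extragalactic X-ray source to the most massive DESI group
    whose BGG lies within 0.3 R_180 (and within |z - z_MCMF| <= 0.05 when an
    MCMF redshift exists). `sources` and `groups` are lists of dicts."""
    def sep_deg(ra1, dec1, ra2, dec2):
        # Flat-sky angular separation in degrees, adequate at these scales.
        dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
        return float(np.hypot(dra, dec1 - dec2))

    assigned = {}
    for src in sources:
        candidates = [
            g for g in groups
            if sep_deg(src["ra"], src["dec"], g["ra_bgg"], g["dec_bgg"])
            <= frac * g["theta_180_deg"]
            and (src.get("z_mcmf") is None or abs(g["z"] - src["z_mcmf"]) <= dz)
        ]
        if candidates:
            assigned[src["id"]] = max(candidates, key=lambda g: g["m_h"])["id"]
    return assigned
```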
### X-ray Stacks

Before individual measurements, we first perform stacks for the DESI groups without resolved X-ray emission. As discussed in the last section, the net photon count is very low for most of the DESI groups on the eFEDS map. To show the reliability of the X-ray center determination, we produce stacked images for the groups by rescaling the data for each group to a common size. We do not weight the photons by the square of the ratio between the group luminosity distance and an arbitrary fixed value, because the redshift bin is relatively small for each subsample. We bin the X-ray images with a pixel size of \(4''\) and mask out all of the contaminants. In Figure 3, we show the stacked images of the DESI groups without blind-detected X-ray centers at different \(M_{h}\) and \(z_{g}\). Note that we only show the data bins with at least 500 groups. There is no doubt that the signals are clearer than in individual measurements. We see an X-ray excess, although not very significant, around the X-ray center for most of the stacks, implying that the BGG can well represent the X-ray peak of a group system. The central excess appears clearer with increasing \(M_{h}\) in a given redshift bin, because the X-ray emission is much more evident in massive systems, while such excess appears fuzzier for distant groups due to the flux limit and resolution. For each stack, we compute the background using annuli at \(1.5R_{180}<R<2.0R_{180}\) and derive the corresponding surface brightness profile. The surface brightness profile of the X-ray emission of a group system can be well described by an empirical \(\beta\)-profile (Cavaliere & Fusco-Femiano, 1976): \[\mu(R/R_{180})=\mu_{0}\left[1+\left(\frac{R}{R_{c}}\right)^{2}\right]^{-3\beta+0.5}, \tag{2}\] where \(\mu_{0}\) is the central surface brightness and \(R_{c}=0.18R_{180}\) is the core radius. The parameters (\(\mu_{0}\), \(\beta\)) for each stack are not fitted using the above form directly, because the observed data of the innermost bins would almost completely determine the fitting results. Instead, we obtain a better determination of these parameters using the cumulative form of the \(\beta\)-model. The cumulative source count rate as a function of radius is computed by integrating the net source counts in concentric rings. In Figure 4, we show the cumulative source count rate and the best-fitting \(\beta\)-profile for the DESI groups without blind-detected X-ray centers at different \(M_{h}\) and \(z_{g}\). As can be seen, the surface brightness distribution can be well characterized by the \(\beta\)-profile except for some of the least massive bins. We also plot the results with a fixed value of \(\beta=2/3\), which has been extensively adopted (e.g., Arnaud & Evrard, 1999; Reiprich & Bohringer, 2002; Ettori et al., 2004; Maughan et al., 2006; Hicks et al., 2008), for reference. In most of the stacks, the best-fit profiles are slightly less concentrated (\(0.4\lesssim\beta<2/3\)) than the reference, partly due to the off-center effect, i.e., the X-ray peak is not necessarily coincident with the position of the BGG. In this work, we focus on the X-ray flux of each group; although the slight off-center effect might lower the value of \(\beta\), the overall count rate will not be affected much. Moreover, the fluctuations in the background estimate might lower the signal-to-noise (S/N) of the cumulative count rate in the outermost bins and enlarge the error of \(\beta\). For security, we only calculate the count rates enclosed within \(R_{\rm X}=0.5R_{180}\) and make a \(\beta\)-profile extension correction to recover the X-ray luminosity missed in the range \(R_{\rm X}\leq R\leq R_{180}\) (see Bohringer et al., 2000; Shen et al., 2008; Wang et al., 2014) for each individual group.
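The extension correction just described can be written down directly. The sketch below integrates the \(\beta\)-model of Equation (2) numerically; for \(\beta=2/3\) and \(R_{c}=0.18R_{180}\) it gives \(f_{\beta}\approx 1.24\).

```python
from scipy.integrate import quad

def beta_extension_factor(beta=2.0 / 3.0, r_x=0.5, r_c=0.18):
    """f_beta: ratio of the beta-model flux within R_180 to that within the
    aperture R_X; all radii are in units of R_180 (Eq. 2)."""
    surface = lambda r: r * (1.0 + (r / r_c) ** 2) ** (-3.0 * beta + 0.5)
    total, _ = quad(surface, 0.0, 1.0)   # flux within R_180
    inner, _ = quad(surface, 0.0, r_x)   # flux within R_X = 0.5 R_180
    return total / inner

print(beta_extension_factor())  # ~1.24
```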
Figure 3: Stacked eFEDS images of DESI groups without resolved X-ray centers in different \(M_{h}\) and \(z_{g}\) bins. The dashed circles represent the regions within a radius of \(R_{180}\). Only the data bins with at least 500 groups are shown here.

Figure 4: Stacked cumulative surface brightness profiles as a function of \(R/R_{180}\) of DESI groups without resolved X-ray centers in different \(M_{h}\) and \(z_{g}\) bins (red points with error bars). The blue solid lines are the best-fit cumulative \(\beta\)-model for each stack, while the green dashed lines represent the same fitting but with a fixed \(\beta=2/3\) for reference. Only the data bins with at least 500 groups are shown here.

The extension correction factor is not very sensitive to the value of \(\beta\) when \(\beta\gtrsim 0.4\) (\(\lesssim 0.3\) dex), and we adopt a fixed value of \(\beta=2/3\).

### Source Count Rate

#### 3.3.1 Mean Background Subtraction Algorithm

When calculating the count rate for each group, we locate the eFEDS field centered on the X-ray center and mask out all the contaminants that are not part of that group. Next, we set out to determine the X-ray background for each source. In S08 and W14, the X-ray background for each group is determined from an annulus with inner radius \(R_{180}\) and a width of a few arcmin. Because the exposure time of the eFEDS field is relatively low and the fluctuation of the background estimate for each group is quite large, we instead determine the average count rate within the annuli at \(1.5R_{180}<R<2.0R_{180}\) over all galaxy groups, rather than using the neighboring background subtraction algorithm. The average background count rate density is \(\rho_{\rm bkg}^{\rm mean}\simeq 2.47\times 10^{-5}\) cts/s/pixel\({}^{2}\). Besides the instrumental background, a fair proportion of the background photons might be emitted from other extragalactic sources, and the galactic column density of neutral hydrogen (\(n_{\rm H}\)) might increase the fluctuations of the mean background level. Indeed, the \(n_{\rm H}\) values for the groups used in this work vary from \(\log\left(n_{\rm H}/{\rm cm}^{-2}\right)\simeq 20.26\) to \(20.67\), as given by the HEALPIX resampling of the Leiden/Argentine/Bonn Survey of Galactic HI (Kalberla et al., 2005). In Appendix A, we show the energy conversion factor (ECF) that converts the soft X-ray band flux to the \(0.2-2.3\) keV band count rates based on the power-law model but with \(\log\left(n_{\rm H}/{\rm cm}^{-2}\right)\) ranging from \(20.2\) to \(20.7\); the differences are relatively small (\(\lesssim 0.1\) dex). Thus, we ignore this fluctuation in the estimate of \(\rho_{\rm bkg}^{\rm mean}\). By subtracting the background counts scaled to the aperture radius of \(R_{\rm X}=0.5R_{180}\), one can obtain the source count rates for each individual group.

#### 3.3.2 Patrol Background Subtraction Algorithm

In addition, we perform an alternative method to calculate the source count rates for each DESI group. First, we mask out all of the pixels that lie in at least one of the following regions:
1. The regions that are not overlaid on the DESI footprint (\(|b|\leq 25^{\circ}\)).
2. The masked regions due to bright stars, globular clusters, or bad pixels in the DESI footprint.
3. The regions enclosing the blind-detected sources.
4. The pixels that lie within the aperture radius of at least one DESI group.

If we set the aperture radius of each group to \(R_{180}\), the patrol area makes up less than \(\sim 0.5\) deg\({}^{2}\) of the total surveyed area. As discussed in the last section, the average count rates are concentrated within \(R_{\rm X}=0.5R_{180}\). Therefore, we vary the aperture radius from 0.5 to 1.0 \(R_{180}\) and derive the background count rate density \(\rho_{\rm bkg}\) and the corresponding patrol area \(A_{\rm bkg}\), as shown in Figure 5. It can be seen from Figure 5 that the patrol background level is lower than the mean background level due to the contribution from these galaxy groups. The patrol background level shows little dependence on the selection of the aperture radius, and the error is also very small when we adopt the value of \(R/R_{180}=1.0\). Therefore, we make use of the patrol background count rate density, \(\rho_{\rm bkg}^{\rm ptrl}\simeq 2.31\times 10^{-5}\) cts/s/pixel\({}^{2}\), at \(R/R_{180}=1.0\).

Figure 5: The red solid line with shaded region represents the background count rate density with errors for the patrol area based on different values of the aperture radius (from 0.5 to 1.0 \(R_{180}\); see details in Section 3.3), and the corresponding patrol area is shown as the blue solid line. For reference, we also plot the mean background count rate density as the black dashed line.

The patrol background count rates are mainly contributed by various sources such as the instrumental background, the local hot bubble, X-ray binaries, and very distant (\(z_{g}\geq 1\)) groups and AGNs that are not resolved in the eFEDS map. After removing the signals from these sources, the remaining photons are in principle emitted only from the galaxy groups in the catalog used in this work. However, the X-ray estimate might be impacted by the projection effect along the line of sight, causing the X-ray flux to be overestimated (Wang et al., 2014). One therefore needs to disentangle the count rates within \(R_{\rm X}\) for each individual group. Here we use a Monte Carlo mock to quantify the group X-ray luminosity overestimation due to the projection effect. Starting from all the groups in our sample, we first assume an average \(L_{\rm X}-M_{h}\) relation to assign X-ray luminosities, \(L_{\rm X,ass}\), to individual groups3.

Footnote 3: As shown in Section 4, the redshift dependency is weak. We thus do not consider any redshift dependency in the \(L_{\rm X}-M_{h}\) relation.

The initial guess is adopted from the fit to the results obtained using the mean background subtraction algorithm. Then we convert the \(L_{\rm X,ass}\) to photon counts with an exposure time of 5 million seconds, which is a factor of \(\sim 4000\) longer than the observation, to ensure that the signals for faint groups are sufficiently high. For each group, the mock photons are randomly generated following the \(\beta\)-model profile. After generating the mock image, we can derive the assigned (\(N_{\rm ass}\)) and projected (\(N_{\rm pro}\)) counts within a radius of \(R_{\rm X}\) for each group in the same way as we did in the observation. We calculate the ratio of the assigned \(N_{\rm ass}\) to the obtained \(N_{\rm pro}\), \[f_{\rm corr}=\frac{N_{\rm ass}}{N_{\rm pro}}, \tag{3}\] which is used as the correction factor for each of our groups. We use this factor to calculate the X-ray luminosity based on this algorithm for each group (see Section 3.4 for details) and obtain a tentative \(L_{\rm X}-M_{h}\) relation (see Section 4 for details). We then use this tentative relation to repeat the above process, until there is no further change in the average \(L_{\rm X}-M_{h}\) relation.
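Schematically, this iteration reads as follows. Here `fit_relation` and `measure_mock_counts` are hypothetical callables standing in for the scaling-relation fit of Section 4 and the mock-image photometry, and the relation parameters are assumed to form a small numeric vector (e.g., amplitude and slope); this is a sketch of the loop structure, not the paper's implementation.

```python
import numpy as np

def calibrate_projection(groups, fit_relation, measure_mock_counts,
                         max_iter=10, tol=1e-3):
    """Iterate the Monte-Carlo projection correction until the average
    L_X - M_h relation stops changing."""
    f_corr = np.ones(len(groups))          # no correction initially
    params = fit_relation(groups, f_corr)  # initial-guess relation
    for _ in range(max_iter):
        n_ass, n_pro = measure_mock_counts(groups, params)  # mock photometry
        f_corr = n_ass / n_pro                              # Eq. (3), per group
        new_params = fit_relation(groups, f_corr)
        if np.max(np.abs(new_params - params)) < tol:
            break
        params = new_params
    return params, f_corr
```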
In the final version, the Figure 5: The red solid line with shaded region represents the background count rate density with errors for the patrol area based on different value of aperture radius (from 0.5 to 1.0 \(R_{180}\), see details in section 3.3), the corresponding patrol area is shown in blue solid line. For reference, we also plot the mean background count rate density in black dashed line. typical value of \(f_{\rm corr}\) is \(\sim 0.31\). In figure 8, we show the number density for \(f_{\rm corr}\) as a function of \(M_{h}\). As can be seen, small groups are heavily affected by the projection effect. However, the lower background level (\(\rho_{\rm bkg}^{\rm ptrl}<\rho_{\rm bkg}^{\rm mean}\)) also raises the flux estimate before corrected by projection effect. Both effects raise the scatter of the the \(L_{\rm X}\) estimate. #### 3.3.3 The \(\rm S/N\) ratios for individual group For each galaxy group, their signal-to-noise, \(\rm S/N\), is calculated using \[\rm S/N=\frac{N_{\rm src}}{\sqrt{N_{\rm src}+N_{\rm bkg}}}, \tag{4}\] where \(N_{\rm bkg}=\pi R_{\rm X}^{2}\cdot\rho_{\rm bkg}t_{\rm exp}\) is the background photon counts scaled to the aperture of radius \(R_{\rm X}\), \(t_{\rm exp}\) is the mean exposure time of that source, and \(N_{\rm src}\) is the net source photon counts within the aperture of radius \(R_{\rm X}\)4. Note that the \(\rm S/N\) derived based on patrol background subtraction algorithm is slightly higher than mean background subtraction algorithm. Footnote 4: During the commission, light leak contamination was reported in TMS and TM7 (Predehl et al., 2021), which will affect the X-ray events with energies below \(\sim 0.8\) keV. However, as we have tested by including or excluding the X-ray events detected by TMS and TM7, the count rate of each group are in good agreement with each other. Thus in order to have higher \(\rm S/N\), TMS and TM7 are kept in our analysis. In Figure 6, we show the number density distribution for the \(\rm S/N\) ratios of \(\rm X\)-ray groups based on different algorithm as a function of \(M_{h}\) and \(z_{\rm g}\), respectively. For reference, we also plot the fraction of the groups above different \(\rm S/N\) thresholds as a function of \(M_{h}\) and \(z_{\rm g}\), respectively. Among all the groups, there are \(\sim 0.9\%\) (5284, within which 4195 are in the blind detection source list) have \(\rm S/N\geq 3\), \(\sim 14.3\%\) (84642) have \(\rm S/N\geq 1\), and \(\sim 47.3\%\) (278985) have \(\rm S/N\geq 0\) if we use the mean background subtraction algorithm, while \(\sim 1.0\%\) (6075, within which 4637 are in the blind detection source list) have \(\rm S/N\geq 3\), \(\sim 17.3\%\) (102032) have \(\rm S/N\geq 1\), and \(\sim 52.7\%\) (311120) have \(\rm S/N\geq 0\) if we use the patrol background subtraction algorithm. The average \(\rm S/N\) is mainly lowered by small groups because the group X-ray luminosity positively correlate to their \(M_{h}\). A little more than half of the small groups with \(M_{h}\leq 10^{13}h^{-1}M_{\odot}\) have \(N_{\rm src}<0\) because of the negative expected median value for a source with nearly zero count rates relative to the background level. ### The X-ray Luminosities After deriving the count rates for each DESI group, we convert it into soft X-ray flux by dividing the source count rates, \(C_{\rm src}\), to ECF. Assuming a spectral model, the ECF is obtained as the ratio of the count rate given by an XSpec mock spectrum to its model flux. 
### The X-ray Luminosities

After deriving the count rate for each DESI group, we convert it into a soft X-ray flux by dividing the source count rate, \(C_{\rm src}\), by the ECF. Assuming a spectral model, the ECF is obtained as the ratio of the count rate given by an XSpec mock spectrum to its model flux. The Ancillary Response File (ARF) and Response Matrix File (RMF), which are created for the mock spectrum, are generated by the eSASS tool 5. In this work, we adopt the ECF based on a power-law model with photon index \(\Gamma=2.0\). The details of our choice are provided in Appendix A. For an individual group, the X-ray luminosity can be expressed as Footnote 5: In practice, we un-correct the ARF by dividing the “SPECRESP” by the correction “CORRCOMB” when multiplying the model spectrum by the effective area (Liu et al., 2022). \[L_{\rm X}=\frac{4\pi d_{L}^{2}f_{\beta}\cdot C_{\rm src}}{g\left(n_{\rm H},z_{\rm g},T\right)}, \tag{5}\] and the source count rate can be expressed as \[C_{\rm src}=\frac{f_{\rm corr}N_{\rm src}}{t_{\rm exp}}, \tag{6}\] where \(d_{L}\) is the luminosity distance of the group, \(f_{\beta}\) is the extension correction factor, \(f_{\rm corr}\) is the flux fraction of an X-ray group in a multi-cluster detection, \(N_{\rm src}\) is the source photon count within the aperture of radius \(R_{\rm X}\), \(t_{\rm exp}\) is the exposure time, and \(g\left(n_{\rm H},z_{\rm g},T\right)\) is the ECF, which depends on the column density of neutral hydrogen (\(n_{\rm H}\)), the redshift (\(z_{\rm g}\)), and the temperature (\(T\)). In this work, we adopt the \(n_{\rm H}\) given by the HEALPix resampling of the Leiden/Argentine/Bonn Survey of Galactic HI (Kalberla et al., 2005). We note that the correction factor is set to \(f_{\rm corr}=1\) for the results obtained using the mean background subtraction algorithm. In figure 7, we compare the two sets of \(L_{\rm X}\): those obtained with the mean background subtraction algorithm are generally higher than those obtained with the patrol background subtraction algorithm at positive \(L_{\rm X}\), but the latter has more positive values than the former. Although \(\sim 50\%\) of the groups have negative source count rates, and hence negative \(L_{\rm X}\), we retain all of the samples in the subsequent analysis. In figure 8, we show the comparison between the \(L_{\rm X}\) obtained with the mean and patrol background algorithms. The difference tends to be larger at the lower \(L_{\rm X}\) end, owing to the projection correction.

Figure 7: The grey filled contour shows the results given by the mean background subtraction algorithm, while the red open contour represents the results given by the patrol background subtraction algorithm. The right side shows the distributions of the \(L_{\rm X}\) obtained with the mean (grey shaded) and patrol (red open) background subtraction algorithms, respectively. In the lower-right corner, we show the fraction of the groups with \(L_{\rm X}<0\) obtained with each algorithm.

Figure 8: The grey map shows the number density distribution for \(f_{\rm corr}\) as a function of \(M_{h}\). The red contour represents the comparison between the \(L_{\rm X}\) obtained using the mean and patrol background algorithms. The dashed line is the one-to-one correspondence between them.
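The conversion of Equations (5) and (6) can be expressed in a few lines of code. In the Python sketch below the luminosity distance, ECF, and correction factors are passed in as plain numbers, and the example values are illustrative only, not taken from the actual catalogs.

```python
import numpy as np

def source_count_rate(n_src, t_exp, f_corr=1.0):
    """Eq. (6): projection-corrected source count rate (cts/s).
    f_corr = 1 reproduces the mean-background results."""
    return f_corr * n_src / t_exp

def xray_luminosity(c_src, d_l_cm, ecf, f_beta=1.0):
    """Eq. (5): L_X = 4 pi d_L^2 * f_beta * C_src / g(n_H, z_g, T).
    d_l_cm is the luminosity distance in cm; ecf is the energy conversion
    factor g; f_beta is the beta-model extension correction."""
    return 4.0 * np.pi * d_l_cm**2 * f_beta * c_src / ecf

# Illustrative numbers only: 120 net counts in 1.2 ks with f_corr ~ 0.31.
c_src = source_count_rate(n_src=120.0, t_exp=1200.0, f_corr=0.31)
print(xray_luminosity(c_src, d_l_cm=3.1e27, ecf=1.0e12))
```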
### Comparison with existing X-ray Clusters

Having obtained the X-ray luminosity for all of our groups, we proceed to compare our X-ray measurements with the results available in previous studies. The datasets with which we perform the cross-identification are as follows: 1. eFEDS X-Ray Catalog (Brunner et al., 2021): A catalog of blind-detected sources based on eFEDS. This catalog contains \(\sim 33000\) blind-detected sources, including 542 extended sources that are regarded as groups (Liu et al., 2022) and 346 point sources suspected to be groups in disguise (Bulbul et al., 2021). The count rate of each source has been PSF-corrected. We compare the results for the 10932 DESI groups hosting resolved X-ray sources with the counterparts in this catalog only. For a fair comparison, the corresponding rest-frame \(0.1-2.4\) keV band X-ray luminosities are converted using the ECFs given by this work. 2. XMM-ATLAS Survey (Ranalli et al., 2015): XMM-Newton observations in the H-ATLAS SDP area, covering \(\sim 7\) deg\({}^{2}\) with a flux limit of \(\sim 2\times 10^{-15}\) erg/s/cm\({}^{2}\) in the \(0.5-2.0\) keV band and overlapping with the eFEDS footprint. This catalog gives the observed \(0.5-2.0\) keV band flux of each source, assuming a power-law spectrum with a photon index of \(\Gamma=1.7\) and Galactic absorption of \(n_{\rm H}=2.3\times 10^{20}\) cm\({}^{-2}\). We cross-match the DESI group catalog with the XMM-ATLAS sample within a tolerance of 20 arcsec (see the sketch after this list); 961 DESI groups have counterparts in their catalog, 409 of which have fluxes larger than \(\sim 2\times 10^{-15}\) erg/s/cm\({}^{2}\) in the eFEDS observations. Note that this catalog does not give the redshift of each XMM-ATLAS source. In order to make a fair comparison, the redshifts of the matched XMM-ATLAS sources are assigned from their counterparts in our sample, and we convert the flux to the rest-frame \(0.1-2.4\) keV band flux corrected for Galactic absorption.
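For reference, the 20-arcsec positional cross-match described in item 2 can be reproduced with astropy, as sketched below; the coordinate arrays are placeholders standing in for the actual DESI group and XMM-ATLAS catalogs.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Placeholder coordinates for the DESI group and XMM-ATLAS source lists.
ra_grp, dec_grp = np.array([135.10, 136.20]), np.array([1.00, 1.50])
ra_xmm, dec_xmm = np.array([135.1002, 140.00]), np.array([1.0001, 2.00])

groups = SkyCoord(ra=ra_grp * u.deg, dec=dec_grp * u.deg)
xmm = SkyCoord(ra=ra_xmm * u.deg, dec=dec_xmm * u.deg)

# Nearest XMM-ATLAS source for every group, kept if within 20 arcsec.
idx, sep2d, _ = groups.match_to_catalog_sky(xmm)
matched = sep2d < 20.0 * u.arcsec
print(np.flatnonzero(matched), idx[matched])
```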
Figure 9: The 0.2 – 2.3 keV band source count rate, \(C_{\rm src}\), distribution for the results using the mean (dashed) and patrol (solid) background subtraction algorithms for \(\rm S/N>0\) (green), \(\rm S/N\geq 1\) (blue), and \(\rm S/N\geq 3\) (purple) groups, respectively. The yellow filled histogram shows the results for all the blind-detected sources based on eFEDS (Brunner et al., 2021), while the red hatched histogram shows the group candidates filtered by Liu et al. (2022) and Bulbul et al. (2021) and overlaid on the DESI footprint.

Figure 9 displays the \(0.2-2.3\) keV band \(C_{\rm src}\) distributions for the DESI groups and the blind-detected sources. Compared to the X-ray groups detected by Liu et al. (2022) and Bulbul et al. (2021), shown as the red hatched histogram, our X-ray groups with S/N\(\geq 3\) (purple solid and dashed histograms) are about an order of magnitude more numerous. The shift along the \(x\)-axis of the \(C_{\rm src}\) given by the patrol background subtraction algorithm is mainly caused by the projection correction factor, as is evident from comparison with the dashed histogram for the results using the mean subtraction algorithm. Although the vast majority of our groups do not have S/N\(\geq 3\) X-ray detections, they can still be used to carry out scientific studies, e.g., through stacking analyses. In figure 10, we show the comparison of the X-ray luminosities between our measurements and those obtained from the literature. First, our results obtained with the mean background subtraction algorithm are slightly lower (\(\lesssim 0.05\) dex) than those given by Brunner et al. (2021), which might be due to the selection of the aperture radius. The inset in the lower-right of the left panel shows the \(R_{180}\) distributions for the groups whose \(L_{\rm X}\) derived with the mean background subtraction algorithm is lower (hatched) or higher (filled) than that given by Brunner et al. (2021). Clearly, the former are generally smaller than the latter, implying that the selection of the aperture radius might affect the results. Because the count rate is integrated out to the aperture radius, the X-ray luminosities of groups with large projected radii are generally overestimated, and vice versa. In addition, our results are systematically lower than those obtained by Ranalli et al. (2015), and the S/N of our results is generally lower because the average exposure time of eFEDS is shorter than that of XMM-Newton. However, X-ray selected samples are known to miss galaxy groups with lower X-ray flux. We separate the groups matched by Ranalli et al. (2015) into those with X-ray fluxes brighter and fainter than \(\sim 2\times 10^{-15}\) erg/s/cm\({}^{2}\) in the eFEDS observations; those above the flux threshold show good agreement with Ranalli et al. (2015). From the right panel of figure 10, we see that our results obtained with the patrol background subtraction algorithm are systematically lower (\(\sim 0.15\) dex) than those of Brunner et al. (2021), because we corrected for the projection effect. Taking into account the cases with and without the projection correction, our X-ray measurements for the corresponding groups are in nice agreement with both of these studies.

Figure 10: The comparison of the \(L_{\rm X}\) obtained using the mean (left) and patrol (right) background subtraction algorithms with the \(L_{\rm X}\) taken from the literature: 1) Brunner et al. (2021): contour map; 2) Ranalli et al. (2015): symbols with error bars, where the groups brighter and fainter than \(\sim 2\times 10^{-15}\) erg/s/cm\({}^{2}\) are shown as circles and diamonds, respectively. The grey solid lines in both panels are the one-to-one correspondence between the results being compared. The inset in the lower-right of the left panel shows the \(R_{180}\) distributions for the groups with \(L_{\rm X}\) derived using the mean background subtraction algorithm lower (hatched) and higher (filled) than the \(L_{\rm X}\) given by Brunner et al. (2021). Note that the \(L_{\rm X}\) taken from the literature have been converted using the ECFs given by this study (see section 3.5).

## 4 X-ray luminosity - halo mass relation

One of the most important X-ray scaling relations for cosmology with galaxy groups is the \(L_{\rm X}-M_{h}\) relation. To derive the \(L_{\rm X}-M_{h}\) relation, a complete sample is required, because the scatter of \(L_{\rm X}\) at a given \(M_{h}\) is quite large and an X-ray flux-limited sample suffers from the selection bias that brighter objects can be observed out to farther distances. Such bias has previously been taken into account in deriving \(L_{X}-M_{h}\) relations based on X-ray selected group samples under different assumptions (e.g., Vikhlinin et al., 2009; Pratt et al., 2009; Mittal et al., 2011; Lovisari et al., 2015). We have measured the X-ray luminosities for _all_ the DESI groups overlaid on the eFEDS footprint, i.e., the X-ray measurements are complete for the groups at given \(M_{h}\) and \(z_{g}\). It is thus quite straightforward to derive the related \(L_{X}-M_{h}\) relations. We use the following two ways to derive the relations and make self-consistent checks.
### Stacking Method

In order to check whether there is any redshift dependence in the \(L_{X}-M_{h}\) relation, we first separate the groups into different \(M_{h}\) and \(z_{g}\) bins. To obtain sufficient signal for our investigation, we stack the X-ray luminosities of the groups in each bin. The stacked X-ray luminosity \(L_{\rm X,S}\) for a given set of \(N\) groups can be obtained in two ways. The first is to calculate the mean \(L_{\rm X}\) directly: \(L_{\rm X,S}=\sum_{i=1}^{N}\frac{L_{{\rm X},i}}{N}\); the second can be expressed as \[L_{\rm X,S}=f_{B}\cdot\frac{\sum\limits_{i=1}^{N}N_{{\rm src},i}}{\sum\limits_{i=1}^{N}\frac{g_{i}\cdot t_{{\rm exp},i}}{4\pi d_{{\rm L},i}^{2}\cdot f_{{\rm corr},i}}}, \tag{7}\] where \(d_{{\rm L},i}\) is the luminosity distance of the \(i\)th group. Both calculations are nearly consistent, and we use the results given by Equation (7) unless stated otherwise. In figure 11, we show the stacked X-ray luminosity \(L_{\rm X,S}\) obtained with the different methods, color-coded by \(z_{g}\). For the results obtained with the same method, the normalizations as well as the slopes show no significant differences. However, the stacked \(L_{\rm X,S}\) for the patrol background subtraction algorithm is slightly lower than that for the mean background subtraction algorithm at the low-\(M_{h}\) end. The projection effect tends to be more evident for small groups; the lower background level cannot fully compensate for the flux reduction due to the correction factor.

### Direct model fitting

The other way to obtain the \(L_{\rm X}-M_{h}\) relation is to assume a functional form and fit for the related parameters. Here we assume the \(L_{\rm X}-M_{h}\) relation has a power-law form: \[\left(\frac{L_{\rm X}}{\rm erg/s}\right)=10^{A}\cdot\left(\frac{M_{h}}{h^{-1}M_{\odot}}\right)^{B}, \tag{8}\] where \(10^{A}\) is the normalization and \(B\) is the slope. Some previous studies (e.g., Vikhlinin et al., 2009; Reichert et al., 2011) have taken into account the redshift evolution of the normalization by multiplying by \([\rm H(z)/H_{0}]^{C}\), where \(\rm H(z)\) is the Hubble-Lemaitre parameter, \(\rm H_{0}\) is the Hubble constant, and \(C\) is a constant. However, as we have not seen any significant redshift evolution in this study, we do not consider the redshift evolution term here. Owing to the fact that the photon counts are very small for numerous groups, especially at the low-mass end, we model the \(L_{\rm X}-M_{h}\) relation such that the observed \(L_{\rm X}\) is distributed around the scaling relation in a Poisson form. The probability for the \(i\)th group is given as \[\mathcal{P}\left(L_{{\rm X},i}|M_{h,i},A,B\right)=\frac{e^{-\lambda_{i}}\cdot\lambda_{i}^{N_{i}}}{\Gamma\left(N_{i}+1\right)}\quad;\quad N_{i}=\frac{g_{i}\left(\frac{L_{{\rm X},i}}{f_{B}}+L_{{\rm B},i}\right)\cdot t_{i}}{4\pi d_{{\rm L},i}^{2}}, \tag{9}\] where \(\lambda_{i}\) is the expected number of photon events (defined in Eq. 10 below), and \(L_{{\rm X},i}\), \(M_{h,i}\), \(f_{B}\), \(t_{i}\), \(d_{{\rm L},i}\), and \(g_{i}\) are the X-ray luminosity, halo mass, \(\beta\)-profile extension correction, mean exposure time, luminosity distance, and ECF of the \(i\)th group, respectively. Also, \(L_{{\rm B},i}\) is the subtracted background luminosity scaled to \(R_{\rm X}\), which can be expressed as \(L_{{\rm B},i}=\rho_{\rm bkg}\cdot\pi R_{\rm X}^{2}\cdot\frac{4\pi d_{{\rm L},i}^{2}}{g_{i}}\).
Note that the term \(N_{i}=g_{i}\left(\frac{L_{{\rm X},i}}{f_{B}}+L_{{\rm B},i}\right)\cdot t_{i}/(4\pi d_{{\rm L},i}^{2})\) in the Gamma function is the overall number of photon events within a radius of \(R_{\rm X}\) for the \(i\)th group. We assume that the X-ray luminosity of each group is determined by its \(M_{h}\) only; each group then has an expected number of photon events, \(\lambda_{i}\), defined as \[\lambda_{i}=\frac{g_{i}\left(\frac{\left\langle L_{{\rm X},i}\right\rangle}{f_{B}}+L_{{\rm B},i}\right)\cdot t_{i}}{4\pi d_{{\rm L},i}^{2}}, \tag{10}\] where \(\left\langle L_{{\rm X},i}\right\rangle=10^{A}\cdot\left(\frac{M_{h,i}}{h^{-1}M_{\odot}}\right)^{B}\). This yields a likelihood function that can be written as \(\ln\mathcal{Z}\equiv\sum\limits_{i=1}^{N}\ln\mathcal{P}\left(L_{{\rm X},i}|M_{h,i},A,B\right)\), and we need to find the best-fit parameters that maximize the likelihood.
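A compact numerical sketch of this fit is given below. It generates toy Poisson data from a known relation and recovers \((A,B)\) by minimizing \(-\ln\mathcal{Z}\); the per-group factors \(g_{i}t_{i}/(4\pi d_{{\rm L},i}^{2}f_{B})\) are collapsed into a single illustrative conversion number, so none of the values correspond to the actual survey.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_like(params, m_h, n_obs, conv, l_bkg):
    """-ln Z of Eqs. (9)-(10); conv_i ~ g_i * t_i / (4 pi d_L,i^2 f_B)."""
    a, b = params
    lam = conv * (10.0**a * m_h**b + l_bkg)      # expected photon events
    return -np.sum(n_obs * np.log(lam) - lam - gammaln(n_obs + 1.0))

# Toy data: 1000 groups drawn from a true relation A = 28.5, B = 1.0.
rng = np.random.default_rng(1)
m_h = 10.0 ** rng.uniform(12.0, 14.5, 1000)
conv = np.full(1000, 1.0e-40)                    # illustrative conversion
l_bkg = np.full(1000, 5.0e40)                    # background term, conv*L_B ~ 5
n_obs = rng.poisson(conv * (10.0**28.5 * m_h + l_bkg)).astype(float)

res = minimize(neg_log_like, x0=[28.0, 1.1],
               args=(m_h, n_obs, conv, l_bkg), method="Nelder-Mead")
print(res.x)                                     # best-fit (A, B) near truth
```

Working directly with photon events in this way lets groups with only a handful of counts, or even formally negative net counts, contribute to the fit without any S/N cut.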
Figure 11: The X-ray group luminosity, \(L_{\rm X}\), obtained with both algorithms versus halo mass, \(M_{h}\), for all the DESI groups used in this work. The triangles and hexagons with error bars represent the _stacked_ X-ray luminosity, \(L_{\rm X,S}\), as a function of \(M_{h}\), color-coded by redshift, for the results obtained with the two algorithms, respectively. Only the data bins with at least 50 groups are plotted. The solid lines show our best fits for the overall samples, while the dash-dot lines are the best fits for the \(M_{h}\geq 10^{13}h^{-1}M_{\odot}\) subsamples. The magenta, green, blue, and purple dashed lines show the results obtained by Wang et al. (2014), Eckmiller et al. (2011), Schellenberger & Reiprich (2017), and Kettula et al. (2015), respectively.

### Results

Our best-fit \(L_{\rm X}-M_{h}\) relations for both algorithms are presented in figure 11, where we report a normalization of \(10^{28.46\pm 0.03}\) with a slope of \(1.024\pm 0.002\) for the mean background subtraction algorithm, and a normalization of \(10^{26.73\pm 0.04}\) with a slope of \(1.140\pm 0.003\) for the patrol background subtraction algorithm. Very encouragingly, both results show nice agreement with their respective stacked \(L_{\rm X,S}\), demonstrating that our model constraints are self-consistent. For comparison, we also plot in Figure 11 the results obtained previously by Wang et al. (2014), Eckmiller et al. (2011), Schellenberger & Reiprich (2017), and Kettula et al. (2015). Note that Eckmiller et al. (2011) and Kettula et al. (2015) give \(L_{\rm X}-M_{180}\) relations, and Schellenberger & Reiprich (2017) gives the \(L_{\rm X}-M_{500}\) scaling relation after correcting for the Malmquist and Eddington biases. Their group mass indicators are slightly different from ours. To unify the definition of \(M_{h}\), we convert the \(M_{180}\) and \(M_{500}\) to \(M_{h}\) (\(M_{180}\)) by assuming that the dark matter halos follow a Navarro-Frenk-White (NFW, Navarro et al. 1997) density profile with concentration parameters given by the concentration-mass relation of Maccio et al. (2007). Based on this assumption, we get \(M_{h}/M_{180}=1.03\) and \(M_{h}/M_{500}=1.38\) when the concentration index is \(c_{180}=6\). Note that the concentration index is negatively correlated with \(M_{h}\), and \(M_{h}/M_{180}\) and \(M_{h}/M_{500}\) vary with the concentration index. However, the differences in \(M_{h}/M_{180}\) (\(M_{h}/M_{500}\)) between adopting \(c_{180}=5\) and \(c_{180}=12\) are smaller than \(\lesssim 0.01\) dex (\(\lesssim 0.07\) dex), so we ignore the change of slope for these relations taken from the literature.

Clearly, our model constraints on the slopes, \(1.024-1.140\), are flatter than the range of \(1.27-1.65\) obtained in the literature, but close to the slope predicted by the self-similar relation: \(L_{\rm X}^{0.1-2.4\rm keV}\propto M\) (Equation 26 in Schellenberger & Reiprich 2017). However, these previous results are generally obtained from samples with \(M_{h}\gtrsim 10^{13}h^{-1}M_{\odot}\), and the slope of the \(L_{\rm X}-M_{h}\) relation might differ in different \(M_{h}\) ranges. Here we apply the same method to fit the \(L_{\rm X}-M_{h}\) relation for the groups with \(M_{h}\geq 10^{13}h^{-1}M_{\odot}\), and we plot the best-fit results for both algorithms as dash-dot lines in figure 11, where we report a normalization of \(10^{26.91\pm 0.06}\) with a slope of \(1.135\pm 0.004\) for the mean background subtraction algorithm, and a normalization of \(10^{25.64\pm 0.08}\) with a slope of \(1.217\pm 0.005\) for the patrol background subtraction algorithm. These results are still flatter than those taken from the literature, but steeper than the results obtained for the overall sample. As pointed out by Lovisari et al. (2021), a mass-dependent bias in the group mass estimate might potentially affect the slope of the \(L_{\rm X}-M_{h}\) relation, especially at the low-mass end. Because it is difficult to distinguish the low-temperature emitting gas of small groups from the galactic foreground, their X-ray properties are generally measured out to a smaller radial extent. An estimate of the group mass based on X-ray information and hydrostatic equilibrium might therefore affect the shape of the \(L_{\rm X}-M_{h}\) relation. In this work, the group mass, \(M_{h}\), is obtained from abundance matching between the cumulative halo mass and group luminosity functions, and the uncertainty of \(M_{h}\) is less than \(\sim 0.4\) dex. Such independently estimated \(M_{h}\) makes the \(L_{\rm X}-M_{h}\) relation less prone to bias.

## 5 Conclusion

In this study, using optical information such as the positions of the massive member galaxies, \(M_{h}\), and \(z_{\rm g}\), we applied two different algorithms to measure the soft X-ray (rest-frame 0.1 - 2.4 keV) band luminosities of \(\sim 600,000\) groups identified from DESI DR9 and overlaid on the footprint of eFEDS, spanning redshifts \(0.0\leq z_{\rm g}\leq 1.0\) and group masses \(10^{10.76}h^{-1}M_{\odot}\leq M_{h}\leq 10^{15.0}h^{-1}M_{\odot}\). The main results of this paper are summarized as follows. 1. Among these groups, \(\sim 0.9\%\) have \(\rm S/N\geq 3\), \(\sim 14.3\%\) have \(\rm S/N\geq 1\), and \(\sim 47.3\%\) have \(\rm S/N>0\) when we subtract the background using the average count rate density in the background ring of each group, while the percentages are slightly higher (\(\sim 1.0\%\), \(\sim 17.3\%\), and \(\sim 52.7\%\) for \(\rm S/N\geq 3\), \(\rm S/N\geq 1\), and \(\rm S/N>0\), respectively) when we subtract the background using the count rate density averaged over the regions that do not lie within \(R_{180}\) of any group. Compared to the blind-detected X-ray groups based on eFEDS, the number of X-ray groups detected with \(\rm S/N\geq 3\) has increased by nearly a factor of 6. 2. By stacking the X-ray images of the groups without resolved X-ray centers in different \(M_{h}\) and \(z_{\rm g}\) bins, we find that the BGG can well represent the X-ray peak of a group system, and that the average surface brightness profiles roughly follow the \(\beta\)-model prediction.
We measure the stacked X-ray luminosities around groups of similar mass divided into five redshift bins. We find that the X-ray luminosity scales roughly linearly with halo mass and is independent of redshift. 3. By properly taking into account the Poisson fluctuations, we obtain overall scaling relations between X-ray luminosity and halo mass of \(L_{\rm X}=10^{28.46\pm 0.03}M_{h}^{1.024\pm 0.002}\) and \(L_{\rm X}=10^{26.73\pm 0.04}M_{h}^{1.140\pm 0.003}\) based on the results of the two algorithms, both of which are consistent with the results obtained using the stacking method. Both scaling relations are flatter than those obtained previously by Wang et al. (2014), Eckmiller et al. (2011), Schellenberger & Reiprich (2017), and Kettula et al. (2015), but closer to the self-similar prediction. Combined with the DESI Legacy Imaging Surveys, our results demonstrate the capability of eROSITA to determine the X-ray emission out to \(R_{180}\) for a deep flux-limited galaxy group sample. Future analysis using eROSITA all-sky survey data, combined with group catalogs with more accurate redshifts, will provide much enhanced quantitative X-ray measurements. Detailed analysis of the hot gas evolution in galaxy groups, and the physical modeling of their evolution, will be presented in forthcoming papers.

## Acknowledgements

We are thankful to Teng Liu for helpful discussions. This work is supported by the National Science Foundation of China (Nos. 11833005, 11890692, 11621303, 12141302), 111 project No. B20019, and Shanghai Natural Science Foundation, grant No. 19ZR1466800. We acknowledge the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A02. The computations in this paper were run on the Gravity Supercomputer at Shanghai Jiao Tong University. This work is based on data from the DESI Legacy Imaging Surveys. The DESI Legacy Imaging Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS), and the Mayall \(z\)-band Legacy Survey (MzLS). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). Legacy Surveys also uses data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. Legacy Surveys was supported by: the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility; the U.S. National Science Foundation, Division of Astronomical Sciences; the National Astronomical Observatories of China, the Chinese Academy of Sciences and the Chinese National Natural Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
The Photometric Redshifts for the Legacy Surveys (PRLS) catalog used in this paper was produced thanks to funding from the U.S. Department of Energy Office of Science, Office of High Energy Physics via grant DE-SC0007914. This work is also based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The SRG spacecraft was built by the Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig-Maximilians-Universität Munich also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2308.14882
R-Matrix calculations for opacities: I. Methodology and computations
An extended version of the R-matrix methodology is presented for calculation of radiative parameters for improved plasma opacities. Contrast and comparisons with existing methods primarily relying on the Distorted Wave (DW) approximation are discussed to verify accuracy and resolve outstanding issues, particularly with reference to the Opacity Project (OP). Among the improvements incorporated are: (i) large-scale Breit-Pauli R-matrix (BPRM) calculations for complex atomic systems including fine structure, (ii) convergent close coupling wave function expansions for the (e+ion) system to compute oscillator strengths and photoionization cross sections, (iii) open and closed shell iron ions of interest in astrophysics and experiments, (iv) a treatment for plasma broadening of autoionizing resonances as a function of energy-temperature-density dependent cross sections, (v) a "top-up" procedure to compare convergence with R-matrix calculations for highly excited levels, and (vi) spectroscopic identification of resonances and bound (e+ion) levels. The present R-matrix monochromatic opacity spectra are fundamentally different from OP and lead to enhanced Rosseland and Planck mean opacities. An outline of the work reported in other papers in this series and those in progress is presented. Based on the present re-examination of the OP work, it is evident that opacities of heavy elements require revisions in high temperature-density plasma sources.
A. K. Pradhan, S. N. Nahar, W. Eissner
2023-08-28T20:10:42Z
http://arxiv.org/abs/2308.14882v1
# R-Matrix calculations for opacities: I. Methodology and computations

###### Abstract

An extended version of the R-matrix methodology is presented for calculation of radiative parameters for improved plasma opacities. Contrast and comparisons with existing methods primarily relying on the Distorted Wave (DW) approximation are discussed to verify accuracy and resolve outstanding issues, particularly with reference to the Opacity Project (OP). Among the improvements incorporated are: (i) large-scale Breit-Pauli R-matrix (BPRM) calculations for complex atomic systems including fine structure, (ii) convergent close coupling wave function expansions for the (e + ion) system to compute oscillator strengths and photoionization cross sections, (iii) open and closed shell iron ions of interest in astrophysics and experiments, (iv) a treatment for plasma broadening of autoionizing resonances as a function of energy-temperature-density dependent cross sections, (v) a "top-up" procedure to compare convergence with R-matrix calculations for highly excited levels, and (vi) spectroscopic identification of resonances and bound (e + ion) levels. The present R-matrix monochromatic opacity spectra are fundamentally different from OP and lead to enhanced Rosseland and Planck mean opacities. An outline of the work reported in other papers in this series and those in progress is presented. Based on the present re-examination of the OP work, it is evident that opacities of heavy elements require revisions in high temperature-density plasma sources.

## 1 Introduction

Opacity is due to the interaction of radiation with matter. It is a fundamental parameter in plasma physics, astrophysics, and atomic physics that determines radiation transport, and entails absorption and scattering of photons by atoms at all frequencies of radiation prevalent in a given environment. Methods for calculating opacities are well established, and essentially involve the atomic physics of bound-bound and bound-free transition probabilities incorporated within an equation-of-state (EOS) of the plasma. However, in practice complexities arise owing to several physical factors that influence the accurate determination of opacity, and these are addressed in this series of papers. As this work is an extension of the Opacity Project (hereafter OP), we first briefly outline OP and its calculations described in the _Atomic data for opacities_ (hereafter ADOC) series of papers, and their limitations. Next, we describe the extensions and improvements over OP in the present series, _R-Matrix calculations for opacities_ (hereafter RMOP), subsequently referred to as papers RMOP1, RMOP2, RMOP3, and RMOP4.

### The Opacity Project

The OP work by M.J. Seaton and collaborators ([1, 2, 3] and references therein) was devoted to the development of a framework for calculation of opacities based on the close coupling approximation, implemented in the powerful R-Matrix (RM) method by P.G. Burke and collaborators and employed extensively for accurate calculations of a variety of radiative and collisional atomic processes [4, 5, 6]. The OP work entailed an EOS for stellar interior plasmas based on the "chemical picture" by D. Mihalas, D.G. Hummer and W. Dappen (named MHD-EOS [7]), which connects physically with the OP atomic data via an _occupation probability_ factor in the ionization fractions, level populations, and partition functions of the modified Saha-Boltzmann equations that account for plasma interactions.
Despite unprecedented effort and advances, the OP R-matrix work reported in ADOC faced several then-intractable difficulties that limited the scope of the atomic calculations. Primarily, the limitations were due to computational constraints which, in turn, did not enable accounting for important physical effects and a complete R-matrix calculation of atomic opacities. The main features and deficiencies of OP are as follows: (I) the calculations were in LS coupling, neglecting relativistic fine structure; (II) the close coupling (hereafter CC) wavefunction expansion for the target or core ion in the (e + ion) system included only a few ground configuration LS terms; (III) inner-shell excitations could not be included, owing to the restricted target ion expansion; (IV) autoionizing resonances in bound-free photoionization cross sections were delineated only within the few excited target terms; (V) total angular and spin (e + ion) symmetries with large orbital angular-spin quantum numbers were not computed. All of these factors are crucial for a complete and accurate opacity calculation. Therefore, the OP work incorporated a relatively small subset of R-matrix data. Rather, most of the opacity contributions were obtained using atomic structure codes and the Distorted Wave (hereafter DW) approximation, similar to other opacity models [6-10]. In addition to the limitations of the ADOC work mentioned above, new physical issues emerge in extending R-matrix calculations towards a complete calculation of opacities. There are three major problems that need to be solved: (A) convergence of the large coupled channel wavefunction expansions necessary to include sufficient atomic structures manifest in opacity spectra, (B) completeness of high-\(n\ell\) contributions up to \(n\to\infty\), and (C) attenuation of resonance profiles due to _intrinsic_ autoionization broadening (included in RM calculations in an ab initio manner) and _extrinsic_ plasma effects due to temperature and density, as generally considered for bound-bound line opacity.

### Scientific problems

The erstwhile OP work summarized above concluded that OP and another independent calculation, OPAL [15, 3], do not differ by more than 2.5%, implying that a further revision of opacities was not needed [34]. However, there are outstanding problems related to opacities derived from OP and all other opacity models. The foremost among them is related to a downward revision of the solar abundances of common volatile elements such as carbon, nitrogen, oxygen and neon, relative to earlier values, by up to \(\sim\)50% [12, 13]. Thereupon, astrophysicists suggested that an _upward_ revision of opacities by \(\sim\)10% [18, 33] would accommodate the lower solar abundances, since abundances are inversely linked to opacities, which affect the radiation field in the non-local thermodynamic equilibrium (NLTE) models employed to analyze observed line profiles of elements. In particular, the iron opacity plays a crucial role owing to the relatively high abundance of iron. Also, recent experimental measurements of iron opacity were higher than given by OP and other models [29, 30]. Whereas opacity models have been improved by including additional transition arrays, resonances, etc., the discrepancies with astrophysical and experimental results remain outstanding. This series describes the work carried out since the OP opacities reported in 2003 and available via the database OPServer [28].
### BPRM and DW Methods

Current opacity models employ the DW approximation or variants thereof. In order to compare and contrast the present BPRM results, as well as to test the complementarity and completeness of the atomic data, we have also carried out relativistic distorted wave calculations, reported in paper RMOP3 of this series. In principle, the DW approximation based on an atomic structure calculation coupled to the continuum yields complete sets of opacities. Oscillator strengths and photoionization cross sections are computed for all possible bound-bound and bound-free transitions among levels specified by the electronic configurations included in the atomic calculation. However, since the DW approximation includes only the coupling between initial and final states, the complexity of interference between the bound and continuum wavefunction expansions involving other levels is neglected. That interference manifests itself as quasi-bound levels and autoionizing resonances embedded in the continua. DW models employ the independent resonance approximation, which treats the bound-bound transition probability independently from the coupling to the continuum. Apart from the relative simplicity of the atomic computations, an advantage of DW models is that well-established line broadening treatments may be employed to account for plasma interactions. Another advantage is the ease of completeness of datasets, which can be augmented by including additional configurations with multiple-electron excitations. Furthermore, high angular-spin momenta do not pose the computational problem commonly encountered in CC calculations. For these reasons the DW method is generally employed for opacity calculations. In contrast, RMOP calculations are computationally laborious and time-consuming. However, coupling effects can affect atomic parameters significantly.

### Prior work

Opacity in the bound-free continuum is dominated by autoionizing resonances, as shown in the recently completed works cited above and in the present results. Hitherto, they have generally been treated as lines, akin to bound-bound transitions. The most important consequence, and likely source of missing opacity, is the _intrinsic_ autoionization broadening and the _extrinsic_ plasma broadening thereof. The much wider spread of resonances in the continuum than of lines raises the opacity significantly [24, 25]. Recent work [26] extended the Fe xvii R-matrix calculations by including more configurations than NP16a. Whereas that confirmed our earlier results for photoionization cross sections, there are several issues: (i) D21 do not consider plasma broadening of autoionizing resonances, which enhances opacities significantly (see papers II and III); (ii) the D21 comparison between DW and unbroadened RM results appears to agree, although the two are fundamentally different since the DW method treats autoionizing levels and their broadening as for lines; (iii) D21 do not compare with the unbroadened RM cross sections for Fe xvii previously available from the database NORAD [31]; (iv) inexplicably, the D21 RM Fe xvii Rosseland mean opacities are 10% below the primarily DW results from OP2005, whereas all other DW models yield values up to 1.5 times higher [25]; there is no reason why RM opacities, even without broadening, should be lower than OP and other DW models, except that D21 might have an incomplete number of initial Fe xvii levels in their RM calculations. Other issues such as radiative data, cross sections, and the shapes of autoionizing resonances due to plasma broadening are addressed in this series.
Experimental opacity measurements at the Sandia Z facility for Fe, Ni, and Cr have highlighted deficiencies in theoretical models [29, 30]. However, the experimental results need to be viewed in the context of the _very limited energy range where the monochromatic iron opacity is actually measured_. Indeed, the experimental energy range does _not_ include the region of maximum opacity from Fe ions around \(\sim 1\) keV (well known in X-ray spectroscopy). Therefore, the experimental opacities _per se_ contribute only about 20% to the Rosseland mean opacities directly. However, extrapolating the differences between OP and the experimental data in that limited range, B15 estimate a solar mixture opacity enhancement of 7\(\pm\)3% (the large error bars imply a factor of 2.5 discrepancy between the low and high experimental values). Opacity is a sensitive function of temperature and density, and an incomplete tabulation in a limited range may give inconsistent results, since different ionization states of Cr, Fe, and Ni contribute. For example, N-like Cr ions with 3 active p-electrons make the largest contribution at the Z temperature and density, whereas for iron the F-like ion with 5 p-electrons is the largest contributor; for Ni it is the Ne-like, closed p-shell configuration. These issues need to be examined individually over a much wider range of energy-temperature-density to ascertain the source of the discrepancies. Thus, although the experimental results might point to "missing physics", it is first important to include physics that is known but missing, such as the plasma broadening described in this series of papers.

### Overview of RMOP calculations

Sections of this first paper (RMOP1) cover the following topics, as well as general features of the subsequent papers in the series: (i) opacities and the solar temperature-density structure, (ii) the local-thermodynamic-equilibrium (LTE) plasma equation-of-state valid in stellar interiors, (iii) relativistic effects using the Breit-Pauli R-Matrix (BPRM) approximation, (iv) DW and BPRM calculations, (v) plasma broadening of autoionizing resonances in bound-free opacity, and (vi) convergence and completeness of atomic data.

## 2 Monochromatic and mean opacities

The atomic parameters comprising the monochromatic opacity are due to bound-bound (bb), bound-free (bf), free-free (ff), and photon scattering (sc) contributions: \[\kappa_{ijk}(\nu)=\sum_{k}a_{k}\sum_{j}x_{j}\sum_{i,i^{\prime}}[\kappa_{bb}(i,i^{\prime};\nu)+\kappa_{bf}(i,\epsilon i^{\prime};\nu)+\kappa_{ff}(\epsilon i,\epsilon^{\prime}i^{\prime};\nu)+\kappa_{sc}(\nu)]\, \tag{1}\] where \(a_{k}\) is the abundance of element \(k\), \(x_{j}\) the ionization fraction of ion \(j\), \(i\) and \(i^{\prime}\) are the initial bound and final bound/continuum states of the atomic species, and \(\epsilon\) represents the electron energy in the continuum. The atomic absorption coefficients are related to the local radiation field at temperature T, described by the Planck function \[B_{\nu}(T)=\frac{(2h\nu^{3}/c^{2})}{e^{h\nu/kT}-1}. \tag{2}\] Macroscopic quantities such as radiative forces and fluxes may be computed in terms of mean opacities, such as the Planck Mean Opacity (PMO) \[\kappa_{P}B(T)=\int\kappa_{\nu}B_{\nu}d\nu. \tag{3}\]
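As a numerical illustration of Equations (2) and (3), the short Python sketch below evaluates \(B_{\nu}(T)\) on a frequency mesh and forms the Planck mean of a toy \(\kappa_{\nu}\). The constants are in CGS, and the opacity array is a stand-in for real monochromatic data, not output of the RMOP codes.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16       # erg s, cm/s, erg/K (CGS)

def planck(nu, temp):
    """B_nu(T) of Eq. (2) in CGS units."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * temp))

temp = 2.0e6                                    # ~BCZ temperature (Table 1)
nu = np.linspace(1.0e15, 1.0e18, 100_000)       # cf. the 100,000-point mesh
kappa_nu = 1.0 + 0.5 * np.sin(np.log(nu))       # toy monochromatic opacity

b_nu = planck(nu, temp)
kappa_p = np.trapz(kappa_nu * b_nu, nu) / np.trapz(b_nu, nu)   # Eq. (3)
print(kappa_p)
```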
Of particular interest to opacity calculations is the Rosseland Mean Opacity (RMO), \(\kappa_{R}\), defined as the _harmonic mean_ of the monochromatic opacity \(\kappa_{ijk}(\nu)\): \[\frac{1}{\kappa_{R}}=\frac{\int_{0}^{\infty}g(u)\kappa_{\nu}^{-1}du}{\int_{0}^{\infty}g(u)du}\quad;\quad g(u)=u^{4}e^{-u}(1-e^{-u})^{-2}, \tag{4}\] where \(g(u)\), proportional to \(dB_{\nu}/dT\), is the Planck weighting function (corrected for stimulated emission) and \(u=h\nu/kT\). Eq. 4 is mathematically and physically complex to evaluate. Whereas the opacity determines radiative transfer through the stellar interior, the RMO is related to the total radiation flux that eventually escapes the star and is observed [22]. Although the singularity in the denominator \(1/\kappa_{\nu}\) is generally avoided owing to overlapping spectral features, the RMO depends critically on the precise distribution of the monochromatic opacity at all frequencies at a given (T, \(\rho\)) at each point inside the star. The opacity spectrum is a complex quantity with superimposed dips or windows and large peaks that vary by orders of magnitude, owing to the energy dependence of the atomic parameters: \(\kappa_{bb}(i,i^{\prime})=(\pi e^{2}/m_{e}c)N_{i}f_{ii^{\prime}}\phi_{\nu}\) and \(\kappa_{bf}=N_{i}\sigma_{\nu}\). The \(\kappa_{\nu}\) is thus primarily a function of the bb oscillator strengths \(f\), the bf photoionization cross sections \(\sigma_{\nu}\), the level populations \(N_{i}\), and the line-profile factor \(\phi_{\nu}\). The RMOP framework for large-scale computations comprises mainly the first two components of the opacity in Eq. (1): (i) the bb transition probabilities and (ii) the bf photoionization cross sections.
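The harmonic-mean character of Equation (4) can be made concrete with a few lines of Python, sketched below with a toy opacity spectrum. The example also adds a narrow low-opacity "window" to show how such features dominate \(\kappa_{R}\), which is why the precise distribution of \(\kappa_{\nu}\) matters so much.

```python
import numpy as np

def rosseland_mean(u, kappa_nu):
    """Eq. (4): harmonic mean of kappa_nu weighted by g(u), u = h*nu/kT."""
    g = u**4 * np.exp(-u) / (1.0 - np.exp(-u))**2
    return np.trapz(g, u) / np.trapz(g / kappa_nu, u)

u = np.linspace(0.05, 30.0, 100_000)
spectrum = 1.0 + 10.0 * np.exp(-((u - 3.0) / 0.5)**2)    # one broad "line"
window = np.where(np.abs(u - 5.0) < 0.2, 0.05, spectrum)  # add a low-kappa window
print(rosseland_mean(u, spectrum), rosseland_mean(u, window))
```

Even though the window covers a tiny fraction of the mesh, it pulls the Rosseland mean down sharply, illustrating why missing opacity in the windows is as important as the peaks.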
### Solar structure and opacity

Tables 1 and 2 provide a numerical glimpse of the solar interior structure and related plasma and atomic parameters. In Table 1 we focus on the region outside of the nuclear fusion core, in the radiative zone up to the boundary at the base of the convection zone (BCZ) [21]. Helioseismological analysis of thousands of modes of solar oscillations yields a precise measurement of the BCZ at fractional solar radius \(r/R_{\odot}=0.713\pm 0.001\). At and above the BCZ, outward energy transport via radiative diffusion gives way to convection, which becomes more efficient since \((dT/dr)_{diff}>(dT/dr)_{ad}\), the adiabatic temperature gradient. There are two main reasons for convective motions to be more efficient at the BCZ: the weight of the outer layers is less than the radiation pressure from the interior below, and the _increase_ in opacity from higher to lower temperatures. Opacity increases owing to the prevalence of lower ionization stages, as more bound electrons are active in the absorption of radiation via a larger number of bound-bound and bound-free transitions than at the higher temperatures below the BCZ. Table 2 shows the ionization states of the dominant elements that determine the opacity at the BCZ: O, Ne, and Fe. Almost 90% of oxygen is H-like or fully ionized, and 86% of neon is in the H-like and He-like ionization states. But one- and two-electron K-shell ionization states do not contribute as much to opacity as lower ones, such as the partially filled L-shell Fe ions, with percentage contributions given in Table 2. Just three Fe ions constitute 85% of iron at BCZ temperatures and densities. Those ions, Fe XVII, XVIII, and XIX, have very complex atomic structures and a large number of radiative transitions that need to be accounted for.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(r/R_{\odot}\) & \(\rho\)(g/cc) & T(K) & \(N_{e}(cm^{-3})\) \\ \hline 0.00 & 162.2 & 1.58(7) & 1.0(26) \\ 0.35 & 6.89 & 5.75(6) & 3.57(24) \\ 0.40 & 3.88 & 5.01(6) & 2.01(24) \\ 0.45 & 2.29 & 4.47(6) & 1.19(24) \\ 0.50 & 1.31 & 3.89(6) & 6.82(23) \\ 0.55 & 0.82 & 3.47(6) & 4.25(23) \\ 0.60 & 0.51 & 3.09(6) & 2.67(23) \\ 0.65 & 0.33 & 2.69(6) & 1.71(23) \\ 0.71 & 0.20 & 2.24(6) & 1.02(23) \\ \hline \end{tabular} \end{table} Table 1: Solar opacity parameters derived from [20] using elemental abundances from [12, 13] (numbers in parentheses are powers of 10), with the exception of the central temperature at \(r/R_{\odot}=0\), which is from several other sources. The boundary of the radiative zone and base of the convection zone (BCZ) is accurately determined from helioseismology to be 0.713\(\pm\)0.001 [14, 16].

\begin{table} \begin{tabular}{|c|c|} \hline Element & Ionization state (fraction) \\ \hline Oxygen & O VII (0.11), O VIII (0.47), O IX (0.42) \\ Neon & Ne VIII (0.10), Ne IX (0.51), Ne X (0.35) \\ Iron & Fe XVI (0.031), Fe XVII (0.196), Fe XVIII (0.372), Fe XIX (0.284), Fe XX (0.098) \\ \hline \end{tabular} \end{table} Table 2: Main solar BCZ atomic-opacity-contributing elements, with ionization states and fractions \(>\)0.03, at T \(=2.24\times 10^{6}\)K and \(N_{e}=10^{23}cm^{-3}\), obtained from [19] using the Q-form of the MHD-EOS [7, 8]. The actual elemental opacity contributions depend significantly on the theoretical model employed with respect to solar abundances and the EOS [23].

Large-scale calculations are necessary to compute accurate opacities, and detailed calculations for these three ions are reported in RMOP2. In table 3 we present a sample of the lowest and highest levels of Fe xviii, which has the highest ionization fraction of all Fe ions at BCZ conditions (table 2). As described in paper RMOP2, the RMOP calculations for Fe xviii yield 1,174 bound levels, and a total of 1,604 levels including high-lying levels with \(n>4\) from the paper RMOP4 calculations that test convergence and completeness. The MHD-EOS parameters given in table 3 demonstrate the typical distribution of occupation probabilities and level populations across the bound-level spectrum of complex Fe ions. Very high-lying levels make an insignificant contribution to opacity calculations. Previous works have discussed the inexplicable differences of orders of magnitude in occupation probabilities between OP and OPAL [32]. The EOS issue therefore remains open for future study.

### LTE equation-of-state

Stellar interiors are generally assumed to be characterized by a local temperature-density (TD) parameter in LTE at any given point in the star. However, the TD tracks vary by orders of magnitude as the nuclear energy produced in the core is transported through the radiative diffusion zone and the convective zones up to the atmosphere, where radiation escapes. A realistic EOS must therefore account for atomic-plasma effects throughout. In the first paper on OP opacities ([2], hereafter SYMP), the authors defined "stellar envelopes to be regions where atoms are not markedly perturbed by the plasma environment"; the stellar envelope generally comprises the radiative and convection zones. The MHD-EOS is a modified version of the Saha-Boltzmann equations, based on the concept of the _occupation probability_ \(w\) of an atomic level being populated, taking into
account perturbations of the energy levels by the plasma environment: \[N_{ij}=\frac{N_{j}g_{ij}w_{ij}e^{-E_{ij}/kT}}{U_{j}}. \tag{5}\] The \(w_{ij}\) are the occupation probabilities of levels \(i\) in ionization state \(j\). The occupation probabilities do not have a sharp cut-off, but approach zero for high \(n\) as the levels are "dissolved" by plasma interactions. The partition function is re-defined as \[U_{j}=\sum_{i}g_{ij}w_{ij}e^{(-E_{ij}/kT)}. \tag{6}\] \(E_{ij}\) is the excitation energy of level \(i\), \(g_{ij}\) its statistical weight, and \(T\) the temperature. The \(w_{ij}\) are determined upon free-energy minimization in the plasma at a given temperature-density. An atomic level \(i\) is considered dissolved by the plasma microfield when its highest Stark sub-level overlaps with the lowest sub-level of the \(i+1\) level (discussed further in RMOP3). The original version of the MHD-EOS estimated its range of validity to be \(\rho\lesssim 0.02\) g/cc. That rather restrictive density limit is lower than the densities prevalent at the BCZ (c.f. Table 1) and in most of the solar interior. The later version, called Q-MHD [8], has been employed in all present calculations. However, the EOS employed by OPAL differs considerably from that of OP; these differences, and the approximations made in the OP work, have been discussed previously, particularly for H-like ions for which data have been available [3, 32]. Nevertheless, the agreement between OP and OPAL to \(<5\%\) [33] seems to indicate that the differences in the EOS do not affect the final results. But the EOS redistributes level populations significantly; further work on improving the MHD-EOS is in progress.
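A minimal sketch of the modified Saha-Boltzmann level populations of Equations (5) and (6) is given below; the level energies, weights, and occupation probabilities are toy numbers in the spirit of Table 3, not actual MHD-EOS output.

```python
import numpy as np

KB_RY = 6.333e-6                        # Boltzmann constant in Ry/K

def level_populations(e_ry, g, w, temp, n_ion=1.0):
    """Eqs. (5)-(6): N_ij = N_j g w exp(-E/kT) / U_j,
    with the partition function U_j = sum(g w exp(-E/kT))."""
    boltz = g * w * np.exp(-e_ry / (KB_RY * temp))
    return n_ion * boltz / boltz.sum()

# Toy levels: ground, one excited, and a nearly dissolved high-n level (w << 1).
e_ry = np.array([0.0, 0.9, 99.4])       # excitation energies (Ry)
g = np.array([4.0, 2.0, 2.0])           # statistical weights
w = np.array([1.0, 1.0, 0.01])          # occupation probabilities
print(level_populations(e_ry, g, w, temp=2.0e6))
```

The suppression of the high-lying level by its small \(w\) mirrors the rapid fall-off of W(OP) and the level populations seen in Table 3.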
## 3 R-matrix opacity calculations

In this section we describe in some detail the differences from the OP work mentioned previously, along with the revised RMOP codes, related extensions, and the new set of opacity codes.

### Convergent close coupling calculations

Owing to the fact that the OP R-matrix calculations included only a few LS terms of the target or core ion, the (e + ion) wavefunction expansions were far from convergence and completeness of the computed quantities. In the present work, particularly for the iron ions reported in RMOP2, an effort is made to ensure convergence of the photoionization calculations in the close coupling (CC) approximation using the R-matrix method as developed in the OP [3] and later in the Iron Project (IP) [10]. In the CC approximation, the atomic system is represented as the 'target' or 'core' ion of N electrons interacting with the (N+1)\({}^{th}\) electron. The (N+1)\({}^{th}\) electron may be bound in the electron-ion system, or in the electron-ion continuum, depending on whether its energy is negative or positive. The total wavefunction, \(\Psi_{E}\), of the (N+1)-electron system in a symmetry \(SL\pi\) or \(J\pi\) is an expansion over the eigenfunctions of the target ion, \(\chi_{i}\) in a specific state \(S_{i}L_{i}(J_{i})\pi_{i}\), coupled with the (N+1)\({}^{th}\) electron function, \(\theta_{i}\): \[\Psi_{E}(e+ion)=A\sum_{i}\chi_{i}(ion)\theta_{i}+\sum_{j}c_{j}\Phi_{j}, \tag{7}\] where the sum is over the ground and excited states of the target or core ion. The (N+1)\({}^{th}\) electron with kinetic energy \(k_{i}^{2}\) corresponds to a channel labeled \(S_{i}L_{i}(J_{i})\pi_{i}k_{i}^{2}\ell_{i}(SL(J)\pi)\). The \(\Phi_{j}\)s are bound channel functions of the (N+1)-electron system that account for short-range correlation not considered in the first term and for the orthogonality between the continuum and the bound electron orbitals of the target. Substitution of \(\Psi_{E}(e+ion)\) into the Schrodinger equation \[H_{N+1}\Psi_{E}=E\Psi_{E} \tag{8}\] introduces a set of coupled equations that are solved using the R-matrix method. The solution is a continuum wavefunction \(\Psi_{F}\) for an electron with positive energy (E \(>\) 0), or a bound state \(\Psi_{B}\) at a _negative_ total energy (E \(\leq\) 0). The complex resonance structures in photoionization cross sections result from channel couplings between the continuum channels that are open (\(k_{i}^{2}>0\)) and those that are closed (\(k_{i}^{2}<0\)). Resonances occur at electron energies \(k_{i}^{2}\) corresponding to autoionizing states belonging to Rydberg series \(S_{i}L_{i}\pi_{i}\nu\ell\), where \(\nu\) is the effective quantum number, converging on to the target threshold \(S_{i}L_{i}\). Convergence of the (e + ion) expansion in Eq. (7) is a difficult computational problem in CC calculations, since the numerical size of the Hamiltonian increases as the square of the total number of channels in both the first and the second sums on the RHS. For example, in the calculations reported in RMOP2 there are hundreds of target levels for each iron ion and thousands of corresponding channels.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Level & Energy (Ry) & W(OP) & N(\% pop) \\ \hline \(1s^{2}2s^{2}2p^{5}(^{2}P_{3/2}^{o})\) & -99.924 & 1.00 & 8.79 \\ \(1s^{2}2s^{2}2p^{5}(^{2}P_{1/2}^{o})\) & -99.010 & 1.00 & 4.09 \\ \(1s^{2}2s2p^{6}(^{2}S_{1/2})\) & -90.156 & 1.00 & 2.03 \\ \(1s^{2}2s^{2}2p^{4}3s(^{4}P_{5/2})\) & -43.203 & 0.99 & 0.15 \\ \(1s^{2}2s^{2}2p^{4}3s(^{4}P_{3/2})\) & -42.957 & 0.99 & 0.05 \\ \(1s^{2}2s^{2}2p^{4}3s(^{4}P_{1/2})\) & -42.477 & 0.99 & 0.05 \\ Highest levels \(n>4\) & -0.500 & 0.56 & 4.06(-5) \\ Non-hydrogenic & -0.343 & 0.01 & 1.10(-7) \\ \hline \end{tabular} \end{table} Table 3: MHD equation-of-state parameters for Fe xviii at the solar BCZ: \(T=2\times 10^{6}K,\ N_{e}=10^{23}/cc\). Out of the 1604 bound levels calculated, the lowest six levels, their energies, occupation probabilities W(OP), and percentage level populations are given. The highest bound levels approaching the first ionization threshold E \(\to\) 0 are also given. The rapid decrease in W and N(%pop) by orders of magnitude is evident. Notation: 4.06(-5) = 4.06 \(\times 10^{-5}\).

### Relativistic effects and BPRM codes

The limited OP R-matrix calculations did not consider fine structure. However, subsequent IP work employed the BPRM framework [10, 11], including fine structure target levels and the recoupling scheme \(LS\to LSJ\). The relativistic BPRM Hamiltonian is given by \[H_{N+1}^{\rm BP}=\sum_{i=1}^{N+1}\left\{-\nabla_{i}^{2}-\frac{2Z}{r_{i}}+\sum_{j>i}^{N+1}\frac{2}{r_{ij}}\right\}+H_{N+1}^{\rm mass}+H_{N+1}^{\rm Dar}+H_{N+1}^{\rm so}, \tag{9}\] where the last three terms are the relativistic corrections: \[\begin{array}{l}\mbox{the mass correction term, }\;H^{\rm mass}=-\frac{\alpha^{2}}{4}\sum_{i}p_{i}^{4},\\ \mbox{the Darwin term, }\;H^{\rm Dar}=\frac{Z\alpha^{2}}{4}\sum_{i}\nabla^{2}(\frac{1}{r_{i}}),\\ \mbox{the spin}-\mbox{orbit interaction term, }\;H^{\rm so}=Z\alpha^{2}\sum_{i}\frac{1}{r_{i}^{2}}\mathbf{l}_{i}.\mathbf{s}_{i},\end{array} \tag{10}\] respectively. The BPRM codes used for the present opacity calculations are shown in Fig. 1, which is modified from the LS coupling version given in ADOC II [5].
The atomic structure codes Superstructure (SS), CIV3, STG1, STG2, STGH, STGB, STGF, STGBB, and STGBF are described in [5]. Briefly, SS and CIV3 are atomic structure codes; either one is first employed to obtain reasonably accurate target wavefunctions, eigenenergies, and oscillator strengths for the target ion. The target ion orbital radial functions are then used by STG1 to reconstruct the target and calculate the R-matrix basis functions and radial integrals for the (e + ion) system. With the radial integrals from STG1 as input, STG2 computes the angular coefficients and the matrices of the Hamiltonian and dipole operators. The BP recoupling \(LS\to LSJ\) is implemented in the code RECUPD [11]. STGH diagonalizes the BP Hamiltonian and produces the H and D files required to obtain physical parameters such as energy levels and radiative data such as oscillator strengths and photoionization cross sections. The other new codes, or extended versions (bracketed by asterisks), comprise the following.

### Level identification

Energy levels from R-matrix calculations in STGB are obtained as eigenvalues of bound states without spectral designation, as in atomic structure calculations. The new code BPID is employed to assign spectroscopic identifications to all computed fine structure levels. Following diagonalization of the Hamiltonian matrix, the R-matrix basis functions are obtained and used to compute energy levels in STGB. BPID then analyzes the parameters computed in STGB to determine the spectroscopic identification: the channel percentage weights and quantum defects, complemented independently by atomic structure calculations. Level identification is necessary not only for the spectroscopic designations required in practical applications, but also for matching and for the high \(n\ell(SLJ)\) "top-up" of the computed oscillator strengths from STGBB and photoionization cross sections from STGBF, to test the completeness of the atomic data.

### Radiation damping

For highly charged ions, in particular H-like and He-like ions of Fe-group elements, radiative damping of autoionizing resonances is important (e.g. [35]) and may be considered using the extended code STGBF-RD. However, for opacity calculations this is not needed, since the total photon absorption cross section, regardless of subsequent radiative decays, is required.

Figure 1: R-matrix codes for opacity calculations.

### Unified (e + ion) recombination

Level-specific and total (e + ion) recombination cross sections may be computed employing the unified method, subsuming both radiative and di-electronic recombination in an _ab initio_ manner within the R-matrix CC formulation [36], using the code STGRC ([6], and references therein).

### Convergence and Completeness

Even when the CC calculations are converged to practically acceptable accuracy, as discussed in RMOP2, completeness may not have been achieved with respect to all of the bound-bound and bound-free transitions in an ion. R-matrix calculations become computationally intensive with increasing energy, as successive thresholds of the target ion are exceeded and more channels open up. At the same time, the computations need to be done at all energies with a sufficiently fine energy mesh to resolve the autoionization resonance structures. However, above the highest target level all channels are open and there are no more resonances. Although the number of open channels may be large, the cross sections are featureless and slowly varying with energy.
Moreover, for high \(n\ell(SLJ)\) levels resonance structures are weak and may be neglected. In such cases a "top-up" procedure using DW methods may be employed to test whether convergence has been achieved, and to ensure completeness of atomic data for opacities. One such "top-up" procedure is described in paper RMOP4. Generally, we find that the "top-up" contribution to opacities is small and does not exceed \(\sim\)5% for any given ion.

### Plasma effects

The OP and RMOP calculations are carried out for isolated atomic systems. As such, external effects due to the plasma environment at specific temperature, density, abundances, etc. need to be considered in opacity calculations. Those effects determine the EOS as discussed above. In addition, they significantly alter computed atomic features such as the line shapes of bound-bound atomic transitions. In OP work, quasi-bound levels that give rise to resonances in the continuum are treated as bound levels _a priori_, and plasma broadening of autoionizing resonances is neglected owing to (i) the difficulty of including pressure broadening, and (ii) the fact that quantum interference between resonances and the continuum is considered to be small [32]. Therefore, a perturbative approach in the independent resonance approximation, akin to the independent treatment of radiative and di-electronic recombination, is employed. However, as we demonstrate in detail in paper RMOP3, plasma broadening of autoionizing resonances is fundamentally different from line broadening of bound levels. Practically, plasma broadening has a significant, indeed large, effect on bound-free opacity and on derived quantities such as the Rosseland and Planck mean opacities.

### Opacities calculations

The opacity codes employed in RMOP calculations have not heretofore been published, and are different from those in the OP work. In the initial stages of OP, both sets of codes were extensively checked against each other for the opacities reported in [2]. However, most OP data was from sources other than R-matrix calculations, and was processed to compute opacities in a different manner than described herein. Fig. 2 shows the schematic diagram of the codes and datasets in RMOP calculations (codes bracketed by '\(*\)' have not been heretofore presented).

Figure 2: Plasma opacity codes.

#### 3.8.1 Atomic data

The input RMOP data for opacity calculations are the final products from the codes shown in Fig. 1. Each ion is treated as an (e + ion) system characterized by (Z,N), the atomic number Z and the number of electrons in the target ion. The input atomic datasets consist of four files: (i) t-file -- target level energies and statistical weights, (ii) e-file -- energy levels as computed by STGB and further processed using BPID, (iii) f-file -- oscillator strengths for E1 transitions computed in STGBB, (iv) p-file -- photoionization cross sections from STGBF. These files are input to the code INTFACE, which interfaces the atomic data and maps it onto a photon frequency mesh of 100,000 frequencies (in contrast, the OP work used 10,000 frequencies), producing bound-bound (bb) and bound-free (bf) files for opacity calculations separately for each ion. _Prior to input into INTFACE, the p-files are pre-processed by the code PBRO for plasma broadening of autoionizing resonances in photoionization cross sections to produce broadened pb-files_. PBRO computes plasma broadened cross sections (described in RMOP3) for each temperature and density; a schematic of this broadening step is sketched below.
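Schematically, such a pre-processing step amounts to convolving each photoionization cross section with a broadening profile. The minimal sketch below is ours, not the PBRO algorithm (the actual treatment, including the temperature- and density-dependent widths, is described in RMOP3); it uses a plain Lorentzian of constant width as a stand-in:

```python
import numpy as np

def broaden_cross_section(energy, sigma, gamma):
    """Convolve a cross section sigma(E), tabulated on a uniform energy
    mesh, with a normalized Lorentzian of FWHM gamma (same units as
    energy). Sharp autoionizing resonances are smeared out while the
    integrated cross section is approximately preserved."""
    de = energy[1] - energy[0]                 # uniform mesh assumed
    x = energy - energy[energy.size // 2]      # profile centered on the grid
    profile = (gamma / (2.0 * np.pi)) / (x**2 + (gamma / 2.0)**2)
    profile /= profile.sum() * de              # renormalize on the finite grid
    return np.convolve(sigma, profile, mode="same") * de

# Toy example: one sharp resonance on a flat background (illustrative
# numbers only, not an RMOP p-file).
e = np.linspace(1.0, 2.0, 4001)
sigma = 1.0 + 50.0 / (1.0 + ((e - 1.5) / 1e-3)**2)
sigma_broadened = broaden_cross_section(e, sigma, gamma=0.01)
```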
This results in a large number of pb-files, for all TD pairs, from a single unbroadened p-file for each level of each ion of each element in the opacities calculations. INTFACE then processes either the unbroadened p-files or the broadened pb-files and produces corresponding bf-files mapped onto the opacity frequency mesh. Thus, a huge amount of data is produced as a result of the interface of atomic and plasma parameters, most of it too large to be stored and therefore treated as intermediate files that are recreated for each TD.

#### 3.8.2 Equation-of-state

The MHD-EOS parameters are taken from the OPCD codes using the Q-form [7, 8, 3], so that there are no inconsistencies between RMOP and OP owing to the EOS. The input EOS parameters consist of: a -- abundances of elements, x -- ionization fractions of each ion, and w -- EOS data to obtain occupation probabilities. The opacity code OPAC computes monochromatic, Planck and Rosseland mean opacities, \(\kappa_{\nu},\kappa_{P},\kappa_{R}\) respectively, using the INTFACE bb and bf files and the EOS parameters, independently for each T-D, or ranges thereof. Since one of the primary motivations of the RMOP calculations is to solve the aforementioned solar abundances problem, different sets of abundances may be used to ascertain differences among them.

#### 3.8.3 Bound and continuum opacities

The most important difference between RMOP and other opacity calculations is the treatment of bound-bound opacity as distinct from bound-free continuum opacity. There is a clear division between lines, as strictly the transitions among negative energy bound levels, and autoionizing resonances in the bound-free continua. Practically, this difference manifests itself in the code OPAC (Fig. 2). The bb-opacity consists of negative energy bound levels only, with corresponding oscillator strengths, and the bound-free opacity consists of photoionization cross sections with resonances that are otherwise treated as lines in DW opacity calculations. The DW calculations may couple lines _a posteriori_ to a single-channel featureless continuum perturbatively, but not in a fully coupled manner as in RMOP opacities. The combined bb and bf opacity spectra are therefore quite different in detail, which is reflected in the calculation of mean opacities. There are additional steps necessary in order to ensure completeness, as discussed in RMOP4, relating to the division between negative and positive energy levels and to ensuring that there is no double-counting of levels if it is necessary to include high-\(n\ell\) contributions, although these are found to matter little since high-lying levels have insignificant populations. The treatment of the free-free contribution to plasma broadening, discussed in RMOP3, is also implemented in OPAC. Large datasets of \(f\)-values for transitions _among positive energy levels_, obtained from atomic structure codes such as Superstructure or variants, are required to compute this contribution. Although small relative to electron-impact, Stark and Doppler broadening, it nevertheless needs to be included for completeness.

## 4 Acknowledgments

This work has been partially supported by grants from the US National Science Foundation, NASA, and the Department of Energy. Most of the computational work was carried out at the Ohio Supercomputer Center.
2301.09466
Rationally connected threefolds with nef and bad anticanonical divisor, II
Let $X$ be a smooth complex projective rationally connected threefold with $-K_X$ nef and not semi-ample. In our previous work, we classified all such threefolds when $|{-}K_X|$ has no fixed divisor. In this paper, we continue our classification when $|{-}K_X|$ has a non-zero fixed divisor.
Zhixin Xie
2023-01-23T14:54:56Z
http://arxiv.org/abs/2301.09466v1
# **Rationally connected threefolds with nef and bad anticanonical divisor, II**

###### Abstract

Let \(X\) be a smooth complex projective rationally connected threefold with \(-K_{X}\) nef and not semi-ample. In our previous work, we classified all such threefolds when \(|-K_{X}|\) has no fixed divisor. In this paper, we continue our classification when \(|-K_{X}|\) has a non-zero fixed divisor.

2010 _Mathematics subject classification._ 14E30, 14M22.

## 1 Introduction

Complex projective Fano manifolds play an important role in the framework of the Minimal Model Program (MMP) and appear as one of the building blocks in the birational classification of varieties. Fano manifolds are classified up to dimension three - the classification of Fano threefolds was achieved by Mori-Mukai and Iskovskikh, see [14] and [15, 16]. Complex projective manifolds with nef anticanonical divisor are a natural generalisation of Fano manifolds, and one hopes to similarly fulfil a complete classification for this class of manifolds. One of the methods to study such a manifold is the decomposition theorem for projective manifolds with nef anticanonical bundle by Cao and Höring [17]: for such a manifold \(X\), its universal cover \(\widetilde{X}\) decomposes as a product \[\widetilde{X}\simeq\mathbb{C}^{q}\times\prod Y_{j}\times\prod S_{k}\times Z,\] where the \(Y_{j}\) are irreducible projective Calabi-Yau manifolds, the \(S_{k}\) are irreducible projective hyperkähler manifolds (so that the \(Y_{j}\) and \(S_{k}\) have trivial canonical bundle), and \(Z\) is a projective rationally connected manifold with \(-K_{Z}\) nef (and non-trivial as \(Z\) is rationally connected). In view of this result, it is important to study the case when \(X\) is rationally connected, and it is also the most difficult one. Recent results by Birkar, Di Cerbo and Svaldi [1, Theorem 1.6] showed that, birationally, there are only finitely many deformation families of projective rationally connected threefolds with nef anticanonical divisor. Thus it is in principle possible to classify these varieties. This paper addresses the classification problem of smooth projective rationally connected threefolds \(X\) with \(-K_{X}\) nef and not semi-ample. In our earlier work [18], the case when \(|-K_{X}|\) has no fixed divisor is completely classified. In this paper, we consider the remaining case when \(|-K_{X}|\) has a non-zero fixed divisor. It is shown in op. cit. that a general member of the mobile part of \(|-K_{X}|\) has at least two irreducible components. Our main result gives a complete classification when an irreducible component is a non-rational surface, and shows that in this case a general member of the mobile part of \(|-K_{X}|\) has exactly two irreducible components.

### Previous work

Let \(X\) be a smooth projective rationally connected threefold with \(-K_{X}\) nef. If \(-K_{X}\) is semi-ample, we refer to [1, Sections 5, 6] for a partial classification. Another approach to obtain a classification in this case stems from its similarity with the case of weak Fano threefolds, i.e. threefolds with nef and big anticanonical bundle. One may analyse the plurianticanonical morphism \[\phi_{|-mK_{X}|}\colon X\to Y\] for \(m\) sufficiently large, as done in the weak Fano case, which led to boundedness of weak Fano threefolds, see [1, 1, 10]. Together with a discussion of the Mori contractions, one may obtain a classification by following the strategy in [1, 1], where the authors gave a classification of weak Fano threefolds with Picard number two.
We thus focus on the case where \(-K_{X}\) is not semi-ample. In [1], Bauer and Peternell gave the following criterion for verifying non-semi-ampleness. **Theorem 1.1**.: ([1, Theorem 2.1]) _Let \(X\) be a smooth projective rationally connected threefold with \(-K_{X}\) nef. Then the Iitaka dimension \(\kappa(X,-K_{X})\) is at least \(1\)._ _If the nef dimension \(n(X,-K_{X})\) is \(1\) or \(2\), then \(-K_{X}\) is semi-ample and the nef reduction map associated to \(-K_{X}\) can be taken as the Stein factorisation of the map defined by some positive multiple of \(-K_{X}\) which is globally generated._ Footnote 1: See [1, Theorem 2.1] for the definition of the nef dimension and the construction of the nef reduction map associated to a nef divisor. By a result of Kawamata [1, Theorem 6.1], if \(\kappa(X,-K_{X})=\nu(X,-K_{X})\), where \(\nu\) denotes the numerical dimension, then \(-K_{X}\) is semi-ample. Thus, in practice, the above theorem, together with [1, Theorem 6.1], implies that the non-semi-ampleness of \(-K_{X}\) is equivalent to \(n(X,-K_{X})=3\) and \(\nu(X,-K_{X})=2\), which is also equivalent to \(\nu(X,-K_{X})=2\) and \(\kappa(X,-K_{X})=1\). We start the investigation with the base locus of the anticanonical system \(|-K_{X}|\), which is non-empty since \(-K_{X}\) is not semi-ample. When \(|-K_{X}|\) has no fixed divisor, a complete classification is obtained in [11, Theorem 1.1]. Now if \(|-K_{X}|\) has a non-zero fixed divisor, it turns out that, after a finite sequence of flops, one can assume that the mobile part of \(|-K_{X}|\) is base-point-free. **Theorem 1.2**.: ([11, Theorem 1.2, Corollary 2.8, Lemma 4.2]) _Let \(X\) be a smooth projective rationally connected threefold with \(-K_{X}\) nef and not semi-ample. Assume that \(\operatorname{Fix}|-K_{X}|\neq 0\). Then there exists a finite sequence of flops \(\psi\colon X\dashrightarrow X^{\prime}\) such that the following holds:_ * \(X^{\prime}\) _is smooth,_ * \(-K_{X^{\prime}}\) _is nef,_ * \(\operatorname{Mob}|-K_{X^{\prime}}|\) _is base-point-free and induces a fibration_ \(f\colon X^{\prime}\to\mathbb{P}^{1}\)_._ _Moreover, we have_ \[|-K_{X^{\prime}}|=A+|kF|\text{ with }k\geq 2,\] _where \(F\) is a general fibre of \(f\). Furthermore, we have_ \[A^{3}=A^{2}\cdot F=0,\] _and \(F\) is a smooth surface with \(-K_{F}\) effective, nef and not semi-ample._ Now, in order to study the geometry of the fibration \(X^{\prime}\to\mathbb{P}^{1}\) in Theorem 1.2, we consider the following setup, where we denote \(X^{\prime}\) by \(X\) for simplicity of notation in the rest of our discussion. **Setup 1.3**.: _Let \(X\) be a smooth projective rationally connected threefold with anticanonical bundle \(-K_{X}\) nef and not semi-ample. Assume that \(A\coloneqq\operatorname{Fix}|-K_{X}|\neq 0\) and \(|B|\coloneqq\operatorname{Mob}|-K_{X}|\) is base-point-free, inducing a fibration \(f\colon X\to\mathbb{P}^{1}\)._ _If \(F\) is a fibre of \(f\), then \(F\) is a smooth surface with \(-K_{F}\) effective, nef and not semi-ample. We have_ \[|-K_{X}|=A+|kF|\text{ with }k\geq 2\] _and_ \[A^{3}=A^{2}\cdot F=0.\] _Now we write \(A=A_{h}+A_{v}\), where \(A_{h}\) and \(A_{v}\) are effective divisors such that \(A_{h}|_{F}=-K_{F}\) and \(A_{v}|_{F}=0\) for a general fibre \(F\)._ ### Main results and organisation of the paper In this paper, we give a complete classification when the general fibre \(F\) in Setup 1.3 is a non-rational surface. **Theorem 1.4**.: (A) _In Setup 1.3, assume that the surface \(F\) is non-rational._
_Then \(X=\mathbb{P}(\mathcal{V})\) is a \(\mathbb{P}^{1}\)-bundle over \(Y\), where \(Y\) is isomorphic to \(\mathbb{P}^{2}\) blown up in \(9\) points such that \(-K_{Y}\) is nef and base-point-free (thus induces an elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\) with general fibre denoted by \(R\)) and \(\mathcal{V}\) is a rank-\(2\) vector bundle defined by a non-split extension_ \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0,\] _and the fibration \(f\) factors as \(X\to Y\stackrel{{\pi}}{{\to}}\mathbb{P}^{1}\)._ _Furthermore, \(|-K_{X}|=2D+|2F|\) with \(D=\mathbb{P}\big{(}\mathcal{O}_{Y}(K_{Y})\big{)}\simeq Y\), and \(F=\mathbb{P}\big{(}\mathcal{E}\big{)}\) is a \(\mathbb{P}^{1}\)-bundle over the smooth elliptic curve \(R\), where \(\mathcal{E}\) is a rank-\(2\) vector bundle over \(R\) defined by a non-split extension_ \[0\to\mathcal{O}_{R}\to\mathcal{E}\to\mathcal{O}_{R}\to 0.\] (B) _Conversely, let \(Y\) be \(\mathbb{P}^{2}\) blown up at \(9\) points such that \(-K_{Y}\) is nef, base-point-free and thus induces an elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\) with general fibre denoted by \(R\). Let \(\mathcal{V}\) be a rank-\(2\) vector bundle over \(Y\) defined by a non-split extension_ \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0 \tag{1}\] _and let \(\varphi\colon X\coloneqq\mathbb{P}(\mathcal{V})\to Y\). Then \(-K_{X}\) is nef and not semi-ample and \(|-K_{X}|=2D+|2F|\), where \(D\coloneqq\mathbb{P}(\mathcal{O}_{Y}(K_{Y}))\) and \(F\) is a general fibre of the fibration \(f\coloneqq\pi\circ\varphi\colon X\to\mathbb{P}^{1}\). Moreover, \(F=\mathbb{P}(\mathcal{E})\) is a \(\mathbb{P}^{1}\)-bundle over the smooth elliptic curve \(R\), where \(\mathcal{E}\) is a rank-\(2\) vector bundle over \(R\) and defined by a non-split extension_ \[0\to\mathcal{O}_{R}\to\mathcal{E}\to\mathcal{O}_{R}\to 0.\] Since we applied Theorem 1.2 in order to reduce to Setup 1.3, the classification obtained in Theorem 1.4 is up to flops. Thus we also want to track, a posteriori, the sequence of flops. To this end, we describe all extremal rays of the cone \(\overline{\operatorname{NE}}(X)\cap K_{X}^{\perp}\) for the threefolds \(X\) in Theorem 1.4. We obtain more precisely the following result. **Proposition 1.5**.: _In Theorem 1.4, we have a morphism \(\operatorname{NE}(D)\to\operatorname{NE}(X)\) induced by the inclusion \(D\hookrightarrow X\). The cone \(\operatorname{NE}(X)\) is closed and the extremal rays of the subcone \(\operatorname{NE}(X)\cap K_{X}^{\perp}\) are spanned by the classes of \((-1)\)-curves on \(D\) and the classes of \((-2)\)-curves on \(D\) (or the class \(-K_{D}\) if there is no \((-2)\)-curve on \(D\), i.e. if the elliptic fibration \(f|_{D}\colon D\to\mathbb{P}^{1}\) has no reducible fibre)._ _Moreover, there are infinitely many flopping contractions on \(X\) and each flopping contraction contracts an extremal ray spanned by a class of a \((-1)\)-curve on \(D\)._ **Plan.** We briefly explain the organisation of the paper. In Section 2, we discuss some general results about the geometry of threefolds \(X\) as in Setup 1.3. Section 3 is devoted to the proof of our main Theorem 1.4. Under Setup 1.3 and assuming that a general fibre \(F\) of \(f\colon X\to\mathbb{P}^{1}\) is a non-rational surface, we start by investigating the structure of the fixed divisor \(A\) of the anticanonical system \(|-K_{X}|\) in Subsection 3.1.
Arguing by contradiction, we will show that the fixed divisor \(A\) has no \(f\)-vertical part, and we describe more precisely the \(f\)-horizontal part of \(A\) in Proposition 3.3. This leads to further restrictions on the geometry of \(X\) when we run the MMP in Subsection 3.2 to obtain the classification in Theorem 1.4. In Section 4, we study the cone of effective curves of \(X\). To this end, we describe all the \(K_{X}\)-trivial curves by first showing that every \(K_{X}\)-trivial curve class is proportional to a curve (class) contained in the fixed divisor of \(|-K_{X}|\). Since the fixed divisor (with reduced structure) is isomorphic to the blow-up of \(\mathbb{P}^{2}\) at the \(9\) base points of a cubic pencil, its cone of effective curves is classically known. In this way, we describe all extremal rays of the cone \(\overline{\operatorname{NE}}(X)\cap K_{X}^{\perp}\) and the corresponding extremal contractions in Lemma 4.4, which directly implies Proposition 1.5. **Acknowledgements.** Some results in this paper are based on my PhD thesis. I would like to express my sincere gratitude to my supervisor, Andreas Höring, for his patient guidance, his constant encouragements and his careful proof-reading. I heartily thank Cinzia Casagrande for interesting discussions and inspiring suggestions. I thank Vladimir Lazić and Nikolaos Tsakanikas for useful comments on the paper. ## 2 Preliminaries In this paper we work over the field \(\mathbb{C}\). ### Notation and terminology We use the following notation throughout the paper, see [10, Definition 2.1.3, Remark 2.3.17] for definitions. **Notation 2.1**.: _Let \(X\) be a normal projective variety and \(D\) a Cartier divisor on \(X\). We denote by_ * \(\kappa(D,X)\) _the Iitaka dimension of_ \(D\)_._ * \(\nu(D,X)\coloneqq\max\{n\mid D^{n}\not\equiv 0\}\) _the numerical dimension of_ \(D\) _when_ \(D\) _is nef._ _Consider the complete linear system \(|D|\). We denote by_ * \(\operatorname{Fix}|D|\) _the fixed divisor of_ \(|D|\)_._ * \(\operatorname{Mob}|D|=|D|-\operatorname{Fix}|D|\) _the mobile part of_ \(|D|\)_._ Note that the numerical dimension can also be defined for a pseudo-effective divisor, see for example [13, Chapter V, Definition 2.5]. **Definition 2.2**.: _Let \(X\) be a normal projective variety. A flopping contraction is an extremal birational contraction \(f\colon X\to Y\) to a normal variety \(Y\) such that the exceptional locus of \(f\) has codimension at least two in \(X\) and \(K_{X}\) is numerically \(f\)-trivial._ _If additionally \(D\) is a \(\mathbb{Q}\)-Cartier divisor on \(X\) such that \(-(K_{X}+D)\) is \(f\)-ample, then the \((K_{X}+D)\)-flip of \(f\) is called the \(D\)-flop._ ### Results on surfaces We will need the following results on surfaces. **Lemma 2.3**.: ([13, Lemma 4.4, Corollary 4.6]) _Let \(S\) be a projective Gorenstein surface such that the anticanonical divisor \(-K_{S}\) is of the form:_ \[-K_{S}\sim D_{1}+D_{2},\] _where \(D_{1}\) is effective and \(D_{2}\) is a non-zero effective Cartier divisor which is nef and divisible by \(r\geq 2\) in \(\operatorname{NS}(S)\)._ _Suppose that \(D_{2}^{2}=0\), and that one of the following assertions holds:_ 1. \(S\) _is not covered by_ \(D_{2}\)_-trivial curves;_ 2. \(D_{2}\) _contains a smooth curve of positive genus._ _Then \(D_{1}=0\) and the surface \(S\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve._ **Lemma 2.4**.: _Let \(S\) be a ruled surface over a smooth elliptic curve \(B\)._
Suppose that \(S\) admits an elliptic fibration \(\tau\colon S\to\mathbb{P}^{1}\). If \(h^{0}\big{(}S,\mathcal{O}_{S}(-K_{S})\big{)}\geq 3\), then \(S\simeq B\times\mathbb{P}^{1}\)._ Proof.: Since \(S\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve \(B\) and \(S\) admits an elliptic fibration, we have \(S=\mathbb{P}(\mathcal{V})\) by [12, Theorem 5], where \(\mathcal{V}\) is one of the following: 1. \(\mathcal{V}\) is the unique indecomposable rank-\(2\) vector bundle of degree \(1\) on \(B\); 2. \(\mathcal{V}=\mathcal{O}_{B}\oplus\mathcal{L}\), where \(\mathcal{L}\) is a (possibly trivial) torsion line bundle. In the first case, let \(\ell\) be a fibre of the ruling and let \(\theta_{i}\) be a section with minimal self-intersection, i.e. \(\theta_{i}^{2}=1\). Then \(-K_{S}\sim 2\theta_{i}-\ell\). By [12, Theorem 5(iii)], the elliptic fibration is given by the linear system \(|4\theta_{i}-2\ell|\). Hence, \[h^{0}\big{(}S,\mathcal{O}_{S}(-K_{S})\big{)}=1.\] In the second case, we have \[h^{0}\big{(}S,\mathcal{O}_{S}(-K_{S})\big{)}=h^{0}(B,S^{2}\mathcal{V}\otimes \mathcal{L}^{*})=h^{0}(B,\mathcal{L}\oplus\mathcal{L}^{*}\oplus\mathcal{O}_{ B})\leq 3,\] with equality if and only if \(\mathcal{L}=\mathcal{O}_{B}\). Since \(h^{0}\big{(}S,\mathcal{O}_{S}(-K_{S})\big{)}\geq 3\) by assumption, we obtain \(\mathcal{V}=\mathcal{O}_{B}\oplus\mathcal{O}_{B}\). Thus \(S\simeq B\times\mathbb{P}^{1}\) ### Results on threefolds We will first prove the following two lemmas which give geometric restriction on the threefold with nef anticanonical divisor, when we consider certain types of extremal contractions on the threefolds. **Lemma 2.5**.: _Let \(X\) be a smooth projective threefold with \(-K_{X}\) nef. Let \(A\coloneqq\operatorname{Fix}|-K_{X}|\). Consider a \(K_{X}\)-negative extremal contraction \(\varphi\colon X\to Y\). Assume that \(\varphi\) is a divisorial contraction which contracts an irreducible component of \(A\) to a smooth curve or a smooth point. Then \(\kappa(Y,-K_{Y})=\kappa(X,-K_{X})\) and \(\nu(Y,-K_{Y})=\nu(X,-K_{X})\)._ Proof.: Denote by \(E\) the exceptional divisor of \(\varphi\). Then \[\varphi^{*}(-K_{Y})=-K_{X}+mE\] with \(m=1\) or \(2\). Since \(\mathcal{O}_{X}(E)\hookrightarrow\mathcal{O}_{X}(A)\hookrightarrow\mathcal{O} _{X}(-K_{X})\), one has \[\kappa(X,-K_{X})\leq\kappa(X,-K_{X}+mE)\leq\kappa(X,-(m+1)K_{X})=\kappa(X,-K_{ X}).\] By [14, Proposition V.2.7(1)], one has \[\nu(X,-K_{X})\leq\nu(X,-K_{X}+mE)\leq\nu(X,-(m+1)K_{X})=\nu(X,-K_{X}).\] Hence, \(\kappa(Y,-K_{Y})=\kappa(X,-K_{X})\) and \(\nu(Y,-K_{Y})=\nu(X,-K_{X})\). **Lemma 2.6**.: _Let \(X\) be a smooth projective threefold with \(|-K_{X}|=A+|kF|\), where \(A\coloneqq\operatorname{Fix}|-K_{X}|\neq 0\), \(F\) is a prime divisor, and \(k\geq 2\) is an integer. Suppose that there exists an \(\epsilon A\)-flop, where \(\epsilon>0\) satisfies that the pair \((X,\epsilon A)\) is log-canonical. Then \(A\) has multiplicity at least \(k\) along the flopping curve._ Proof.: By assumption, there exists an \(\epsilon A\)-flop: \[\psi\colon X\dasharrow X^{+},\] where \(X^{+}\) is again smooth by [13, Theorem 2.4]. Since \(\psi\) induces an isomorphism in codimension one, the anticanonical system \(|-K_{X^{+}}|\) has a non-empty fixed divisor \(A^{+}\coloneqq\psi_{*}(A)\) and we can write \[|-K_{X^{+}}|=A^{+}+|kF^{+}|,\] where \(F^{+}\coloneqq\psi_{*}(F)\) and \(|kF^{+}|\) is the mobile part of the anticanonical system. 
Since \(\psi\) is a flop, there exists a common resolution \(g\colon\widehat{X}\to X\), \(h\colon\widehat{X}\to X^{+}\) such that \(g^{*}(K_{X})=h^{*}(K_{X^{+}})\). Moreover, by [10, Proposition 5-1-11], one has \[K_{\widehat{X}}=g^{*}(K_{X}+\epsilon A)+\sum_{i}a_{i}E_{i}=h^{*}(K_{X^{+}}+\epsilon A^{+})+\sum_{i}a_{i}^{+}E_{i}, \tag{2}\] where \(a_{i}^{+}\geq a_{i}\), and \(a_{i}^{+}>a_{i}\) if and only if \(g(E_{i})\) is contained in the flopping locus. Since \(|-K_{X}|=A+|kF|\) and \(|-K_{X^{+}}|=A^{+}+|kF^{+}|\), the equality (2) gives \[h^{*}(kF^{+})-g^{*}(kF)=(1-\epsilon)\big{(}g^{*}(A)-h^{*}(A^{+})\big{)}+\sum_{i}(a_{i}^{+}-a_{i})E_{i}=\frac{1}{\epsilon}\sum_{i}(a_{i}^{+}-a_{i})E_{i}\] is effective. Since \(F\) and \(F^{+}\) are Cartier divisors, we can write \[h^{*}(kF^{+})-g^{*}(kF)=\sum_{i}kn_{i}E_{i}\] with \(n_{i}\in\mathbb{N}\). Since \(g^{*}(A)=h^{*}(A^{+})+h^{*}(kF^{+})-g^{*}(kF)\), we obtain that \[g^{*}(A)-\tilde{A}-\sum_{i}kn_{i}E_{i}\] is effective, where \(\tilde{A}\coloneqq g_{*}^{-1}(A)\) is the strict transform of \(A\) in \(\widehat{X}\). Therefore, \(A\) has multiplicity at least \(k\) along the flopping curve. More generally, we will need the following result by Wilson on the classification of crepant contractions of an extremal ray on a threefold. **Proposition 2.7**.: ([14, Theorem 2.2],[14]; [14, Proposition 3.1]) _Let \(X\) be a smooth projective threefold and let \(\phi\colon X\to Y\) be a crepant contraction of an extremal ray, contracting some irreducible surface \(E\subset X\) to a curve \(C\subset Y\). Then \(C\) is a smooth curve and \(\phi\colon E\to C\) is a conic bundle over \(C\) such that one of the following holds:_ 1. \(E\) _is normal and a general fibre of_ \(\phi\colon E\to C\) _is a smooth conic;_ 2. \(E\) _is non-normal and a general fibre of_ \(\phi\colon E\to C\) _is two lines meeting at one point._ _For a general fibre \(l\) of \(\phi\colon E\to C\), one has \(E\cdot l=-2\). A singular fibre of \(\phi\colon E\to C\) is either two \(\mathbb{P}^{1}\)'s intersecting at one point, or a double line._ _Furthermore, if \(E\) is normal, then the possible singularities of \(E\) are \(A_{n}\) singularities at the points where distinct components of a singular fibre meet, or \(A_{1}\) singularities appearing as a pair on some double fibre._ ### General setup and basic results Let us first point out an important special case under our Setup 1.3. **Lemma 2.8**.: _In Setup 1.3, suppose that the relative anticanonical divisor \(-K_{X/\mathbb{P}^{1}}\) is nef. Then_ \[X\simeq F\times\mathbb{P}^{1}.\] Proof.: Since \(-K_{X/\mathbb{P}^{1}}\) is nef, the fibration \(f\) is locally trivial in the Euclidean topology by [13, Theorem A.12] and [17, Proposition 2.8]. Applying the latter proposition, we further obtain \(X\simeq F\times\mathbb{P}^{1}\); we explain how to apply the proposition in our case, as follows. Note that in [17, Proposition 2.8] the assumption is different. Instead of assuming (H1): _the relative anticanonical divisor is nef_, they assume (H2): _there exists an \(f\)-very ample line bundle \(L\) such that \(f_{*}(mL)\) is a numerically flat vector bundle for every integer \(m\geq 1\)._ However, when the fibration is over a smooth curve, [14, Proposition A.11] shows that (H1) implies (H2). Since in our case, the fibration is over \(\mathbb{P}^{1}\) which is simply connected, [10, Proposition 2.8] gives \(X\simeq F\times\mathbb{P}^{1}\). **Lemma 2.9**.: _In Setup 1.3, assume that \(A_{h}\) is \(f\)-relatively nef._
_Then for any \(f\)-vertical curve \(\ell\) contained in \(A_{h}\), one has \(K_{X}\cdot\ell=A_{h}\cdot\ell=0\)._ Proof.: Suppose by contradiction that there exists an \(f\)-vertical curve \(\ell\subset A_{h}\) such that \(A_{h}\cdot\ell>0\). Let \(F_{0}\) be the special fibre of \(f\) which contains \(\ell\). Since for a general fibre \(F\) of \(f\), \[A_{h}\cdot(F|_{A_{h}})=(A_{h}|_{F})^{2}=0,\] we have that \(A_{h}\cdot(F_{0}|_{A_{h}})=0\). Hence we can write \[F_{0}\cap A_{h}=m\ell+\ell^{\prime}\] with \(m>0\) and \(A_{h}\cdot\ell^{\prime}<0\). This contradicts the fact that \(A_{h}\) is \(f\)-relatively nef. Now in Setup 1.3, consider a \(K_{X}\)-negative extremal contraction \(\varphi\colon X\to Y\). Let \(\Gamma\) be an extremal ray contracted by \(\varphi\). Recall that the length of an extremal ray \(\Gamma\) is defined by \[l(\Gamma)=\min\{-K_{X}\cdot Z\mid[Z]\in\Gamma\}.\] Let \(\ell\) be a rational curve such that \([\ell]\in\Gamma\) and \(-K_{X}\cdot\ell=l(\Gamma).\) If \(\varphi\) is birational, we denote the exceptional divisor of \(\varphi\) by \(E\). In the remainder of this section, we will describe all the possible contractions \(\varphi\). #### 2.4.1 Birational extremal contractions We first describe all the divisorial contractions of a \(K_{X}\)-negative extremal ray on the threefold \(X\) in Setup 1.3. **Proposition 2.10**.: _In Setup 1.3, assume that \(A_{v}=0\). Let \(\varphi\colon X\to Y\) be a divisorial \(K_{X}\)-negative extremal contraction and let \(E\) be the exceptional divisor. Then \(\varphi\) is the blow-up of a smooth curve \(C\) in \(Y\) and \(E\) is a ruled surface. Denote by \(C_{0}\) a canonical section of the tautological line bundle \(\mathcal{O}_{\mathbb{P}(V)}(1)\), where \(V\) is the normalisation of \(N_{C/Y}^{*}\). Then one of the following cases occurs._ * _The divisor_ \(E\) _is an irreducible component of_ \(A\) _and_ \(\varphi\) _contracts_ \(E\) _horizontally (i.e. every fibre of_ \(\varphi\) _is an_ \(f\)_-horizontal curve) to a smooth curve._ * _The fibration_ \(f\) _factors as_ \(f=f^{\prime}\circ\varphi\)_, which gives a fibration_ \(f^{\prime}\colon Y\to\mathbb{P}^{1}\)_. Let_ \(A_{Y}\coloneqq\varphi(A)\) _and_ \(F_{Y}\coloneqq\varphi(F)\)_. We are in one of the following cases:_ * \(f^{\prime}\) _maps the blow-up curve_ \(C\) _onto_ \(\mathbb{P}^{1}\)_, and_ \(\varphi|_{F}\colon F\to F_{Y}\) _is the blow-down of some_ \((-1)\)_-curves in_ \(F\)_. In this case,_ \(A_{Y}\simeq A\)_,_ \(-K_{F_{Y}}\) _is nef and big, and_ \(-K_{Y}\) _is nef and big._ * \(f^{\prime}\) _maps the blow-up curve_ \(C\) _to a point. In this case,_ \(A|_{E}\equiv C_{0}\)_,_ \(A_{Y}\simeq A\) _and_ \(F_{Y}\simeq F\)_._ Proof.: **Case \(A\cdot\ell=0\).** In this case, we have \(F\cdot\ell=1\) and \(-K_{X}\cdot\ell=A\cdot\ell+kF\cdot\ell=k=2\). Hence \(\varphi\) is the blow-up of \(Y\) at a smooth point, with exceptional divisor \(E\simeq\mathbb{P}^{2}\). As \(E\) is not fibred, it is contained in a fibre of \(f\) and thus \(F\cdot E=0\). This contradicts the fact that \(F\cdot\ell=1\). **Case \(A\cdot\ell<0\).** In this case, we have \(F\cdot\ell>0\) and \(E\) is an irreducible component of \(A\) as the contraction is divisorial. Moreover, as a curve in \(A\cdot F\) is \(K_{X}\)-trivial, we obtain that \(\varphi\) contracts \(E\) horizontally to a curve. **Case \(A\cdot\ell>0\).** In this case, \(F\cdot\ell=0\), since otherwise \(-K_{X}\cdot\ell>2\), which contradicts the classification of Mori, see [13, Section 3]; the estimate is spelled out below.
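For the reader's convenience, we spell out this estimate (our paraphrase of the argument, in the notation above): since \(\varphi\) is birational, Mori's classification gives \(l(\Gamma)\leq 2\), so if we had \(F\cdot\ell\geq 1\), then \[-K_{X}\cdot\ell=A\cdot\ell+kF\cdot\ell\geq 1+2=3>2\geq l(\Gamma)=-K_{X}\cdot\ell,\] a contradiction; hence \(F\cdot\ell=0\).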
_Step 1._ We show in this step that \(E\) is not contracted to a point. Suppose by contradiction that \(E\) is contracted to a point. Since \(F\cdot\ell=0\), we have \(F\cdot E=0\). Thus \(E\) is contained in some fibre of \(f\). Note that \(E\) is not an irreducible component of \(A\), otherwise \(E\) is contracted to a curve. Hence \(A\cdot E\) is a non-zero effective \(1\)-cycle contained in some fibre of \(f\). By Lemma 2.9, \(A\cdot(A\cdot E)=0\) and thus \(A\cdot\ell=0\). This contradicts the fact that \(A\cdot\ell>0\). _Step 2._ By _Step 1_, we deduce that \(\varphi\) contracts \(E\) to a curve \(C\). Then \(F\cdot\ell=0\), \(A\cdot\ell=1\) and \(E=\mathbb{P}(N_{C/Y}^{*})\) is a ruled surface. Let \(V=N_{C/Y}^{*}\otimes\mathcal{L}\) with \(\mathcal{L}\in\operatorname{Pic}(C)\) be a normalisation of \(N_{C/Y}^{*}\), see [10, Chapter V, Proposition 2.8]. Let \(\mu\coloneqq\deg\mathcal{L}\) and let \(C_{0}\) be a canonical section of the tautological line bundle \(\mathcal{O}_{\mathbb{P}(V)}(1)\) such that \(C_{0}^{2}=-e=c_{1}(V)\). Then \[-K_{X}|_{E}\equiv C_{0}+m\ell\] with \(m\in\mathbb{N}\), \[N_{E/X}\equiv-C_{0}+\mu\ell,\] \[K_{Y}\cdot C=-m-\mu,\] and \(-K_{Y}\) is nef if and only if \(m+\mu\geq 0\). Let \(g\) be the genus of the curve \(C\). Then by the adjunction formula, \[K_{E}^{2}=\big{(}(K_{X}+E)|_{E}\big{)}^{2},\] and thus \(8(1-g)=4m-4e-4\mu\), i.e. \(\mu=m-e+2g-2\). Since \((-K_{X})^{3}=0\), we have \[(-K_{Y})^{3}=3(-K_{X}|_{E})^{2}+3K_{Y}\cdot C-2E^{3}=2g-2+2(2m-e).\] As \(-K_{X}|_{E}\) is nef, we have \((-K_{X}|_{E})\cdot C_{0}\geq 0\), i.e. \(m\geq e\). We have two different subcases: either \(f|_{E}\colon E\to\mathbb{P}^{1}\) is surjective, or \(f(E)\) is a point on \(\mathbb{P}^{1}\). _Step 3._ In this step, we consider the case when \(f|_{E}\colon E\to\mathbb{P}^{1}\) is surjective. Then \(E\cdot F\) is a non-zero effective \(1\)-cycle. Note that \(E\) is not an irreducible component of \(A\), as \(\ell\) is \(f\)-vertical and \(A\cdot\ell>0\) by assumption. Since \(F\cdot\ell=0\), one has \(E\cdot F=b\ell\) with \(b\in\mathbb{N}^{*}\), and a fibre of \(\varphi\) must be contained in some fibre of \(f\). In particular, by the rigidity lemma [12, Lemma 1.15], the fibration \(f\) factors as \(f=f^{\prime}\circ\varphi\) such that \(f^{\prime}\colon Y\to\mathbb{P}^{1}\), and the morphism \(\varphi\) contracts \(E\) to a smooth curve \(C\subset Y\) which surjects to \(\mathbb{P}^{1}\) by \(f^{\prime}\). Restricted to a general fibre \(F\), we have that \(\varphi|_{F}\) blows down some \((-1)\)-curves in \(F\) and each of them meets the unique member in \(|-K_{F}|\) transversally at one point. _Claim._\(-K_{Y}\) is nef and big. Since \(-K_{X}|_{E}\sim A|_{E}+kF|_{E}\equiv A|_{E}+kb\ell\) and \(-K_{X}|_{E}\equiv C_{0}+m\ell\), we have \[A|_{E}\equiv C_{0}+(m-kb)\ell.\] Since \(A\cdot E\) is an effective \(1\)-cycle and \(-K_{X}|_{E}\) is nef, we have \((-K_{X}|_{E})\cdot(A|_{E})\geq 0\). Hence, \[(C_{0}+m\ell)\cdot\big{(}C_{0}+(m-kb)\ell\big{)}\geq 0,\] i.e. \(2m-e\geq kb\). We obtain \[m+\mu=2m-e+2g-2\geq kb-2\geq 0,\] and thus \(-K_{Y}\) is nef. Moreover, \[(-K_{Y})^{3}=2g-2+2(2m-e)\geq-2+2kb\geq 2.\] Therefore, \(-K_{Y}\) is nef and big. This proves the claim. _Step 4._ In this step, we consider the case when \(f(E)\) is a point. Then \(E\) is contained in a special fiber \(F_{0}\) of \(f\) and \(E\cdot F=0\). Hence \(\varphi\) is an isomorphism outside \(F_{0}\). 
Moreover, the fibration \(f\colon X\to\mathbb{P}^{1}\) factors as \(f=f^{\prime}\circ\varphi\) such that \(f^{\prime}\colon Y\to\mathbb{P}^{1}\). We have \[A|_{E}\sim-K_{X}|_{E}\equiv C_{0}+m\ell.\] Since \(E\) is contained in some fibre of \(f\), \(A\cdot E\) is an effective \(f\)-vertical \(1\)-cycle. Hence \(A\cdot(A\cdot E)=0\) by Lemma 2.9. It remains to prove \(m=0\). Suppose that \(m>0\). Since \[0=A\cdot(C_{0}+m\ell)=A\cdot C_{0}+m,\] one has \(A\cdot C_{0}<0\). Hence the curve \(C_{0}\) is contained in \(A\). But \(C_{0}\) is \(f\)-vertical, so this contradicts Lemma 2.9 and proves \(m=0\). #### 2.4.2 Non-birational contractions Finally, we describe all the contractions of fibre type of a \(K_{X}\)-negative extremal ray on the threefold \(X\) in Setup 1.3. **Proposition 2.11**.: _In Setup 1.3, let \(\varphi\colon X\to Y\) be a non-birational \(K_{X}\)-negative extremal contraction. Then \(Y\) is a smooth rational surface, and one of the following cases occurs:_ 1. \(Y\simeq F\) _and_ \(X\simeq F\times\mathbb{P}^{1}\)_;_ 2. \(\varphi\colon X\to Y\) _is a conic bundle and_ \(f\) _factors as_ \(f\colon X\xrightarrow{\varphi}Y\to\mathbb{P}^{1}\)_._ Proof.: **Case \(\dim Y=2\).** In this case, we have \(-K_{X}\cdot\ell\in\{1,2\}\), the morphism \(\varphi\colon X\to Y\) is a conic bundle, and \(Y\) is a smooth rational surface. Since \([\ell]\) is a movable class, we have \(A\cdot\ell\geq 0\). We first consider when \(F\cdot\ell>0\). Then \(F\cdot\ell=1\) and \(-K_{X}\cdot\ell=A\cdot\ell+kF\cdot\ell\geq 2\). We deduce that \(k=2\), the morphism \(\varphi\) is a \(\mathbb{P}^{1}\)-bundle and \(\varphi|_{F}\) is birational. Consider the product map \(p\coloneqq f\times\varphi\colon X\to\mathbb{P}^{1}\times Y\), which is generically one to one. _Claim._ \(p\) is an isomorphism. If there exists a curve \(R\subset X\) contracted by \(p\), then \(R\) is also contracted by \(\varphi\). Hence \(F\cdot R>0\) and \(R\) is a fibre \(\varphi^{-1}(y_{0})\) of \(\varphi\), where \(y_{0}\in Y\). By the rigidity lemma [1, Lemma 1.15], there exists a neighbourhood \(Y_{0}\subset Y\) of \(y_{0}\) and a factorisation \(f|_{\varphi^{-1}(Y_{0})}\colon\varphi^{-1}(Y_{0})\xrightarrow{\varphi}Y_{0}\to\mathbb{P}^{1}\), which implies that \(f(R)\) is a point. This contradicts \(F\cdot R>0\) and proves the claim. Therefore, \(X\simeq\mathbb{P}^{1}\times Y\simeq\mathbb{P}^{1}\times F\) and \(\varphi\) is the second projection. Now, we consider when \(F\cdot\ell=0\). This implies \(F=\varphi^{*}(B)\) for some irreducible curve \(B\) on \(Y\), which gives a factorisation \(f\colon X\xrightarrow{\varphi}Y\to\mathbb{P}^{1}\), where \(Y\) is a smooth rational surface. **Case \(\dim\,Y=1\).** In this case, we have \(Y\simeq\mathbb{P}^{1}\) and \(\rho(X)=2\). This contradicts the fact that \(\rho(X)\geq 3\) by [1, Corollary 7.3]. ## 3 Classification result The aim of this section is to prove Theorem 1.4. In the whole section, we consider the following setup. **Setup 3.1**.: _Under Setup 1.3, assume that the general fibre \(F\) is non-rational._ **Remark 3.2**.: When \(F\) is non-rational, \(X\) cannot be a product (i.e. \(X\not\simeq F\times\mathbb{P}^{1}\)), since otherwise \(X\) is not rationally connected. By [1, Proposition 1.6], we have \(F=\mathbb{P}(\mathcal{E})\), where \(\mathcal{E}\) is a rank-2 vector bundle over an elliptic curve defined by an extension \[0\to\mathcal{O}\to\mathcal{E}\to\mathcal{L}\to 0\] with \(\mathcal{L}\) a line bundle of degree 0 and either 1.
\(\mathcal{L}=\mathcal{O}\) and the extension is non-split, or 2. \(\mathcal{L}\) is not torsion. The structure of the unique element in \(|-K_{F}|\) is either 1. \(2C\), where \(C\) is a smooth elliptic curve, or 2. \(C_{1}+C_{2}\), where \(C_{1}\) and \(C_{2}\) are smooth elliptic curves which do not meet. ### Structure of the fixed part In this subsection, we will describe the fixed divisor of the anticanonical system and prove the following result. **Proposition 3.3**.: _In Setup 3.1, \(A_{v}=0\) and \(A_{h}\) is not a prime divisor. We are in one of the following cases:_ 1. \(-K_{F}=2C\)_, where_ \(C\) _is a smooth elliptic curve. Then_ \(A_{h}=2D\)_, where the restriction of_ \(f\) _to_ \(D\) _induces an elliptic fibration._ 2. \(-K_{F}=C_{1}+C_{2}\)_, where_ \(C_{1}\) _and_ \(C_{2}\) _are smooth elliptic curves which do not meet. Then_ \(k=2\) _and_ \(A_{h}=D_{1}+D_{2}\)_, where for_ \(i=1,2\)_,_ \(D_{i}\simeq C_{i}\times\mathbb{P}^{1}\) _and_ \(f|_{D_{i}}\) _is the second projection._ We start by studying the \(f\)-horizontal part of the fixed divisor \(A\) of \(|-K_{X}|\). **Lemma 3.4**.: _In Setup 3.1, \(A_{h}\) is not a prime divisor and one of the following cases occurs:_ 1. \(-K_{F}=2C\)_, where_ \(C\) _is a smooth elliptic curve, then_ \(A_{h}=2D\)_, where the restriction of_ \(f\) _to_ \(D\) _induces an elliptic fibration._ 2. \(-K_{F}=C_{1}+C_{2}\)_, where_ \(C_{1}\) _and_ \(C_{2}\) _are smooth elliptic curves which do not meet, then_ \(A_{h}=D_{1}+D_{2}\)_, where the restriction of_ \(f\) _to_ \(D_{i}\) _induces an elliptic fibration for_ \(i=1,2\)_, and_ \(D_{1}\cap D_{2}\) _is contained in some fibres of_ \(f\)_._ Proof.: Suppose by contradiction that \(A_{h}\) is a prime divisor. By the adjunction formula, we have \[-K_{A_{h}}\sim(-K_{X}-A_{h})|_{A_{h}}\sim(A_{v}+kF)|_{A_{h}}.\] Clearly \(F|_{A_{h}}\) is nef and \(F\cap A_{h}\) contains a smooth elliptic curve. Moreover, \((F|_{A_{h}})^{2}=0\) and \(k\geq 2\), so we can apply Lemma 2.3 to the surface \(A_{h}\), which implies that \(A_{v}|_{A_{h}}=0\), \(-K_{A_{h}}\sim kF|_{A_{h}}\) and \(A_{h}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve. On the other hand, the restriction of \(f\) to \(A_{h}\) induces a fibration on \(A_{h}\) such that the general fibre \(F|_{A_{h}}\) is either * \(2C\), or * \(C_{1}+C_{2}\) with \(C\), \(C_{1}\) and \(C_{2}\) smooth elliptic curves. The first case is impossible by the generic smoothness of the \(\mathbb{P}^{1}\)-bundle \(A_{h}\). In the second case, \(-K_{A_{h}}\sim kF|_{A_{h}}\) with \(k\geq 2\) contains at least \(4\) elliptic curves (counted with multiplicity). Let \(\ell\subset A_{h}\) be a fibre of the ruling. Then \(-K_{A_{h}}\cdot\ell\geq 4\), which contradicts \(-K_{A_{h}}\cdot\ell=2\) for a \(\mathbb{P}^{1}\)-bundle over a smooth curve. **Lemma 3.5**.: _In case \((ii)\) of Lemma 3.4, we have \(k=2\), \(A_{v}=0\) and \(A=D_{1}+D_{2}\), where \(D_{1}\) and \(D_{2}\) are disjoint with \(D_{i}\simeq C_{i}\times\mathbb{P}^{1}\), \(i=1,2\)._ Proof.: For \(i,j=1,2\) with \(i\neq j\), the adjunction formula gives \[-K_{D_{i}}\sim(-K_{X}-D_{i})|_{D_{i}}\sim(A_{v}+D_{j}+kF)|_{D_{i}}.\] Recall that \(F|_{D_{i}}\) is an elliptic curve, \((F|_{D_{i}})^{2}=0\) and \(k\geq 2\), then by Lemma 2.3, we have \(A_{v}|_{D_{i}}=D_{j}|_{D_{i}}=0\), \(-K_{D_{i}}\sim kF|_{D_{i}}\), and \(D_{i}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve. Hence \(D_{1}\cdot D_{2}=0\), and thus \(D_{1}\) and \(D_{2}\) are disjoint. 
Moreover, the support of a divisor in \(|-K_{X}|\) is connected in codimension one by [20, Lemma 2.3.9]. As \(A_{v}\) does not meet \(F\) and \(A_{v}\cdot D_{1}=A_{v}\cdot D_{2}=0\), we obtain \(A_{v}=0\). Thus \(A=A_{h}\). Since \(D_{i}\) is a \(\mathbb{P}^{1}\)-bundle over the smooth elliptic curve \(C_{i}\), and the restriction of \(f\) to \(D_{i}\) induces an elliptic fibration with \[h^{0}\big{(}D_{i},\mathcal{O}_{D_{i}}(-K_{D_{i}})\big{)}=h^{0}\big{(}D_{i},(f|_{D_{i}})^{*}\mathcal{O}_{\mathbb{P}^{1}}(k)\big{)}=k+1\geq 3,\] we deduce \(D_{i}\simeq C_{i}\times\mathbb{P}^{1}\) by Lemma 2.4; since then \(-K_{D_{i}}\sim 2F|_{D_{i}}\), the relation \(-K_{D_{i}}\sim kF|_{D_{i}}\) forces \(k=2\). In the remainder of this subsection, we study case (i) of Lemma 3.4 and show that \(A_{v}=0\) in this case. **Lemma 3.6**.: _In Setup 3.1, assume that \(A_{h}=2D\), where the restriction of \(f\) to \(D\) induces an elliptic fibration. Then after performing possibly a finite sequence of flops \(\psi\colon X\dasharrow X^{\prime}\) such that \(f\) factors as \(f^{\prime}\circ\psi\) and \(f^{\prime}\) gives a fibration \(f^{\prime}\colon X^{\prime}\to\mathbb{P}^{1}\), we obtain that \(A^{\prime}_{h}=\psi_{*}(A_{h})\) is \(f^{\prime}\)-relatively nef and_ \[|-K_{X^{\prime}}|=A^{\prime}_{h}+A^{\prime}_{v}+|kF^{\prime}|,\] _where \(A^{\prime}_{v}\coloneqq\psi_{*}(A_{v})\) and \(F^{\prime}\coloneqq\psi_{*}(F)\simeq F\) is a general fibre of \(f^{\prime}\)._ _Moreover, \(A^{\prime}_{h}|_{F^{\prime}}=-K_{F^{\prime}}\), \(A^{\prime}_{v}|_{F^{\prime}}=0\) and we can write \(A^{\prime}_{h}=2D^{\prime}\) with \(D^{\prime}=\psi_{*}(D)\) such that the restriction of \(f^{\prime}\) on \(D^{\prime}\) induces an elliptic fibration._ Proof.: Suppose that \(D\) is not \(f\)-relatively nef. Then there exists an \(f\)-vertical curve \(\gamma\) such that \(D\cdot\gamma<0\). Since \(X\) is smooth, for sufficiently small \(\epsilon>0\), the pair \((X,\epsilon D)\) is log-canonical. _Claim._ There exists a \((K_{X}+\epsilon D)\)-negative extremal ray \(\Gamma\) such that \(D\cdot\Gamma<0\) and \(F\cdot\Gamma=0\). By the Cone Theorem, we can write \[\gamma=\sum_{i}\lambda_{i}\Gamma_{i}+R,\] where * \(\lambda_{i}>0\); * all the \(\Gamma_{i}\) are \((K_{X}+\epsilon D)\)-negative extremal rays; * \((K_{X}+\epsilon D)\cdot R\geq 0\). Suppose that every \((K_{X}+\epsilon D)\)-negative extremal ray is \(D\)-nonnegative. Then \(D\cdot\Gamma_{i}\geq 0\) for all \(i\). Therefore, \[0>D\cdot\gamma=\sum_{i}\lambda_{i}D\cdot\Gamma_{i}+D\cdot R\geq D\cdot R,\] i.e. \(D\cdot R<0\). Since \((K_{X}+\epsilon D)\cdot R\geq 0\), we have \[K_{X}\cdot R\geq-\epsilon D\cdot R>0,\] which contradicts the fact that \(-K_{X}\) is nef. Hence, we may assume \(D\cdot\Gamma_{1}<0\). Since \(F\) is nef and \(F\cdot\gamma=0\), we have \[0=F\cdot\gamma=\sum_{i}\lambda_{i}F\cdot\Gamma_{i}+F\cdot R\geq 0\] and thus for all \(i\), \(F\cdot\Gamma_{i}=F\cdot R=0\). Hence, \(\Gamma_{1}\) is an extremal ray satisfying the conditions of the claim. This proves the claim. Let \(c_{\Gamma}\) be the contraction of the extremal ray \(\Gamma\) and let \(\ell\) be a contracted curve. Then \(D\cdot\ell<0\) and \(\ell\) is \(f\)-vertical. Since a general fibre of \(f|_{D}\colon D\to\mathbb{P}^{1}\) is a smooth elliptic curve which is \(D\)-trivial, we obtain that \(\ell\) is contained in some singular fibre of \(f|_{D}\) and the contraction \(c_{\Gamma}\) is small. This implies that \(K_{X}\cdot\ell=0\), since there is no small \(K_{X}\)-negative extremal contraction on smooth threefolds.
Hence there exists a flop \(\psi^{+}\colon X\dasharrow X^{+}\) of \(c_{\Gamma}\) and the flopped threefold \(X^{+}\) is smooth by [14, Theorem 2.4]. Since a flopping curve is contained in some fibre of \(f\), the map \(f\) factors as \(f=f^{+}\circ\psi^{+}\) such that \(f^{+}\colon X^{+}\to\mathbb{P}^{1}\) is a fibration. Moreover, \(F^{+}\coloneqq\psi_{*}^{+}(F)\) is isomorphic to \(F\). Since \(\psi^{+}\) is an isomorphism in codimension one, we have \[|-K_{X^{+}}|=A_{h}^{+}+A_{v}^{+}+|kF^{+}|,\] where \(A_{h}^{+}\coloneqq\psi_{*}^{+}(A_{h})\) and \(A_{h}^{+}|_{F^{+}}=-K_{F^{+}}\), \(A_{v}^{+}\coloneqq\psi_{*}^{+}(A_{v})\) and \(A_{v}^{+}|_{F^{+}}=0\). By repeating the above argument and by the termination of three-dimensional flops, see [13, Corollary 6.19], we deduce that there exists a finite sequence of flops \(\psi\colon X\dashrightarrow X^{\prime}\) such that \(f\) factors as \(f=f^{\prime}\circ\psi\), where \(f^{\prime}\colon X^{\prime}\to\mathbb{P}^{1}\) is a fibration and \(D^{\prime}\coloneqq\psi_{*}(D)\) is \(f^{\prime}\)-relatively nef. In order to show that \(A_{v}=0\) in case (i) of Lemma 3.4, we will assume by contradiction that \(A_{v}\) is non-zero and describe the geometry of the anticanonical system \(|-K_{X}|\) in the following two lemmas. **Lemma 3.7**.: _In case \((i)\) of Lemma 3.4, assume that \(A_{v}\) is non-zero. Then \(D\simeq C\times\mathbb{P}^{1}\) after performing possibly a finite sequence of \(f\)-relative \(D\)-flops. Moreover, if \(\ell\simeq\mathbb{P}^{1}\) is a fibre of the first projection \(pr_{1}\colon D\to C\), then \(A_{v}^{3}=0\) and one of the following cases occurs:_ * \(D\cdot\ell=-1\) _and thus_ \([\ell]\) _generates a_ \(K_{X}\)_-negative extremal ray. In this case, we have_ \(k=2,A_{v}|_{D}\sim C\) _and_ \(D|_{D}\sim-C\)_._ * \(D\cdot\ell=-2\) _and thus_ \([\ell]\) _generates a_ \(K_{X}\)_-trivial extremal ray. In this case, we have_ \(D|_{D}\sim-2C\) _and either_ * \(k=2,A_{v}|_{D}\sim 2C\)_, or_ * \(k=3,A_{v}|_{D}\sim C\)_._ Proof.: By Lemma 3.6, we may assume that \(D\) is \(f\)-nef (by performing possibly a finite sequence of \(D\)-flops). We first note that \(A\) is not nef: otherwise the relative anticanonical divisor \(-K_{X/\mathbb{P}^{1}}\) is nef and thus \(f\colon X\to\mathbb{P}^{1}\) is a product by Lemma 2.8, which gives a contradiction by Remark 3.2. Since \(X\) is smooth, for sufficiently small \(\epsilon>0\), the pair \((X,\epsilon A)\) is log-canonical. By [20, Lemma 2.5], there exists a \((K_{X}+\epsilon A)\)-negative extremal ray \(\Gamma\) such that \(\epsilon A\cdot\Gamma<0\). Let \(c_{\Gamma}\) be the contraction of the extremal ray \(\Gamma\) and let \(\ell\) be an integral curve contracted by \(c_{\Gamma}\). Then \(A\cdot\ell<0\) and thus \(\ell\) is contained in an irreducible component of \(A\). Since \(A\) is \(f\)-nef, we obtain that \(\ell\) is \(f\)-horizontal and thus \(\ell\subset\operatorname{Supp}A_{h}\), \[F\cdot\ell\geq 1\text{ and }A_{v}\cdot\ell\geq 0. \tag{3}\] (A) If \(c_{\Gamma}\) is a divisorial contraction, then \(D\) is the exceptional divisor of \(c_{\Gamma}\). Since \(f_{h}\coloneqq D\cap F\) is an integral curve with \(F\cdot f_{h}=0\), we have \([f_{h}]\not\in\Gamma\). Hence \(c_{\Gamma}\) contracts the prime divisor \(D\) horizontally to a curve that we denote by \(C^{\prime}\). _Step A1._ In this step, we consider the case \(K_{X}\cdot\ell<0\).
Then \(c_{\Gamma}\) is a \(K_{X}\)-negative extremal contraction and \(D\) is a ruled surface with fibre \(\ell\) by the classification of Mori, see [13, Section 3]. Then \(D\cdot\ell=-1\) and \(-K_{X}\cdot\ell=1\), i.e. \((2D+A_{v}+kF)\cdot\ell=1\) with \(k\geq 2\) and \(A_{v}\neq 0\). Moreover, \(\ell\) moves on the surface \(D\) and thus \(A_{v}\cdot\ell>0\). Hence, we obtain \(A_{v}\cdot\ell=1\), \(F\cdot\ell=1\) and \(k=2\). _Step A2._ In this step, we consider the case \(K_{X}\cdot\ell=0\). Then \(c_{\Gamma}\) is an extremal crepant contraction. We will prove in this step that \(D\) is a ruled surface with fibre \(\ell\). By Proposition 2.7, \(c_{\Gamma}|_{D}\colon D\to C^{\prime}\) is a conic bundle whose possible singular fibre is two lines. Now suppose by contradiction that there exists a singular fibre \(\ell_{0}\) of \(c_{\Gamma}|_{D}\colon D\to C^{\prime}\), consisting of two lines \(\ell_{1}\) and \(\ell_{2}\). Then \[D\cdot\ell_{1}=D\cdot\ell_{2}=-1.\] On the other hand, since \([\ell_{i}]\in\Gamma\) for \(i=1,2\), we have \(F\cdot\ell_{i}\geq 1\) and \(A_{v}\cdot\ell_{i}\geq 0\) by (3). Since \[2D\cdot\ell_{i}+A_{v}\cdot\ell_{i}+kF\cdot\ell_{i}=-K_{X}\cdot\ell_{i}=0,\] we obtain \(A_{v}\cdot\ell_{i}+kF\cdot\ell_{i}=2\) and thus \(A_{v}\cdot\ell_{i}=0\) and \(k=2\). Therefore \(A_{v}\cdot\ell=0\). We have \(A_{v}\cdot A_{h}=0\): otherwise \(A_{h}\cdot A_{v}\) is a non-zero effective \(1\)-cycle which is \(f\)-vertical, and thus the divisor \((c_{\Gamma})_{*}(A_{v})\) contains the curve \(C^{\prime}\), which contradicts the fact that \(A_{v}\cdot\ell=0\). Since \(A_{v}\cdot F=0\), \(A_{v}\cdot A_{h}=0\) and \(-K_{X}\sim A_{v}+A_{h}+kF\) is nef, we obtain that \(A_{v}\) is nef and thus \(A_{v}=0\), which contradicts the assumption of the lemma. We conclude that \(c_{\Gamma}|_{D}\colon D\to C^{\prime}\) is a regular conic bundle and thus \(D\) is a ruled surface with fibre \(\ell\). Therefore, \(D\cdot\ell=-2\) and \(-K_{X}\cdot\ell=0\), i.e. \((2D+A_{v}+kF)\cdot\ell=0\) with \(k\geq 2\) and \(A_{v}\neq 0\). Moreover, \(\ell\) moves on the surface \(D\) and thus \(A_{v}\cdot\ell>0\). Hence, we obtain \(F\cdot\ell=1\) and either \(A_{v}\cdot\ell=1\), \(k=3\) or \(A_{v}\cdot\ell=2\), \(k=2\). _Step A3._ In both _Steps A1_ and _A2_, the curve \(C^{\prime}\) is a smooth elliptic curve as \(F\cdot\ell=1\). Thus \(D\) is a ruled surface over a smooth elliptic curve whose fibres are \(f\)-horizontal. Moreover, \(f|_{D}\colon D\to\mathbb{P}^{1}\) induces an elliptic fibration. In the remainder of this step, we will show that \(D\simeq C\times\ell\), where \(\ell\simeq\mathbb{P}^{1}\) and \(C=F|_{D}\) is a smooth elliptic curve. Denote by \(Y\) the threefold obtained by the contraction \(c_{\Gamma}\). Then \(D=\mathbb{P}(N^{*}_{C^{\prime}/Y})\). Let \(V=N^{*}_{C^{\prime}/Y}\otimes\mathcal{L}\) with \(\mathcal{L}\in\operatorname{Pic}(C^{\prime})\) be a normalisation of \(N^{*}_{C^{\prime}/Y}\), see [10, Chapter V, Proposition 2.8]. Let \(\mu\coloneqq\deg\mathcal{L}\) and let \(C_{0}\) be a canonical section of the tautological line bundle \(\mathcal{O}_{\mathbb{P}(V)}(1)\) such that \(C_{0}^{2}=-e=c_{1}(V)\). Then \(-K_{D}\equiv 2C_{0}+e\ell\), and, in the cases \(D\cdot\ell=-1\) and \(D\cdot\ell=-2\) respectively: 1. \(N_{D/X}\equiv-C_{0}+\mu\ell\), \(-K_{X}|_{D}\equiv C_{0}+m\ell\) with \(m\geq e\); 2. \(N_{D/X}\equiv-2C_{0}+\mu\ell\), \(-K_{X}|_{D}\equiv m\ell\) with \(m\geq 0\). Note that \(e=0\) or \(-1\) by [11, Theorem 5], since \(D\) is a ruled surface over a smooth elliptic curve and admits an elliptic fibration. (The canonical-class formula underlying these expressions is recalled below.)
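For reference, the displays above and the adjunction computation below use the standard canonical-class formula for ruled surfaces (a standard fact; cf. [10, Chapter V]): if \(D\to C^{\prime}\) is a ruled surface over a curve of genus \(g\) with invariant \(e=-\deg V\), canonical section \(C_{0}\) and fibre \(\ell\), then \[K_{D}\equiv-2C_{0}+(2g-2-e)\,\ell,\qquad\text{so that for }g=1\colon\quad-K_{D}\equiv 2C_{0}+e\,\ell.\]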
By the adjunction formula, we have \[-K_{D}\sim(-K_{X}-D)|_{D}\sim(D+A_{v}+kF)|_{D},\] i.e. \(2C_{0}+e\ell\equiv 2C_{0}+m\ell-\mu\ell\). Hence \(\mu=m-e\) and 1. \(2C_{0}+e\ell\equiv-C_{0}+(m-e)\ell+A_{v}|_{D}+kF|_{D}\), i.e. \(3C_{0}+(2e-m)\ell\equiv A_{v}|_{D}+kF|_{D}\) with \(m\geq e\); 2. \(2C_{0}+e\ell\equiv-2C_{0}+(m-e)\ell+A_{v}|_{D}+kF|_{D}\), i.e. \(4C_{0}+(2e-m)\ell\equiv A_{v}|_{D}+kF|_{D}\) with \(m\geq 0\). Since \(F|_{D}\) and \(A_{v}|_{D}\) are non-zero effective \(f\)-vertical \(1\)-cycles, we obtain in both cases \(2e-m\geq 0\) and thus \(e\geq 0\). Hence \(e=0\) and \(m=0\). Since moreover \(A_{v}|_{D}\) is non-zero and \(k\geq 2\), we obtain that \(A_{v}|_{D}-C_{0}\) is effective, \(F|_{D}\sim C_{0}\) and thus \(C_{0}\) moves on the surface \(D\). Therefore, \[h^{0}\big{(}D,\mathcal{O}_{D}(-K_{D})\big{)}=h^{0}\big{(}D,\mathcal{O}_{D}(2C_{ 0})\big{)}\geq 3.\] By Lemma 2.4, we conclude that \(D\) is a product. (B) If \(c_{\Gamma}\) is a small contraction, then \(K_{X}\cdot\ell=0\) and \(\ell\) is a flopping curve. Since \(\ell\) is an \(f\)-horizontal curve contained in \(A_{h}=2D\) and \(D|_{F}\) is a smooth elliptic curve, \(A\) has multiplicity \(2\) along \(\ell\) and thus \(k=2\) by Lemma 2.6. The remainder of the proof is devoted to show \(D\simeq C\times\mathbb{P}^{1}\). _Step B1._ We first prove that \(-K_{D}\cdot\ell\leq 1\). Assume by contradiction that \(-K_{D}\cdot\ell\geq 2\). Since a general fibre of \(f|_{D}\colon D\to\mathbb{P}^{1}\) is smooth and the curve \(\ell\subset D\) is \(f\)-horizontal, we obtain that \(\ell\) intersects the smooth locus of \(D\). Hence the smooth rational curve \(\ell\) deforms on the surface \(D\) by [13, Chapter 2, Theorem 1.14]. This contradicts the fact that the extremal ray \(\Gamma=\mathbb{R}_{+}[\ell]\) contains only a finite number of curves. _Step B2._ We show that \[D\cdot\ell=-1,A_{v}\cdot\ell=0\text{ and }F\cdot\ell=1. \tag{4}\] By the adjunction formula, since \(F\cdot\ell\geq 1\), we have \[-K_{D}\cdot\ell=(-K_{X}-D)\cdot\ell=D\cdot\ell+A_{v}\cdot\ell+2F\cdot\ell \geq D\cdot\ell+A_{v}\cdot\ell+2.\] Since \(-K_{D}\cdot\ell\leq 1\), we have \(D\cdot\ell+A_{v}\cdot\ell\leq-1\). On the other hand, since \(-K_{X}\cdot\ell=0\), we have \(-D\cdot\ell=-K_{D}\cdot\ell\leq 1\), i.e. \(D\cdot\ell\geq-1\). Together with \(D\cdot\ell<0\) and \(A_{v}\cdot\ell\geq 0\), we obtain \(D\cdot\ell=-1\) and \(A_{v}\cdot\ell=0\). Since \(-K_{X}=2D+A_{v}+2F\) and \(-K_{X}\cdot\ell=0\), we obtain \(F\cdot\ell=1\). _Step B3._ We prove that \(-K_{D}\) is nef. Assume first that \(D\) is a product of \(\mathbb{P}^{1}\) and a smooth elliptic curve, then we have \(-K_{D}\sim 2F|_{D}\) is nef. It remains to consider the case when \(D\) is not a product. Let \(B\) be an integral curve contained in \(D\). If \(B\) is \(f\)-vertical, then \(D\cdot B=0\) by Lemma 2.9. By the adjunction formula, we have \[-K_{D}\cdot B=(-K_{X}-D)\cdot B=-K_{X}\cdot B\geq 0.\] If \(B\) is \(f\)-horizontal, then by the Cone Theorem we can write \[B=\sum_{i}m_{i}\ell_{i}+R, \tag{5}\] where * \(m_{i}\in\mathbb{Q}^{+}\); * each \(\ell_{i}\) is \(D\)-negative (and thus \((K_{X}+\epsilon D)\)-negative) and generates a \((K_{X}+\epsilon D)\)-negative extremal ray \(\Gamma_{i}=\mathbb{R}_{+}[\ell_{i}]\) for \(\epsilon>0\) sufficiently small such that \((X,\epsilon D)\) is a log-canonical pair; * \(R\) is \(D\)-nonnegative. 
Note that each \(\ell_{i}\) in (5) is \(K_{X}\)-trivial: otherwise \(\ell_{i}\) generates a \(K_{X}\)-negative extremal ray \(\Gamma_{i}\) with \(D\cdot\ell_{i}<0\), thus \(D\) is the exceptional divisor of the contraction associated to \(\Gamma_{i}\) and \(\ell_{i}\) is \(f\)-horizontal. Thus by _Step A1_, we obtain \(D\simeq C\times\ell_{i}\) with \(C=F|_{D}\) a smooth elliptic curve. This contradicts the assumption that there exists a flopping curve \(\ell\) contained in \(D\). Note also that _Steps B1_ and _B2_ hold for every \(\ell_{i}\), since each \(\Gamma_{i}=\mathbb{R}_{+}[\ell_{i}]\) is a \(K_{X}\)-trivial extremal ray with \(D\cdot\ell_{i}<0\). Hence by (4), we obtain \[D\cdot\ell_{i}=-1,\quad F\cdot\ell_{i}=1. \tag{6}\] Thus, \[-K_{D}\cdot B =-K_{X}\cdot B-D\cdot B\] \[=D\cdot B+A_{v}\cdot B+2F\cdot B\] \[=\sum_{i}m_{i}D\cdot\ell_{i}+D\cdot R+A_{v}\cdot B+2\sum_{i}m_{i}F\cdot\ell_{i}+2F\cdot R \text{by (5)}\] \[=\sum_{i}m_{i}+D\cdot R+A_{v}\cdot B+2F\cdot R \text{by (6)}\] \[\geq 0.\] _Step B4._ In this step we conclude that \(D\) is a product of \(\mathbb{P}^{1}\) and a smooth elliptic curve. Assume that \(D\) is not a product. Then there exists an \(f\)-horizontal flopping curve \(\ell_{0}\) contained in \(D\), thus \(D\) carries an elliptic fibration over \(\mathbb{P}^{1}\) with a section, and \(-K_{D}\) is nef. Let \(\nu\colon\widehat{D}\to D\) be the normalisation of the surface \(D\) and let \(\mu\colon\widetilde{D}\to\widehat{D}\) be the minimal resolution of the surface \(\widehat{D}\). Denote by \(\pi\coloneqq\nu\circ\mu\colon\widetilde{D}\to D\) the composition map. Then \[-K_{\widetilde{D}}=\pi^{*}(-K_{D})+E_{0},\] where \(E_{0}\) is an effective Weil divisor. Moreover, the divisor \(E_{0}\) is \(f\)-vertical since a general fibre of \(f|_{D}\colon D\to\mathbb{P}^{1}\) is smooth. Let \(h\colon D_{\min}\to\mathbb{P}^{1}\) be a relative minimal model of the induced elliptic fibration \(\widetilde{D}\to\mathbb{P}^{1}\), i.e. we take the successive blow-down \(g\colon\widetilde{D}\to D_{\min}\) of \((-1)\)-curves contained in fibres. Then \[-K_{\widetilde{D}}+E^{\prime}=g^{*}(-K_{D_{\min}}),\] where \(E^{\prime}\) is an effective divisor. Since \(f|_{D}\colon D\to\mathbb{P}^{1}\) has a section, \(D_{\min}\) is a smooth surface with relatively minimal elliptic fibration over \(\mathbb{P}^{1}\) having a section that we denote by \(\Theta_{\min}\). By [10, 8.3], we have \[-K_{D_{\min}}\sim h^{*}\mathcal{O}_{\mathbb{P}^{1}}\big{(}2-\chi(\mathcal{O}_{D_{\min}})\big{)}.\] Moreover, since \(-K_{D}\) is nef and non-trivial, together with [10, 8.3], we are in one of the following cases: * \(\chi(\mathcal{O}_{D_{\min}})=0\) and thus \(D_{\min}\) is a product, or * \(\chi(\mathcal{O}_{D_{\min}})=1\) and thus \(D_{\min}\) is a rational surface with \(-K_{D_{\min}}\sim h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\). In the first case, we have \[g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(2)\big{)}\sim-K_{\widetilde{D}}+E^{\prime}\sim\pi^{*}(-K_{D})+E_{0}+E^{\prime}.\] If \(\pi^{*}(-K_{D})\sim g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(2)\big{)}\), then \(E_{0}=0\) and \(E^{\prime}=0\). Thus \(D\) is a normal surface with at worst rational singularities and its minimal resolution \(\widetilde{D}\) is a product. We conclude that \(D\) is also a product, which contradicts our assumption.
If \(\pi^{*}(-K_{D})\sim g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\), then \(E_{0}+E^{\prime}\sim g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\) is a fibre of the elliptic fibration \(\widetilde{D}\to\mathbb{P}^{1}\) and \(E_{0}+E^{\prime}\) is obtained from a smooth elliptic curve \(B\) by successively blowing up points. In particular, \(E_{0}+E^{\prime}\) is a reduced tree of smooth curves so that after contracting any irreducible component, we still obtain a reduced tree of smooth curves. Since moreover \(E^{\prime}\) consists of smooth rational curves, we have \(B\subset\operatorname{Supp}E_{0}\) and thus \(\pi_{*}(E^{\prime}+E_{0})\) is a reduced tree of smooth rational curves. This contradicts the fact that every fibre of \(f|_{D}\) has arithmetic genus one. In the second case, we obtain that \[g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\sim-K_{\widetilde{D}}+E^{\prime}\sim\pi^{*}(-K_{D})+E_{0}+E^{\prime}\] is a fibre of the elliptic fibration \(\widetilde{D}\to\mathbb{P}^{1}\). Note that \(g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\) is not divisible in \(\operatorname{Pic}(\widetilde{D})\). This is because the elliptic fibration \(\widetilde{D}\to\mathbb{P}^{1}\) has a section that we denote by \(\widetilde{\Theta}\) and \[g^{*}\big{(}h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\cdot\widetilde{\Theta}=h^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\cdot\Theta_{\min}=1.\] Now since \(\pi^{*}(-K_{D})\) is non-zero, effective and nef, and \(E_{0}+E^{\prime}\) is an effective \(f\)-vertical divisor, we obtain \[E_{0}=0,\quad E^{\prime}=0\quad\text{and}\quad-K_{D}\sim F|_{D}.\] Hence \(D\) is a normal rational surface with at worst rational singularities, and it has a relatively minimal elliptic fibration induced by \(f|_{D}\colon D\to\mathbb{P}^{1}\). By the adjunction formula, \[F|_{D}\sim-K_{D}\sim D|_{D}+A_{v}|_{D}+2F|_{D},\] and we obtain \[(D|_{D})^{2}=(-F|_{D}-A_{v}|_{D})^{2}=(A_{v}|_{D})^{2},\] as \(F^{2}=F\cdot A_{v}=0\). On the other hand, since \(D|_{D}\) is an \(f\)-vertical \(1\)-cycle contained in \(D\) and \(D\) is \(f\)-relatively nef by assumption, we have \((D|_{D})^{2}=0\) by Lemma 2.9. Hence \((A_{v}|_{D})^{2}=0\) and thus by [2, Corollary 2.6], \(A_{v}|_{D}\sim mF|_{D}\) for some \(m>0\). Therefore, \(A_{v}\cdot\ell_{0}=m>0\), which contradicts the fact that \(A_{v}\cdot\ell_{0}=0\). **Lemma 3.8**.: _In the setting of Lemma 3.7, let \(A_{0}\) be a connected component of \(A_{v}\) contained in some fibre \(F_{0}\) of \(f\) and let \(A_{1}\) be an irreducible component of \(A_{0}\) such that \(A_{1}\cap D=F_{0}\cap D=:C_{0}\), where \(C_{0}\) is a smooth elliptic curve. Then \(A_{0}=A_{1}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve such that_ \[-K_{A_{1}}\sim 2C_{0}\] _is nef._ Proof.: We first note that \(D\cap F_{0}\) is reduced: by Lemma 3.7 and using the same notation, we have \(F_{0}|_{D}\sim mC\) with \(m\in\mathbb{N}^{*}\) and \(F_{0}|_{D}\cdot\ell=F|_{D}\cdot\ell=1\), i.e. \(mC\cdot\ell=1\) on the smooth surface \(D\) and thus \(m=1\). By the adjunction formula, \[-K_{A_{1}}\sim(-K_{X}-A_{1})|_{A_{1}}\sim(2D+A_{0}-A_{1})|_{A_{1}}=2C_{0}+(A_{0}-A_{1})|_{A_{1}}.\] Since \(D\cap A_{0}=C_{0}\) and \(A_{1}|_{D}=C_{0}\), \(A_{0}-A_{1}\) does not contain \(A_{1}\) as a component. Thus the \(1\)-cycle \((A_{0}-A_{1})|_{A_{1}}\) on \(A_{1}\) is effective and does not meet \(C_{0}\). On the surface \(A_{1}\), we have \[C_{0}^{2}=(D|_{A_{1}})^{2}=D^{2}\cdot A_{1}=0,\] since \(D^{2}\equiv-C\) or \(-2C\) by Lemma 3.7.
Therefore, by Lemma 2.3, \((A_{0}-A_{1})|_{A_{1}}=0\) and the surface \(A_{1}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve. Moreover, \(A_{0}-A_{1}=0\) as \(A_{0}\) is connected. Now, it remains to exclude the two cases described in Lemma 3.7. We start by excluding the first case. **Lemma 3.9**.: _Case (i) of Lemma 3.7 cannot happen._ Proof.: In case (i) of Lemma 3.7, \(A_{v}|_{D}\sim C\). Thus \(A_{v}\) has a unique connected component and it is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve by Lemma 3.8. Suppose that \(D\) is horizontally contracted to a smooth elliptic curve by a \(K_{X}\)-negative extremal contraction \(\varphi\colon X\to Y\). Then by Lemma 2.5, we obtain that \(-K_{Y}\) is nef, and \[\kappa(Y,-K_{Y})=\kappa(X,-K_{X})=1,\quad\nu(Y,-K_{Y})=\nu(X,-K_{X})=2.\] As \(\varphi\) contracts the curves meeting \(F\) (resp. \(A_{v}\)) transversally at one point, \(G\coloneqq\varphi(F)\simeq F\), and \(A^{\prime}_{v}\coloneqq\varphi(A_{v})\simeq A_{v}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve by Lemma 3.8. Moreover, we have \[-K_{Y}\sim A^{\prime}_{v}+2G,\] and two general members in \(|G|\) (resp. \(A^{\prime}_{v}\) and a general member in \(|G|\)) intersect along the smooth elliptic curve \(C^{\prime}\coloneqq\varphi_{*}(D)\). Note that \(|-K_{Y}|\) must have a non-zero fixed divisor and thus \(A^{\prime}_{v}\) is the fixed divisor: otherwise \(A^{\prime}_{v}\) is divisible by two in \(\operatorname{Pic}(Y)\) by [20, Theorem 1.1], and thus \(A^{\prime}_{v}|_{G}\) is divisible by two in \(\operatorname{Pic}(G)\), which contradicts the fact that \(A^{\prime}_{v}|_{G}\sim C^{\prime}\) is a section of the \(\mathbb{P}^{1}\)-bundle \(G\). Therefore, \[|-K_{Y}|=A^{\prime}_{v}+|2G|.\] Since \(G\cdot C^{\prime}=C^{\prime 2}=0\), where the last intersection number is computed on the surface \(G\), we deduce that \(G\) is nef but \(G^{2}\neq 0\). This contradicts Theorem 1.2. Finally, we will exclude the second case described in Lemma 3.7 and thus conclude, by contradiction, that \(A_{v}\) is indeed zero. **Lemma 3.10**.: _Consider case (ii) of Lemma 3.7. Let \(\varphi\colon X\to Y\) be a divisorial \(K_{X}\)-negative extremal contraction and let \(E\) be the exceptional divisor. Then \(E\) is contained in some special fibre of \(f\) which contains an irreducible component of \(A_{v}\), and \(E\) is not a component of \(A_{v}\)._ _Moreover, \(E\) is contracted by \(\varphi\) to a smooth curve of positive genus. Thus \(Y\) is smooth with \(-K_{Y}\) nef, the fibration \(f\) factors as \(f=f^{\prime}\circ\varphi\) such that \(f^{\prime}\colon Y\to\mathbb{P}^{1}\) gives \(Y\) the fibration structure, and \(\varphi(F)\simeq F\), \(\varphi(D)\simeq D\), \(\varphi(A_{v})\simeq A_{v}\)._ Proof.: Let \(\Gamma\) be the \(K_{X}\)-negative extremal ray corresponding to the contraction \(\varphi\). Let \(\gamma\) be a rational curve such that \([\gamma]\) generates \(\Gamma\) and \(-K_{X}\cdot\gamma=l(\Gamma)\). **Case \(A\cdot\gamma=0\).** In this case, we have \(F\cdot\gamma=1\) and \(-K_{X}\cdot\gamma=A\cdot\gamma+kF\cdot\gamma=k=2\). Hence \(\varphi\) is the blow-up of \(Y\) at a smooth point, with exceptional divisor \(E\simeq\mathbb{P}^{2}\). As \(E\) is not fibred, it is contained in a fibre of \(f\) and thus \(F\cdot E=0\). This contradicts the fact that \(F\cdot\gamma=1\). **Case \(A\cdot\gamma<0\).** Since the contraction is divisorial and \(F\cdot\gamma>0\), \(E\) is an irreducible component of \(A_{h}\), i.e. \(E=D\).
Since \(D\simeq C\times\mathbb{P}^{1}\), where \(C\) is a smooth elliptic curve, is the exceptional divisor of a \(K_{X}\)-trivial extremal contraction by assumption of the lemma, this contradicts the fact that \(E\) is the exceptional divisor of a \(K_{X}\)-negative contraction. **Case \(A\cdot\gamma>0\).** In this case, we have \(F\cdot\gamma=0\) since otherwise \(-K_{X}\cdot\gamma>2\), which contradicts the classification of Mori. _Step 1._ We first note that \(E\) is not an irreducible component of \(A_{h}\) by the same argument as in the previous case. In the remainder of this step, we will show that \(E\) is not an irreducible component of \(A_{v}\). Suppose by contradiction that \(E\) is an irreducible component of \(A_{v}\) contained in \(F_{0}\). Then \(E\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve by Lemma 3.8; thus \(E\) is contracted to a smooth elliptic curve by \(\varphi\), and \(-K_{Y}\) is nef with \(\kappa(Y,-K_{Y})=\kappa(X,-K_{X})=1\), \(\nu(Y,-K_{Y})=\nu(X,-K_{X})=2\) by Lemma 2.5. We have \(G\coloneqq\varphi(F)\simeq F\), \(D^{\prime}\coloneqq\varphi(D)\simeq D\), and the fibration \(f\) factors as \(f=f^{\prime}\circ\varphi\) such that \(f^{\prime}\colon Y\to\mathbb{P}^{1}\) gives \(Y\) the fibration structure. Since \(E\subset A_{v}\) intersects \(D\) along a smooth elliptic fibre of \(D\) by Lemma 3.8, \(D^{\prime}\) is now the exceptional divisor of a \(K_{Y}\)-negative extremal contraction which contracts \(D^{\prime}\) horizontally to a smooth elliptic curve. Hence \(D^{\prime}\) is not a component of the fixed divisor of \(|-K_{Y}|\) by Lemma 3.9. If \(k=3\), then \(|-K_{Y}|=|2D^{\prime}+3G|\) has no fixed divisor, as \(D^{\prime}\) is not a fixed divisor of \(|-K_{Y}|\) and \(h^{0}\big{(}Y,\mathcal{O}_{Y}(G)\big{)}=h^{0}\big{(}\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}=2\). Since \(G\) is not divisible by two in \(\operatorname{Pic}(Y)\), this case cannot happen by [20, Theorem 1.1]. If \(k=2\), then \(-K_{Y}\sim A^{\prime}_{v}+2D^{\prime}+2G\), where \(A^{\prime}_{v}\) is the image of the other irreducible component of \(A_{v}\), which is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve by Lemma 3.8. Note that \(|-K_{Y}|\) must have a non-zero fixed divisor and thus \(A^{\prime}_{v}\) is the fixed divisor: otherwise \(A^{\prime}_{v}\) is divisible by \(2\) in \(\operatorname{Pic}(Y)\) by [20, Theorem 1.1]; but \(-K_{D^{\prime}}\sim 2G|_{D^{\prime}}\) as \(D^{\prime}\simeq D\) is a product, and by the adjunction formula, \[-K_{D^{\prime}}\sim(-K_{Y}-D^{\prime})|_{D^{\prime}}\sim(A^{\prime}_{v}+D^{\prime}+2G)|_{D^{\prime}}\sim A^{\prime}_{v}|_{D^{\prime}}+G|_{D^{\prime}},\] and thus we obtain that \(G|_{D^{\prime}}\sim A^{\prime}_{v}|_{D^{\prime}}\) is divisible by two in \(\operatorname{Pic}(D^{\prime})\), which contradicts the fact that \(G|_{D^{\prime}}\) is a fibre of the product \(D^{\prime}\). Hence \(|-K_{Y}|=A^{\prime}_{v}+|2D^{\prime}+2G|\). Since \(D^{\prime}+G\) is nef but \[(D^{\prime}+G)^{2}=D^{\prime 2}+2D^{\prime}\cdot G=D^{\prime}\cdot G\] is a non-zero effective \(1\)-cycle, this cannot happen by Theorem 1.2. _Step 2._ Denote by \(A_{1}\) an irreducible component of \(A_{v}\). Since \(A_{1}\) (resp. \(D\) and resp. \(F\)) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve, we deduce that \(E\cap A_{1}\) (resp. \(E\cap D\) and resp. \(E\cap F\)) does not contain \(\gamma\) or any of its deformations: otherwise \(\gamma\) moves on the surface \(A_{1}\) (resp. \(D\) and resp.
\(F\)), which gives a contradiction. Hence, we obtain the following conclusions:
* \(E\) is contained in some special fibre \(F_{0}\) of \(f\), thus \(\varphi\) is an isomorphism outside \(F_{0}\) and \(\varphi(F)\simeq F\);
* \(E\) is contracted to a smooth curve and thus \(A\cdot\gamma=1\). This is because \(E\) must meet \(A=2D+A_{v}\), but \(E\cap D\) (resp. \(E\cap A_{v}\)) does not contain \(\gamma\) or any of its deformations.
Therefore, \(D\cdot\gamma\geq 0\) and \(A_{v}\cdot\gamma\geq 0\). Since \(A\cdot\gamma=(2D+A_{v})\cdot\gamma=1\), we obtain \(D\cdot\gamma=0\) and \(A_{v}\cdot\gamma=1\). From now on, we consider \(A_{1}\) as the irreducible component of \(A_{v}\) contained in \(F_{0}\). Since \(\gamma\) meets \(A_{1}\) transversally at one point, we have \(\varphi(A_{1})\simeq A_{1}\). As \(E\) does not meet the other connected components of \(A_{v}\), we have \(\varphi(A_{v})\simeq A_{v}\). Since \(D\cdot\gamma=0\) and \(D\) does not contain \(\gamma\) or any of its deformations, we deduce that \(E\subset F_{0}\) is disjoint from \(D\). Thus \(\varphi(D)\simeq D\) and the restriction \(E|_{A_{1}}\) is disjoint from \(C_{0}\coloneqq A_{1}\cap D\). Moreover, \(A_{1}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve, thus \(E\) is contracted to a smooth curve of genus at least one. Hence by [13, Proposition 3.3], \(-K_{Y}\) is nef. **Corollary 3.11**.: _In case (i) of Lemma 3.4, we have \(A_{v}=0\)._ Proof.: Suppose that \(A_{v}\) is non-zero. It remains to exclude case (ii) of Lemma 3.7. In the setting of case (ii) of Lemma 3.7, after finitely many steps of divisorial contractions, we may assume, by Lemma 3.10, that \(X\) satisfies the following:
* \(X\) is smooth and rationally connected, and it has a fibration structure \(f\colon X\to\mathbb{P}^{1}\) with general fibre \(F\), where \(F\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve such that \(-K_{F}\) is nef and not semi-ample;
* \(-K_{X}\) is nef and \(-K_{X}\sim 2D+A_{v}+kF\), where \(k\geq 2\), \(f|_{D}\colon D\simeq C\times\mathbb{P}^{1}\to\mathbb{P}^{1}\) is the second projection and \(C\) is a smooth elliptic curve, \(A_{v}\) is contained in some fibres of \(f\) and every connected component of \(A_{v}\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve.
* If \(\varphi\colon X\to Y\) is a \(K_{X}\)-negative extremal contraction, then \(\varphi\) is non-birational.
Let \(\Gamma\) be the \(K_{X}\)-negative extremal ray corresponding to the contraction \(\varphi\). Let \(\gamma\) be a rational curve such that \([\gamma]\) generates \(\Gamma\) and \(-K_{X}\cdot\gamma=l(\Gamma)\). Note that \(\varphi\) is not a del Pezzo fibration. Otherwise we have \(Y\simeq\mathbb{P}^{1}\), \(-K_{X}\cdot\gamma\in\{1,2,3\}\), and every fibre of \(\varphi\) is integral, with general fibre isomorphic to a smooth del Pezzo surface. Since \([\gamma]\) is a movable class, we have \(D\cdot\gamma\geq 0\) with equality if and only if \(D\) is a fibre of \(\varphi\). Since \(-K_{D}\) is not ample, \(D\) is not a fibre of \(\varphi\) and one has \(D\cdot\gamma>0\). If \(-K_{X}\cdot\gamma\in\{1,2\}\), this implies \(F\cdot\gamma=0\) and thus \(F=\varphi^{*}(p)\) with \(p\in Y\). Therefore, \(f\) and \(\varphi\) coincide. If \(-K_{X}\cdot\gamma=3\), then \(\varphi\) is a \(\mathbb{P}^{2}\)-bundle. As \(\mathbb{P}^{2}\) is not fibred, \(F\) restricted to a \(\mathbb{P}^{2}\) is trivial. Hence again, the two fibrations \(\varphi\) and \(f\) coincide. This contradicts the fact that \(-K_{F}\) is not ample.
Since \(X\not\simeq F\times\mathbb{P}^{1}\), by Proposition 2.11, \(\varphi\colon X\to Y\) is a conic bundle such that \(f\) factors as \(f=f^{\prime}\circ\varphi\) and \(Y\) is a smooth rational surface. Then any fibre of \(\varphi\) is an \(f\)-vertical curve and we have \(F\cdot\gamma=0\), \(D\cdot\gamma>0\). Moreover, since \(-K_{X}\cdot\gamma=(2D+A_{v}+kF)\cdot\gamma\in\{1,2\}\) and \(A_{v}\cdot\gamma\geq 0\), we have \(A_{v}\cdot\gamma=0\) and \(D\cdot\gamma=1\). Hence \(\varphi\) induces a birational morphism from \(D\) to \(Y\). We obtain a contradiction, since \(q(D)=1\) and \(q(Y)=0\). Proof of Proposition 3.3.: It follows from Lemma 3.4, Lemma 3.5 and Corollary 3.11. ### Running the Minimal Model Program In order to achieve the classification in Theorem 1.4, we will start by running the Minimal Model Program. We first consider a birational contraction. **Proposition 3.12**.: _In Setup 3.1, assume that there exists a birational \(K_{X}\)-negative extremal contraction \(\varphi\colon X\to Y\). Then \(A=D_{1}+D_{2}\) with \(D_{1}\), \(D_{2}\) disjoint and \(f\) factors as \(f=f^{\prime}\circ\varphi\). Furthermore, \(Y\) satisfies again Setup 3.1 with \(|-K_{Y}|=D_{1}^{\prime}+D_{2}^{\prime}+|2G|\), \(D_{1}^{\prime}\simeq D_{1}\), \(D_{2}^{\prime}\simeq D_{2}\), \(G\simeq F\), and \(\varphi\) is the blow-up of \(Y\) along a smooth elliptic curve in some fibre of \(f^{\prime}|_{D_{i}^{\prime}}\), for \(i\in\{1,2\}\)._ Proof.: We use the same notation as in Proposition 3.3. Let \(\Gamma\) be the \(K_{X}\)-negative extremal ray corresponding to the contraction \(\varphi\). Let \(\ell\) be a rational curve such that \([\ell]\) generates \(\Gamma\) and \(-K_{X}\cdot\ell=l(\Gamma)\). Let \(E\) be the exceptional divisor of \(\varphi\). By Proposition 2.10, \(\varphi\) contracts \(E\) to a smooth curve, and one of the following cases occurs. (i) The divisor \(E\) is an irreducible component of \(A\) and \(\varphi\) contracts \(E\) horizontally to a smooth curve.
* If \(A=2D\), then \(E=D\) is a \(\mathbb{P}^{1}\)-bundle over a smooth elliptic curve and \(\varphi\) contracts \(E\) to a smooth elliptic curve. This implies that \(Y\) is smooth with \(-K_{Y}\) nef. In this case, we have \(D\cdot\ell=-1\), \(-K_{X}\cdot\ell=2D\cdot\ell+kF\cdot\ell=1\), and thus \(F\cdot\ell=1\), \(k=3\). As we contract the curves meeting \(F\) transversally, we conclude that \(G\coloneqq\varphi(F)\simeq F\). Since \[-K_{Y}=\varphi_{*}(-K_{X})=\varphi_{*}(A+3F)=3\varphi_{*}(F)=3G,\] we see that \(|-K_{Y}|=|3G|\) has no fixed divisor. This contradicts the fact that \(-K_{Y}\) is divisible by two in \(\operatorname{Pic}(Y)\) by [13, Theorem 1.1].
* If \(A=D_{1}+D_{2}\), then \(E=D_{i}\) with \(i=1\) or \(2\) and \(\varphi\) contracts \(E\) to a smooth elliptic curve. Thus \(Y\) is smooth and \(-K_{Y}\) is nef. In this case, we have \(D_{i}\cdot\ell=-1\) and \(F\cdot\ell=1\). As we contract the curves meeting \(F\) transversally, we conclude that \(G\coloneqq\varphi(F)\simeq F\) and two general members in \(|G|\) meet along a smooth elliptic curve \(C^{\prime}\coloneqq\varphi_{*}(D_{i})\). We have \[-K_{Y}=\varphi_{*}(-K_{X})=\varphi_{*}(A+2F)=D^{\prime}+2G,\] where \(D^{\prime}\coloneqq\varphi(D_{j})\) with \(j\neq i\). Note that \(|-K_{Y}|\) has a non-zero fixed part and thus \(D^{\prime}\) is the fixed part: otherwise \(D^{\prime}\) is divisible by two in \(\operatorname{Pic}(Y)\) and thus \(D^{\prime}|_{G}\) is divisible by two in \(\operatorname{Pic}(G)\).
Since \(D^{\prime}|_{G}\) is a smooth elliptic curve, it is a section of the \(\mathbb{P}^{1}\)-bundle \(G\). This contradicts the fact that \(D^{\prime}|_{G}\) is divisible by two. Therefore, \[|-K_{Y}|=D^{\prime}+|2G|.\] Now on the surface \(G\), one has \(C^{\prime 2}=0\). We conclude that \(G^{2}\equiv C^{\prime}\) is non-zero, and \(G\) is nef as \(G\cdot C^{\prime}=0\). This cannot happen by Theorem 1.2. (ii) The fibration \(f\) factors as \(f=f^{\prime}\circ\varphi\), which gives a fibration \(f^{\prime}\colon Y\to\mathbb{P}^{1}\). Since there is no \((-1)\)-curve in \(F\), we deduce that \(E\) is contained in some fibre \(F_{0}\) of \(f\). We have \(F\cdot\ell=0\) and \(A\cdot\ell=1\).
* If \(A=2D\), then \(D\cdot\ell=\frac{1}{2}\), which contradicts the fact that \(D\) is a Cartier divisor.
* If \(A=D_{1}+D_{2}\), then \(A\cap E=C_{i}\), where \(i=1\) or \(2\). Hence \(E\) is contracted to a smooth elliptic curve by \(\varphi\) and thus \(-K_{Y}\) is nef. We have \(G\coloneqq\varphi(F)\simeq F\), \(D^{\prime}_{1}\coloneqq\varphi(D_{1})\simeq D_{1}\) and \(D^{\prime}_{2}\coloneqq\varphi(D_{2})\simeq D_{2}\). In this case, \(Y\) satisfies again Setup 3.1. Indeed, \(-K_{Y}\) is not semi-ample, since otherwise \(-K_{G}\) is semi-ample. The linear system \(|-K_{Y}|\) has a non-zero fixed divisor and thus \(D^{\prime}_{1}+D^{\prime}_{2}\) is the fixed divisor, since otherwise \(D^{\prime}_{1}+D^{\prime}_{2}\) is divisible by two in \(\operatorname{Pic}(Y)\) and thus \(-K_{G}\) is divisible by two in \(\operatorname{Pic}(G)\), which contradicts the fact that \(-K_{G}=C_{1}+C_{2}\) with \(C_{1}\) not linearly equivalent to \(C_{2}\).
Now we consider a contraction of fibre type. **Proposition 3.13**.: _In Setup 3.1, assume that there exists a non-birational \(K_{X}\)-negative extremal contraction \(\varphi\colon X\to Y\). Then \(|-K_{X}|=2D+|2F|\) and \(\varphi\colon X\to Y\) is a \(\mathbb{P}^{1}\)-bundle. Moreover, \(Y\) is isomorphic to \(\mathbb{P}^{2}\) blown up at \(9\) points such that \(-K_{Y}\) is nef and base-point-free (thus induces an elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\)), \(D=\mathbb{P}\big{(}\mathcal{O}_{Y}(K_{Y})\big{)}\), and \(X=\mathbb{P}(\mathcal{V})\), where \(\mathcal{V}\) is a rank-two vector bundle which is a non-split extension_ \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0.\] _Furthermore, \(f\) factors as \(X\stackrel{{\varphi}}{{\to}}Y\stackrel{{\pi}}{{\to}}\mathbb{P}^{1}\)._ Proof.: Let \(\Gamma\) be the \(K_{X}\)-negative extremal ray corresponding to the contraction \(\varphi\). Let \(\ell\) be a rational curve such that \([\ell]\) generates \(\Gamma\) and that \(-K_{X}\cdot\ell=l(\Gamma).\) By Proposition 2.11, \(\varphi\colon X\to Y\) is a conic bundle and \(Y\) is a smooth rational surface. Moreover, since \(F\) is not rational, \(X\) is not a product and \(f\) factors as \(f\colon X\xrightarrow{\varphi}Y\xrightarrow{\pi}\mathbb{P}^{1}\). We have \(F\cdot\ell=0\) and thus \(F=\varphi^{*}(R)\) with \(R\) an irreducible curve on \(Y\). On the other hand, as \(F=\mathbb{P}(\mathcal{E})\) where \(\mathcal{E}\) is a rank-2 vector bundle over a smooth elliptic curve, we deduce that \(R\) is a smooth elliptic curve and that the fibration \(\varphi|_{F}\) coincides with the \(\mathbb{P}^{1}\)-bundle structure \(\mathbb{P}(\mathcal{E})\to R\) on \(F\). Let \(\Delta\) be the discriminant locus of the conic bundle \(\varphi\). Then \(\Delta\) is contained in some special fibres of \(\pi\colon Y\to\mathbb{P}^{1}\).
As \(\varphi\) is an extremal contraction, by [10, page 83, Remark], every non-singular rational curve in \(\Delta\) must meet the other components of \(\Delta\) in at least two points. This implies that \(\Delta\) is empty. Therefore, \(\varphi\colon X\to Y\) is a \(\mathbb{P}^{1}\)-bundle, and \(-K_{X}\cdot\ell=A\cdot\ell=2\). We can write \(X\simeq\mathbb{P}(\mathcal{V})\), where \(\mathcal{V}\) is a rank-2 vector bundle over \(Y\), and we have \(\mathcal{V}|_{R}\simeq\mathcal{E}\).
* If \(A=2D\), then \(D\cdot\ell=1\). Since \(D\) is a rational section, we have an extension \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{I}_{Z}\otimes\det\mathcal{V}\to 0,\] where \(\mathcal{I}_{Z}\) is the ideal sheaf of a length-\(c_{2}(\mathcal{V})\) subscheme \(Z\) on \(Y\) and \(D=\mathbb{P}(\mathcal{I}_{Z}\otimes\det\mathcal{V})\). We have that \(-K_{Y}\) is nef by [11, Proposition 3.1], and that \(\pi\colon Y\to\mathbb{P}^{1}\) induces an elliptic fibration on \(Y\). Hence, \(Y\) is isomorphic to \(\mathbb{P}^{2}\) blown up at 9 points such that \(-K_{Y}\) is nef and semi-ample (with some multiple of \(-K_{Y}\) defining the elliptic fibration \(\pi\)), and thus \(-K_{Y}\sim\alpha R\), where \(R\) is a general elliptic fibre of \(\pi\) and \(\alpha\leq 1\). Hence, \((-K_{Y})^{2}=0.\) Now since \(D\) is isomorphic to \(\operatorname{Bl}_{Z}(Y)\) and \((-K_{D})^{2}=(D|_{D}+kF|_{D})^{2}=0\) as \(A^{3}=A^{2}\cdot F=0\), we deduce that \(Z=\emptyset\), \(c_{2}(\mathcal{V})=0\), and \(D\simeq Y\). Since \[-K_{X}\sim\varphi^{*}\big{(}-K_{Y}-c_{1}(\mathcal{V})\big{)}+2D,\] and \(-K_{X}\sim 2D+kF\), we deduce that \(\varphi^{*}\big{(}c_{1}(\mathcal{V})\big{)}\sim-(k-\alpha)F\). By the Grothendieck relation, one has \(D^{2}\sim D\cdot\varphi^{*}\big{(}c_{1}(\mathcal{V})\big{)}\sim-(k-\alpha)D\cdot F.\) Denote by \(e\) the smooth elliptic curve \(D\cap F\). Then, \[(-K_{X})|_{D}\sim(2D+kF)|_{D}\sim(2\alpha-k)e.\] Since \(-K_{X}\) (and thus \(-K_{X}|_{D}\)) is nef, and \(\alpha\leq 1\), \(k\geq 2\), we deduce \(\alpha=1\) and \(k=2\). Therefore, \(-K_{X}\sim 2D+2F\) and \(\mathcal{V}\) is an extension \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0.\]
* If \(A=D_{1}+D_{2}\), then \(D_{1}\cdot\ell=D_{2}\cdot\ell=1\) and \(D_{1},D_{2}\) are birational to \(Y\). We obtain a contradiction, since \(q(Y)=0\) and \(q(D_{1})=q(D_{2})=1\).
Finally, we are ready to prove Theorem 1.4. Proof of Theorem 1.4.: For part (A) of the theorem, it remains to show that \(\mathcal{V}\) is indecomposable. The other statements follow from Propositions 3.12 and 3.13. We first notice that \[\operatorname{Ext}^{1}(\mathcal{O}_{Y}(K_{Y}),\mathcal{O}_{Y})\simeq H^{1}(Y,\mathcal{O}_{Y}(-K_{Y}))=\mathbb{C},\] where the last equality follows from the Riemann-Roch formula. Now suppose by contradiction that \(\mathcal{V}=\mathcal{O}_{Y}\oplus\mathcal{O}_{Y}(K_{Y})\). Then the quotient \[\mathcal{V}\to\mathcal{O}_{Y}\to 0\] gives a section \(D^{\prime}\) of \(\varphi\colon X=\mathbb{P}(\mathcal{V})\to Y\) such that \(D^{\prime}\in|D-\varphi^{*}(K_{Y})|=|D+F|\). Therefore, \(-K_{X}\sim 2D^{\prime}\) and \(D^{\prime}\neq D\), which contradicts the fact that \(2D\) is the fixed divisor of \(|-K_{X}|\). Now we prove part (B) of the theorem. Let \(Y\) be \(\mathbb{P}^{2}\) blown up at \(9\) points such that \(-K_{Y}\) is nef, base-point-free and thus defines an elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\). Let \(R\) be a general fibre of \(\pi\). Then \(-K_{Y}\sim R\) and \(F=\varphi^{*}(R)\).
Since \(D\) is a tautological divisor of \(\mathbb{P}(\mathcal{V})=X\), we have \[-K_{X}\sim 2D+\varphi^{*}(-K_{Y}-\det(\mathcal{V}))\sim 2D+2F.\] _Step 1._ We show that \(F=\mathbb{P}(\mathcal{E})\), where \(\mathcal{E}\) is a rank-\(2\) vector bundle over \(R\), which is a non-split extension \[0\to\mathcal{O}_{R}\to\mathcal{E}\to\mathcal{O}_{R}\to 0. \tag{7}\] Indeed, as \(F=\varphi^{*}(R)\), we have \(F=\mathbb{P}(\mathcal{E})\) with \(\mathcal{E}\simeq\mathcal{V}|_{R}\). Restricting the short exact sequence (1) to \(R\), we obtain \[0\to\mathcal{O}_{R}\to\mathcal{V}|_{R}\to\mathcal{O}_{R}\to 0,\] as \(\mathcal{O}_{R}(K_{Y})\simeq\mathcal{O}_{R}\). Let \(s\) be a non-zero element in \(\operatorname{Ext}^{1}\big{(}\mathcal{O}_{Y}(K_{Y}),\mathcal{O}_{Y}\big{)}\simeq H^{1}\big{(}Y,\mathcal{O}_{Y}(-K_{Y})\big{)}\simeq H^{1}\big{(}Y,\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\). Since \(H^{1}\big{(}\mathbb{P}^{1},\pi_{*}(\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1))\big{)}=0\), \[H^{1}\big{(}Y,\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\simeq H^{0}\big{(}\mathbb{P}^{1},R^{1}\pi_{*}(\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1))\big{)}\] by the Leray spectral sequence. As \(\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\simeq\omega_{Y/\mathbb{P}^{1}}\), one has \[R^{1}\pi_{*}\big{(}\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\big{)}\simeq R^{1}\pi_{*}\omega_{Y/\mathbb{P}^{1}}\simeq\mathcal{O}_{\mathbb{P}^{1}}\] by [11, Proposition 7.6]. Hence, \(\operatorname{Ext}^{1}\big{(}\mathcal{O}_{Y}(K_{Y}),\mathcal{O}_{Y}\big{)}\simeq H^{0}\big{(}\mathbb{P}^{1},R^{1}\pi_{*}(\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1))\big{)}\simeq H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}})\simeq\mathbb{C}\). Denote by \(R_{t}\subset Y\) the fibre over \(t\in\mathbb{P}^{1}\). Then the natural map \[R^{1}\pi_{*}(\pi^{*}\mathcal{O}_{\mathbb{P}^{1}}(1))\otimes\mathbb{C}(t)\to H^{1}(R_{t},\mathcal{O}_{R_{t}})\simeq\operatorname{Ext}^{1}(\mathcal{O}_{R_{t}},\mathcal{O}_{R_{t}})\] is an isomorphism, see for example [10, III, Corollary 12.9]. Therefore, the non-zero element \(s\in\operatorname{Ext}^{1}\big{(}\mathcal{O}_{Y}(K_{Y}),\mathcal{O}_{Y}\big{)}\) corresponds to a non-zero element \(s_{t}\in\operatorname{Ext}^{1}(\mathcal{O}_{R_{t}},\mathcal{O}_{R_{t}})\). Thus \(\mathcal{E}\) is a non-split extension (7). _Step 2._ We show that \(-K_{X}\) is nef. Since \(-K_{X}\sim 2D+2F\) with \(F\) nef and \(D\) effective, any curve not contained in \(D\) meets \(-K_{X}\) non-negatively, so it is enough to check \(-K_{X}\cdot C\geq 0\) for any integral curve \(C\subset D\). Let \(C\subset D\) be an integral curve. We have \[D^{2}\sim\varphi^{*}\big{(}c_{1}(\mathcal{V})\big{)}\cdot D\sim-D\cdot F\] by the Grothendieck relation. Thus \[-K_{X}\cdot C=(2D+2F)\cdot C=(2D+2F)|_{D}\cdot C=0.\] _Step 3._ It remains to show that \(-K_{X}\) is not semi-ample and that \(2D\) is the fixed divisor of \(|-K_{X}|\). Since \((-K_{X})^{2}\sim(2D+2F)^{2}\sim(-4D\cdot F+8D\cdot F)=4D\cdot F\) is not numerically zero and \[(-K_{X})^{3}=8(D+F)\cdot D\cdot F=0,\] one has \(\nu(X,-K_{X})=2\). Since \(2D|_{F}\sim-K_{F}\) by the adjunction formula, and \(\kappa(F,-K_{F})=0\), we obtain \[|-mK_{X}|=2mD+|2mF|\] for any integer \(m\geq 1\). Thus, \(2D\) is the fixed divisor of \(|-K_{X}|\) and \[\kappa(X,-K_{X})=\kappa(X,F)=1,\] where the last equality follows from \(\mathcal{O}_{X}(F)\simeq f^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\). ## 4 The cone of effective curves In this section, we consider the threefold \(X\) defined as in Theorem 1.4 and we study the \(K_{X}\)-trivial curves. We will describe the cone of curves \(\overline{\mathrm{NE}}(X)\) and prove Proposition 1.5.
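Before describing the cone of curves, it is convenient to record in one place the intersection relations on \(X\) established in the proof of Theorem 1.4, which will be used repeatedly below; this is only a summary of the computations of Steps 2 and 3 above, using \(\mathcal{O}_{X}(F)\simeq f^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\) and the Grothendieck relation: \[F^{2}\equiv 0,\qquad D^{2}\sim-D\cdot F,\qquad(-K_{X})^{2}\equiv 4\,D\cdot F\not\equiv 0,\qquad(-K_{X})^{3}=0.\]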
In the whole section, we consider \(X\) defined as follows. **Setup 4.1**.: _Let \(Y\) be a minimal rational elliptic surface \(\pi\colon Y\to\mathbb{P}^{1}\), i.e. \(Y\) is isomorphic to the blow-up of \(\mathbb{P}^{2}\) at the \(9\) base points of a cubic pencil. Let \(\mathcal{V}\) be a rank-\(2\) vector bundle over \(Y\) defined by a non-split extension_ \[0\to\mathcal{O}_{Y}\to\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0\] _and let \(\varphi\colon X\coloneqq\mathbb{P}(\mathcal{V})\to Y\)._ _Then \(-K_{X}\) is nef and not semi-ample,_ \[|-K_{X}|=2D+|2F|,\] _where_
* _\(D\simeq Y\) is a section of the \(\mathbb{P}^{1}\)-bundle \(\varphi\colon X\to Y\) corresponding to the quotient \(\mathcal{V}\to\mathcal{O}_{Y}(K_{Y})\to 0\),_
* _\(F\) is a general fibre of the fibration \(f\coloneqq\pi\circ\varphi\colon X\to\mathbb{P}^{1}\), thus is a \(\mathbb{P}^{1}\)-bundle over the general fibre of the elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\)._
**Lemma 4.2**.: _In Setup 4.1, we have \(-K_{X}|_{D}=0\)._ Proof.: By the adjunction formula, \(-K_{D}\sim-K_{X}|_{D}-D|_{D}\), i.e. \(F|_{D}\sim(2D+2F)|_{D}-D|_{D}\). Hence \(D|_{D}\sim-F|_{D}\) and we have \(-K_{X}|_{D}=0\). **Remark 4.3**.: Since we want to locate all the \(K_{X}\)-trivial curves in the threefold \(X\) obtained in Theorem 1.4, in view of the above lemma, we describe here all the extremal rays of the cone of curves \(\overline{\mathrm{NE}}(D)\), see [13, Theorem 8.2]. By the Cone Theorem, the subcone \(\mathrm{NE}(D)\cap K_{D}^{<0}\) is closed and all the \(K_{D}\)-negative extremal rays of \(\overline{\mathrm{NE}}(D)\) are spanned by \((-1)\)-curves. Note that there are infinitely many \((-1)\)-curves on the surface \(D\). Now as \(-K_{D}\) is nef, it remains to describe \(\overline{\mathrm{NE}}(D)\cap K_{D}^{\perp}\). Since all extremal rays of \(\overline{\mathrm{NE}}(D)\cap K_{D}^{\perp}\) are spanned by either \(-K_{D}\) or a \((-2)\)-curve, and there are only finitely many \((-2)\)-curves (they are the irreducible components of reducible fibres of \(f|_{D}\colon D\to\mathbb{P}^{1}\)), the cone \(\mathrm{NE}(D)\cap K_{D}^{\perp}\) is rational polyhedral. We conclude that the cone \(\mathrm{NE}(D)\) is closed and that every extremal ray of \(\mathrm{NE}(D)\) is spanned by either a \((-1)\)-curve, or a \((-2)\)-curve (or \(-K_{D}\), if the elliptic fibration \(f|_{D}\colon D\to\mathbb{P}^{1}\) has no reducible fibre). **Lemma 4.4**.: _In Setup 4.1, the cone \(\operatorname{NE}(X)\cap K_{X}^{\perp}\) is closed and every extremal ray of this cone is spanned by one of the following curves:_
* _a \((-1)\)-curve on \(D\): in this case the contraction of the extremal ray is a simple flopping contraction, i.e. the flopping curve has normal bundle isomorphic to \(\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\);_
* _a \((-2)\)-curve on \(D\), i.e. an irreducible component of a reducible fibre of the elliptic fibration \(f|_{D}\colon D\to\mathbb{P}^{1}\): in this case the contraction of the extremal ray contracts a divisor isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) to a curve;_
* _a smooth elliptic fibre on \(D\) if the elliptic fibration \(f|_{D}\colon D\to\mathbb{P}^{1}\) has no reducible fibre: in this case there is no smooth rational curve class in the extremal ray._
Proof.: Consider the morphism \[\phi\colon\operatorname{NE}(D)\to\operatorname{NE}(X)\] induced by the inclusion \(D\hookrightarrow X\).
Since the cone \(\operatorname{NE}(D)\) is closed by Remark 4.3, we will show that \(\phi\big{(}\operatorname{NE}(D)\big{)}=\operatorname{NE}(X)\cap K_{X}^{\perp}\), which implies that the cone \(\operatorname{NE}(X)\cap K_{X}^{\perp}\) is also closed. We will follow the same strategy as in the proof of [13, Lemma 2.6]. Let \(\gamma\) be an integral \(K_{X}\)-trivial curve; then \[(2D+2F)\cdot\gamma=0. \tag{8}\] Let \(\gamma^{\prime}=\varphi(\gamma)\subset Y\) and let \(S\coloneqq\varphi^{-1}(\gamma^{\prime})\). Then \(S\) is a \(\mathbb{P}^{1}\)-bundle over the curve \(\gamma^{\prime}\). Denote by \(\ell\) a fibre of \(\varphi|_{S}\colon S\to\gamma^{\prime}\). _Case 1._ If \(F\cdot\gamma=0\), then \(\gamma\) is \(f\)-vertical and \(D\cdot\gamma=0\) by (8). In this case, \(\gamma^{\prime}\) is either an integral fibre of the elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\), or an irreducible component of some reducible fibre, i.e. a \((-2)\)-curve on \(Y\). Denote by \(\tilde{\gamma}\) the integral curve \(D\cap S\); then \(\tilde{\gamma}\simeq\gamma^{\prime}\) as \(\varphi|_{D}\colon D\to Y\) is an isomorphism. We will show that \([\gamma]\in\mathbb{R}_{+}[\tilde{\gamma}]\) in this case. (i) If \(\gamma^{\prime}\) is a \((-2)\)-curve on \(Y\), then \[\mathcal{V}|_{\gamma^{\prime}}\simeq\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}\] and thus \(S=\mathbb{P}(\mathcal{V}|_{\gamma^{\prime}})\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\). We have \[S\cdot\gamma=\varphi^{*}(\gamma^{\prime})\cdot\gamma=\gamma^{\prime}\cdot\varphi(\gamma)=\gamma^{\prime 2}=-2.\] By the adjunction formula, \(-K_{S}\cdot\gamma=-K_{X}\cdot\gamma-S\cdot\gamma=2\) and thus \(\gamma\) is the ruling of \(S\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\) other than \(\ell\). Since \(S\) is not nef, by [16, Lemma 2.5], there exists a \((K_{X}+S)\)-negative extremal ray \(\Gamma\) such that \(S\cdot\Gamma<0\). Moreover, since \(S\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(S\cdot\ell=0\), we deduce that \(\Gamma=\mathbb{R}_{+}[\gamma]\) and the contraction of \(\Gamma\) contracts \(S\) to a curve. Since \(\tilde{\gamma}\subset S\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(-K_{X}\cdot\tilde{\gamma}=-K_{X}\cdot\gamma=0\) and \(-K_{X}\cdot\ell>0\), we obtain that \([\gamma]\) and \([\tilde{\gamma}]\) are proportional. More precisely, since \(S\cdot\tilde{\gamma}=(S|_{D})^{2}=-2=S\cdot\gamma\), we have \([\gamma]=[\tilde{\gamma}]\). (ii) If \(\gamma^{\prime}\) is an integral fibre (i.e. a smooth elliptic curve or a singular rational curve) of \(\pi\colon Y\to\mathbb{P}^{1}\), then \(S\sim F\) and \(-K_{S}\sim(-K_{X}-S)|_{S}\sim 2D|_{S}\) is an effective and \(f\)-vertical 1-cycle. Since in Setup 4.1, the fixed divisor of \(|-K_{X}|\) has no \(f\)-vertical part, we have that \(D\) is \(f\)-relatively nef and we can apply Lemma 2.9 with \(A_{h}=2D\). Then \(D\cdot\tilde{\gamma}=0\). Thus \((-K_{S})^{2}=0\) and \(-K_{S}\) is nef. Since \(\rho(S)=2\) and \(-K_{S}\cdot\ell>0\), we have \([\gamma]\in\mathbb{R}_{+}[\tilde{\gamma}]\). Note that there is no smooth rational curve contained in \(S\) whose class is in \(\mathbb{R}_{+}[\tilde{\gamma}]\): assume that there exists such a rational curve \(\gamma_{rat}\simeq\mathbb{P}^{1}\). Since \(-K_{X}\cdot\gamma_{rat}=0\), the curve \(\gamma_{rat}\) is not contracted by \(\varphi\) and thus \(\varphi|_{\gamma_{rat}}\) maps \(\gamma_{rat}\) surjectively to its image.
But \(\varphi(\gamma_{rat})\subset\varphi(S)=\gamma^{\prime}\) is a smooth elliptic curve or an integral non-normal rational curve, so this cannot happen. Therefore, if the elliptic fibration \(\pi\colon Y\to\mathbb{P}^{1}\) has no reducible fibre (and thus the elliptic fibration \(f|_{D}\colon D\to\mathbb{P}^{1}\) has no reducible fibre), then every fibre of \(f|_{D}\colon D\to\mathbb{P}^{1}\) is in the same class \([\tilde{\gamma}]\) and there is no smooth rational curve whose class is in \(\mathbb{R}_{+}[\tilde{\gamma}]\). Note moreover that in this case, the class \([-K_{D}]=[\tilde{\gamma}]\in\operatorname{NE}(X)\) is not a non-negative linear combination of the classes of \((-1)\)-curves on \(D\). This is because, by considering the nef divisor \(F\subset X\), the class \(-K_{D}\) is \(F\)-trivial, but any \((-1)\)-curve on \(D\) is \(F\)-positive. _Case 2._ If \(F\cdot\gamma>0\), then \(\gamma\) is \(f\)-horizontal and \(D\cdot\gamma<0\) by (8). Thus \(\gamma\subset D\). By Lemma 4.2 and Remark 4.3, the curve \(\gamma\) is a non-negative linear combination of \((-1)\)-curves and \((-2)\)-curves (or \(-K_{D}\)) as curve classes. Now by _Case 1_, it suffices to show that the ray spanned by the class of a \((-1)\)-curve on \(D\) is an extremal ray of \(\operatorname{NE}(X)\) and to describe the corresponding contraction of this extremal ray. Assume that \(\gamma\) is a \((-1)\)-curve on \(D\). Since \(\varphi|_{D}\colon D\to Y\) is an isomorphism, \(\gamma^{\prime}\) is also a \((-1)\)-curve on \(Y\). Hence, \[\mathcal{V}|_{\gamma^{\prime}}\simeq\mathcal{O}_{\mathbb{P}^{1}}\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)\] and thus \(S=\mathbb{P}(\mathcal{V}|_{\gamma^{\prime}})\simeq\mathbb{F}_{1}\). We also have \[S\cdot\gamma=\varphi^{*}(\gamma^{\prime})\cdot\gamma=\gamma^{\prime}\cdot\varphi(\gamma)=\gamma^{\prime 2}=-1.\] By the adjunction formula, \(-K_{S}\cdot\gamma=-K_{X}\cdot\gamma-S\cdot\gamma=1\) and thus \(\gamma\) is the section with minimal self-intersection number of the ruled surface \(S\simeq\mathbb{F}_{1}\). Therefore, the normal bundle sequence \[0\to N_{\gamma/S}\to N_{\gamma/X}\to N_{S/X}|_{\gamma}\to 0\] splits, and we obtain \(N_{\gamma/X}\simeq\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\). Since \(S\) is not nef, by [10, Lemma 2.5], there exists a \((K_{X}+S)\)-negative extremal ray \(\Gamma\) such that \(S\cdot\Gamma<0\). Moreover, since \(S\) is a ruled surface and \(S\cdot\ell=0\), we deduce that \(\Gamma=\mathbb{R}_{+}[\gamma]\). Since \(S\cdot\gamma<0\) and \(\gamma\) is the minimal section of \(S\simeq\mathbb{F}_{1}\), the contraction of \(\Gamma\) is small and contracts precisely the curve \(\gamma\). The corresponding flop blows up the curve \(\gamma\) in \(X\) with exceptional divisor isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), and blows down the exceptional divisor in the other direction onto some smooth threefold \(X^{+}\). Combining the above two cases, we have shown that the class of any effective \(K_{X}\)-trivial curve is a non-negative linear combination of the classes of \((-1)\)-curves and \((-2)\)-curves (or \(-K_{D}\)) on \(D\). Together with Remark 4.3, we obtain \(\phi\big{(}\operatorname{NE}(D)\big{)}=\operatorname{NE}(X)\cap K_{X}^{\perp}\). Proof of Proposition 1.5.: Since \(\operatorname{NE}(X)\cap K_{X}^{\leq 0}\) is closed by the Cone Theorem and \(\operatorname{NE}(X)\cap K_{X}^{\perp}\) is closed by Lemma 4.4, the cone \(\operatorname{NE}(X)\) is closed. The other statements follow from Lemma 4.4.
**Remark 4.5**.: In Proposition 1.5, the cone \(\operatorname{NE}(X)\) has a unique \(K_{X}\)-negative extremal ray spanned by the class of a fibre of the \(\mathbb{P}^{1}\)-bundle \(X=\mathbb{P}(\mathcal{V})\to Y\). This is because \(-K_{X}\) is divisible by two in \(\operatorname{Pic}(X)\), which implies that the only possible birational \(K_{X}\)-negative contraction contracts a divisor to a smooth point by the classification of Mori, see [12, Section 3]; then we conclude by Propositions 2.10 and 2.11.
2308.05757
OrcoDCS: An IoT-Edge Orchestrated Online Deep Compressed Sensing Framework
Compressed data aggregation (CDA) over wireless sensor networks (WSNs) is task-specific and subject to environmental changes. However, the existing compressed data aggregation (CDA) frameworks (e.g., compressed sensing-based data aggregation, deep learning (DL)-based data aggregation) do not possess the flexibility and adaptivity required to handle distinct sensing tasks and environmental changes. Additionally, they do not consider the performance of follow-up IoT data-driven deep learning (DL)-based applications. To address these shortcomings, we propose OrcoDCS, an IoT-Edge orchestrated online deep compressed sensing framework that offers high flexibility and adaptability to distinct IoT device groups and their sensing tasks, as well as high performance for follow-up applications. The novelty of our work is the design and deployment of an IoT-Edge orchestrated online training framework over WSNs by leveraging a specially-designed asymmetric autoencoder, which can largely reduce the encoding overhead and improve the reconstruction performance and robustness. We show analytically and empirically that OrcoDCS outperforms the state-of-the-art DCDA on training time, significantly improves flexibility and adaptability when distinct reconstruction tasks are given, and achieves higher performance for follow-up applications.
Cheng-Wei Ching, Chirag Gupta, Zi Huang, Liting Hu
2023-08-05T04:19:35Z
http://arxiv.org/abs/2308.05757v1
# OrcoDCS: An IoT-Edge Orchestrated Online Deep Compressed Sensing Framework ###### Abstract Compressed data aggregation (CDA) over wireless sensor networks (WSNs) is task-specific and subject to environmental changes. However, the existing compressed data aggregation (CDA) frameworks (e.g., compressed sensing-based data aggregation, deep learning (DL)-based data aggregation) do not possess the flexibility and adaptivity required to handle distinct sensing tasks and environmental changes. Additionally, they do not consider the performance of follow-up IoT data-driven deep learning (DL)-based applications. To address these shortcomings, we propose OrcoDCS, an IoT-Edge orchestrated online deep compressed sensing framework that offers high flexibility and adaptability to distinct IoT device groups and their sensing tasks, as well as high performance for follow-up applications. The novelty of our work is the design and deployment of an IoT-Edge orchestrated online training framework over WSNs by leveraging a specially-designed asymmetric autoencoder, which can largely reduce the encoding overhead and improve the reconstruction performance and robustness. We show analytically and empirically that OrcoDCS outperforms the state-of-the-art DCDA on training time, significantly improves flexibility and adaptability when distinct reconstruction tasks are given, and achieves higher performance for follow-up applications. Deep Compressed Sensing, Internet of Things, IoT-Edge Orchestration, Online Training, Deep Learning, Data Reconstruction, Asymmetric Autoencoder ## I Introduction Internet-of-Things (IoT) networks typically consist of a vast number of devices, sensors, and actuators that generate a constant stream of data. Before transmitting this data to the cloud center for supporting various IoT applications, it needs to be gathered and aggregated at edge nodes. Compressed data aggregation (CDA) offers an efficient way to reduce the volume of collected data by leveraging compressed sensing techniques [1]. Compressed sensing provides two mappings that separate encoding and decoding into independent measurement and reconstruction processes, facilitating the efficient communication of sensing data. The traditional CDA framework consists of three stages. First, data aggregators collect raw sensing data from IoT devices via wireless sensor networks. Second, an encoding mapping is applied to obtain measurements of the raw sensing data, which have far smaller dimensions than the raw sensing data, and the measurements are transmitted from data aggregators to edge servers. Finally, edge servers apply a decoding mapping to reconstruct the sensing data using the measurements. While CDA has been shown to improve transmission efficiency in sensor networks [1, 2], the decoding mappings in traditional CDA frameworks typically use computationally intensive algorithms because the reconstruction problem from measurements is a convex optimization problem [2, 3]. Moreover, the reconstruction performance is highly limited by the dimension and sparsity of measurements. To address this issue, deep CDA (DCDA) has been proposed, which incorporates end-to-end deep learning (DL) models into traditional CDA. DCDA uses deep learning models, such as autoencoders, to replace the traditional encoding and decoding mappings [3, 4]. To train an autoencoder, DCDA leverages historical raw sensing data to train a neural network-based encoder and decoder in the cloud.
The encoder learns to map the raw sensing data into latent spaces, while the decoder learns to map the latent spaces into reconstructed data. The learning objective for the autoencoder is to minimize the L2 norm-based reconstruction error between the original data and the reconstructed data. Instead of using randomly generated Gaussian or Bernoulli measurements, DCDA employs a learned encoder to extract latent features from the original data and a learned decoder to reconstruct data with these features. As a result, the reconstruction performance is no longer limited by the dimension and sparsity of the measurements [3]. Existing DCDA frameworks typically utilize an offline-training scheme, where a predefined deep learning model is trained in the cloud on historical data with predefined hyperparameters. This approach transfers the entire training overhead from IoT devices to the cloud, but it has two downsides. First, it lacks flexibility. IoT devices in a large sensor network may have different sensing tasks [5, 6], each with distinct data characteristics requiring specific deep learning models and hyperparameters to achieve better reconstruction performance. An offline-training scheme cannot provide distinct models for various tasks quickly. Second, it has low adaptivity. An offline-training scheme only utilizes historical sensing data to train a deep learning model, and it cannot adapt to new sensing data arising from environmental changes [7, 8]. A new model adapted to new data must be trained from scratch in the cloud. Additionally, existing DCDA frameworks do not consider follow-up applications, which increasingly rely on IoT data-driven DL models [9, 10]. The goal of the existing DCDA frameworks is to minimize the reconstruction error between original and reconstructed data, which may not improve the performance of DL-based follow-up applications. Deploying a DCDA framework that enables online training over wireless sensor networks and improves follow-up IoT data-driven DL-based applications poses two main challenges. The first challenge is how to perform online training on IoT devices with limited computational resources and short battery life. Although an online training scheme does not require IoT devices to transmit their sensing data to the cloud, performing computationally intensive model training solely on IoT devices is nearly impossible [3, 11]. The second challenge is how to enhance the performance of DL-based follow-up applications. Typically, techniques such as data augmentation and adversarial examples are employed to improve DL models [12, 13]. However, since it is generally not known a priori which DL models will be used for follow-up applications and how, it is intractable to specialize the reconstructions for them. This paper introduces OrcoDCS, an online IoT-Edge orchestrated deep compressed sensing framework. OrcoDCS is innovative in that it leverages IoT-Edge orchestration to implement online training for DCDA. Unlike existing DCDA frameworks, OrcoDCS emphasizes the important role of edge servers. It involves IoT devices and edge servers working together to train an asymmetric autoencoder in an online manner. To reduce the training overhead on IoT devices, IoT devices focus on training a shallow encoder while edge servers handle the training overhead of a deep decoder. Furthermore, OrcoDCS integrates Gaussian noise into the online training process to enhance the robustness of reconstructions. The remainder of this paper is organized as follows: Section II elaborates on the problem this paper aims to address.
Section III details the design of OrcoDCS. In Section IV, we present the experiment results for OrcoDCS. Finally, Section V concludes the paper and discusses future directions for research. ## II Problem Formulation A cluster consisting of \(N\) IoT devices and a data aggregator is considered. Each IoT device \(i\in[N]\) regularly transmits its raw sensing data \(x_{i}\) to the data aggregator, which then forwards it to the edge server for further analysis. _Our primary objective is to develop an encoder that minimizes the transmission cost of raw sensing data from the data aggregator to the edge server_. Additionally, edge servers usually train DL models (such as object classifiers) for follow-up data analysis, so _our secondary objective is to design a decoder that maximizes reconstruction performance and improves the performance of the follow-up DL models_. Lastly, given the limited computational resources and battery life of IoT devices, _our last objective is to minimize the training overhead on data aggregators_. ## III Design The OrcoDCS framework is designed to meet three primary goals: * Low overhead. It achieves low training overhead on IoT devices. * High robustness. It generates more robust reconstructions for follow-up DL-based applications. * Adaptivity. It can adapt to different reconstruction tasks and changes in the sensing environment of sensor networks. To achieve these goals, the OrcoDCS framework has three major procedures: _intra-cluster raw data aggregation_, _IoT-Edge orchestrated asymmetric autoencoder_, and _data aggregation of OrcoDCS over IoT networks_, as illustrated in Figure 1. ### _Intra-cluster Raw Data Aggregation_ We focus on a cluster consisting of \(N\) IoT devices and a data aggregator, where the IoT devices must send raw sensing data to the data aggregator without compression, so the data aggregator can perform training procedures with edge servers using the data. To aggregate the raw sensing data from the IoT devices to the data aggregator, we employ multi-hop hybrid compressed sensing aggregation [1]. This technique generates a data aggregation tree with the data aggregator as the root, spanning the \(N\) IoT devices. Each node transmits its data toward the root along the data aggregation tree, and parent nodes aggregate and forward their child nodes' data to the next hops until the root receives the data of all \(N\) nodes. The multi-hop hybrid compressed sensing aggregation technique has two benefits: (i) reducing the energy consumption of nodes farther from the cluster head, and (ii) mitigating collisions, thereby enhancing network efficiency. ### _IoT-Edge Orchestrated Asymmetric Autoencoder_ **Encoder.** OrcoDCS aims to minimize the transmission cost by developing an encoder that is suitable for IoT devices with limited computational resources. To achieve this goal, the data aggregator employs an encoder consisting of a single fully-connected layer that transforms the raw sensing data from the \(N\) IoT devices into \(M\)-dimensional latent vectors. Let \(X=[x_{1},x_{2},\cdots,x_{N}]^{T}\) denote the stacked vector of raw sensing data from the \(N\) IoT devices. The data aggregator applies the following encoding mapping to transform the raw sensing data into latent vectors: \[y=\sigma(W_{e}\cdot X+b_{e}), \tag{1}\] where \(W_{e}\in\mathbb{R}^{M\times N}\) is an \(M\)-by-\(N\) weight matrix, \(b_{e}\in\mathbb{R}^{M}\) is an \(M\)-dimensional bias vector, \(\sigma(\cdot)\) represents the activation function, and \(y\in\mathbb{R}^{M}\) is an \(M\)-dimensional latent vector.
It is important to note that the latent dimension \(M\) is a hyperparameter that can be adjusted to suit the reconstruction tasks and the desired compression ratio, providing higher adaptivity compared to DCDA. By applying the mapping, the data aggregator can attain a stacked latent vector \(Y=[y_{1},y_{2},\cdots,y_{M}]^{T}\) from the raw sensing data \(X\). Fig. 1: The OrcoDCS architecture. First, IoT devices send raw sensing data \(X\) to the data aggregator through performing the intra-cluster raw data aggregation (1). Next, the data aggregator and the edge server train an asymmetric autoencoder using the training procedure of the IoT-Edge orchestrated asymmetric autoencoder (2). Once the training procedure finishes, IoT devices can send compressed data \(Y\) to the data aggregator through the data aggregation of OrcoDCS over IoT networks (3). **Latent vectors with Gaussian noise.** OrcoDCS aims to maximize the reconstruction performance and improve the performance of follow-up applications. To enhance the robustness of the decoder, the approach taken is inspired by [14, 15], where the decoder is trained with noise to improve the ability to reconstruct diverse data. In OrcoDCS, Gaussian noise is added to the latent vectors to further improve the robustness of the reconstructions. This is achieved with the following equation: \[\hat{Y}=Y+\mathcal{N}(0,\sigma^{2}), \tag{2}\] where \(\mathcal{N}(0,\sigma^{2})\) is an \(M\)-dimensional Gaussian noise vector with a mean of \(0\) and a variance of \(\sigma^{2}\). It is important to note that the mean of the Gaussian noise is set to \(0\) to ensure that the latent vectors are not biased. **Decoder.** The decoder, which runs on edge servers, is responsible for decoding the latent vectors. Similar to equation (1), the edge server applies the following mapping to reconstruct the raw sensing data: \[X_{r}=\sigma(W_{d}\cdot\hat{Y}+b_{d}), \tag{3}\] where \(W_{d}\in\mathbb{R}^{N\times M}\) is an \(N\)-by-\(M\) weight matrix, \(b_{d}\in\mathbb{R}^{N}\) is an \(N\)-dimensional bias vector, \(\sigma(\cdot)\) represents the activation function, and \(X_{r}\) is the reconstructed data. By applying the mapping, the edge server can attain a set of reconstructed sensing data \(X_{r}\) from the latent vectors with Gaussian noise \(\hat{Y}\). It is important to note that equation (3) represents a decoding mapping applied to a one-layer fully-connected decoder. However, for different reconstruction tasks, the depth and structure of the decoder can be extended to achieve better performance. **Reconstruction error.** The reconstruction error for an autoencoder can be intuitively measured by the L2 norm of the difference between the raw sensing data \(X\) and the reconstructed data \(X_{r}\) [3]. However, this might not satisfy our second objective, so we instead use the Huber loss [16] as the reconstruction error for OrcoDCS. Let \(\|\cdot\|_{1},\|\cdot\|_{2}\) denote the L1 norm and L2 norm operators, respectively. The Huber loss is defined as follows: \[\mathcal{L}(X,X_{r})=\begin{cases}\frac{1}{2}\|X-X_{r}\|_{2}^{2},&\text{if }\|X-X_{r}\|_{1}\leq\delta,\\ \delta\|X-X_{r}\|_{1}-\frac{1}{2}\delta^{2},&\text{otherwise},\end{cases} \tag{4}\] where \(\delta\) is a hyperparameter that controls the transition point between the quadratic and linear regimes of the loss. The Huber loss combines the advantages of the L1 and L2 norms, which makes the reconstructions more robust [16, 17].
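To make equations (1)-(4) concrete, the following is a minimal PyTorch sketch of the forward pass of the asymmetric autoencoder. The module name, the ReLU activation, and the concrete sizes (\(N=100\), \(M=16\), \(\sigma=0.1\), \(\delta=1\)) are illustrative assumptions rather than choices prescribed by OrcoDCS, and PyTorch's `nn.HuberLoss` applies the Huber penalty element-wise, which is a common practical stand-in for the vector-norm form of equation (4).

```python
import torch
import torch.nn as nn

class AsymmetricAE(nn.Module):
    """Sketch of the asymmetric autoencoder of equations (1)-(3)."""
    def __init__(self, n_devices: int = 100, latent_dim: int = 16, noise_std: float = 0.1):
        super().__init__()
        # Eq. (1): a single fully-connected layer, cheap enough for the data aggregator.
        self.encoder = nn.Sequential(nn.Linear(n_devices, latent_dim), nn.ReLU())
        # Eq. (3): a one-layer decoder here; the edge server may use a deeper one.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, n_devices), nn.ReLU())
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.encoder(x)                               # latent vector Y
        y_hat = y + self.noise_std * torch.randn_like(y)  # eq. (2): Y + N(0, sigma^2)
        return self.decoder(y_hat)                        # reconstruction X_r

model = AsymmetricAE()
criterion = nn.HuberLoss(delta=1.0)   # eq. (4), element-wise Huber penalty
x = torch.rand(32, 100)               # a batch of stacked raw readings X
loss = criterion(model(x), x)         # reconstruction error L(X, X_r)
```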
We can then train the asymmetric autoencoder with stochastic gradient descent to minimize the average Huber loss-based reconstruction error between the raw sensing data and the reconstructed data, formulated as follows: \[\min_{\theta_{e},\theta_{d}}\sum_{i\in[N]}\mathcal{L}(x^{i},x_{r}^{i}), \tag{5}\] where \(x^{i},x_{r}^{i}\) are the raw sensing data and reconstructed data from IoT device \(i\), and \(\theta_{e},\theta_{d}\) are the parameters (i.e., weight matrices and bias vectors) of the encoder and decoder, respectively. The training objective in equation (5) is to find the parameters of the encoder and the decoder that minimize this error. **Training procedure.** OrcoDCS adopts an IoT-Edge orchestration process to train the asymmetric autoencoder. Initially, the data aggregator encodes raw sensing data into latent vectors using equation (1) and adds Gaussian noise using equation (2). Then, the latent vectors are sent to the edge server, which utilizes equation (3) to generate reconstructed sensing data. Subsequently, the edge server sends the reconstructed data back to the data aggregator, which calculates the reconstruction error using equation (4). Finally, the edge server updates its decoder and the encoder in the data aggregator based on the reconstruction error. This iterative process enables efficient and collaborative training of the asymmetric autoencoder while incorporating Gaussian noise to enhance the robustness of the reconstruction. ### _Data Aggregation of OrcoDCS over IoT Networks_ **Distributing the trained encoder from data aggregators to IoT devices.** Compressed sensing relies on sampling to determine which data to transfer, making it necessary to send the trained encoder to IoT devices. Since the trained encoder contains the mapping for all \(N\) IoT devices, only a portion of the trained encoder needs to be sent to each specific IoT device. Each IoT device \(i\) only requires the \(i\)-th column vector \(W_{e}^{i}\) of \(W_{e}\) and the corresponding bias \(b_{e}^{i}\) from the trained encoder to compress its raw sensing data. Thus, the individual columns can be distributed from the data aggregator to each IoT device through a single round of broadcast over wireless sensor networks, ensuring efficient distribution of the necessary encoder information. **Intra-cluster compressed data aggregation with the trained encoder.** OrcoDCS adopts hybrid compressed sensing-based aggregation to aggregate the sensing data from IoT devices to the data aggregator in compressed form. Specifically, assuming that IoT device \(i\) has raw sensing data \(x_{i}\), it receives the column vector \(W_{e}^{i}\) and the bias \(b_{e}^{i}\) from the data aggregator and computes the \(i\)-th element of the latent vector using the following equation: \[y_{i}=\sigma(W_{e}^{i}\cdot x_{i}+b_{e}^{i}). \tag{6}\] The resulting element is sent to another IoT device, say \(j\), which applies the same equation with its own column vector to obtain the \(j\)-th element of the latent vector \(y_{j}\), stacks the two elements, and sends them to the next IoT device. This procedure continues until the complete latent vector \(Y=[y_{1},y_{2},\cdots,y_{M}]^{T}\) covering all \(N\) IoT devices is aggregated at the data aggregator. By using this approach, OrcoDCS enables efficient and collaborative computation of the latent vectors while minimizing the transmission cost of raw sensing data.
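Putting the pieces of Sections III-B and III-C together, one round of the orchestrated training procedure can be sketched as follows, reusing `model` and `criterion` from the previous sketch. In a real deployment the latent vectors travel uplink and the encoder gradients travel downlink over the network; simulating both parties in one process, as well as the choice of SGD optimizer, learning rate, batch size, and round count, are assumptions made only for illustration.

```python
import torch

# The data aggregator owns the encoder; the edge server owns the decoder.
enc, dec = model.encoder, model.decoder
opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

def training_round(x_batch: torch.Tensor) -> float:
    """One iteration of the objective in eq. (5)."""
    opt.zero_grad()
    y = enc(x_batch)                                   # at the aggregator, eq. (1)
    y_hat = y + model.noise_std * torch.randn_like(y)  # eq. (2); sent uplink to the edge
    x_r = dec(y_hat)                                   # at the edge server, eq. (3)
    loss = criterion(x_r, x_batch)                     # reconstruction error, eq. (4)
    loss.backward()                                    # gradients flow back to the encoder
    opt.step()                                         # both parties update their parameters
    return loss.item()

for _ in range(100):                                   # repeat until the error is low enough
    training_round(torch.rand(32, 100))
```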
### _Model Fine-Tuning_

To ensure the effectiveness of the trained autoencoder in the presence of potentially varied sensing data encountered by IoT devices, it is important to monitor the reconstruction performance. Therefore, the edge server periodically calculates the reconstruction error by comparing the reconstructed data with the original data. If the reconstruction error exceeds a predefined threshold, the training procedure is relaunched to further improve the performance of the asymmetric autoencoder. This monitoring and relaunching approach helps maintain the reconstruction performance, which is crucial for the subsequent data analysis.

### _Overhead Analysis_

The overhead of intra-cluster raw data aggregation can be considered almost negligible for two reasons. First, the data aggregator is usually chosen based on its proximity to other IoT devices within the same cluster, allowing each IoT device to communicate with the aggregator over a short distance via wireless sensor networks [18, 19, 20]. Second, the raw data aggregation only needs to be performed once before subsequent training procedures. In training IoT-Edge orchestrated asymmetric autoencoders, data aggregators are responsible for collecting raw sensing data from IoT devices and collaborating with the edge server in training the encoder. The computational and transmission overhead is minimal, as the encoder has only a single dense layer by design and the dimension of the latent vectors sent to the edge server is much smaller than that of the original data. Meanwhile, the edge server is responsible for training the decoder and sending the reconstructions back to the data aggregators for evaluation of reconstruction errors. With a higher computational capacity compared to IoT devices, the edge server is well-equipped to handle a significant amount of the training overhead [21, 22]. Additionally, the edge server's communication with data aggregators occurs via downlink, which is much less resource-intensive compared to uplink communication [21, 22].

## IV Evaluation

### _Experiment Setup_

**Datasets and models.** We run two categories of reconstruction tasks with two real-world datasets to evaluate our approach.

* _Grayscale images:_ the MNIST dataset [23] consists of 60,000 grayscale images of 10 different classes of digits.
* _Colorful images:_ the GTSRB dataset [24] consists of 51,839 colorful traffic-sign images from 43 classes. These images have varying light conditions and colorful backgrounds.

For OrcoDCS, a single dense layer is employed for both the encoder and the decoder, with the dimension of the latent vectors set at 128 for MNIST and 512 for GTSRB. For follow-up DL-based applications, we use the data reconstructed by OrcoDCS and DCSNet to train a simple 2-layer convolutional neural network as a classifier.

**Baseline.** DCSNet [3] is used as our baseline. It is an offline DCS framework that features a fixed model structure (a decoder that consists of 4 convolutional layers) and a predefined dimension of latent vectors (1024). To compare its performance with our approach, we carry out online training of DCSNet with the same model structure, but with only 50% of the training data made accessible to it by default.

**Metrics.** Our focus is on evaluating the _quality of the reconstructions_ and the _time-to-loss performance_ for two distinct reconstruction tasks. We assess the _transmission cost_ for various numbers of data items being transmitted. Additionally, to determine the impact of the reconstructed data on classifier performance, we quantify the _model accuracy and loss_ of classifiers trained using the reconstructed data.
### _Quality of the Reconstructions_

Figure 2 shows the reconstruction results of OrcoDCS and DCSNet on the MNIST and GTSRB datasets. It is evident that OrcoDCS reconstructs sharper and more distinguishable data than DCSNet across both datasets. This is due to three key factors: first, OrcoDCS enables access to a larger training dataset through online training between IoT devices and the edge server. Second, OrcoDCS is able to select a model structure that is best suited to the specific reconstruction task. Lastly, OrcoDCS incorporates a moderate amount of Gaussian noise to increase the learning space of the decoder.

Fig. 2: Reconstruction results of OrcoDCS and DCSNet for three digits in MNIST (upper line) and three traffic signs in GTSRB (lower line). Clearly, the reconstruction results produced by OrcoDCS are much clearer and more similar to the original images when compared to those generated by DCSNet.

Fig. 3: Transmission cost for OrcoDCS and DCSNet. OrcoDCS can save up to \(10\times\) transmission cost compared to DCSNet.

### _Transmission Cost_

Figure 3 compares the transmission cost of OrcoDCS and DCSNet for the transmission of 1,000 and 10,000 images of MNIST and GTSRB. As seen in the figure, OrcoDCS saves up to \(10\times\) the amount of transmitted bytes compared to DCSNet. This advantage is achieved through the capability to determine the ideal dimension of the latent space for each specific reconstruction task, whereas DCSNet only applies a given dimension of latent vectors to every reconstruction task.

### _Time-to-Loss Performance_

IoT devices are typically limited in terms of power, making it essential to minimize the training overhead. The breakdown of the time-to-loss performance for two reconstruction tasks is shown in Figure 4, which highlights that OrcoDCS can achieve lower loss more quickly. This is because OrcoDCS can offer more energy-efficient and fast-converging models and hyperparameters for different reconstruction tasks by utilizing online training.

### _Model Accuracy and Loss of Classifiers_

Figure 5 presents the training performance of classifiers trained with the data reconstructed by OrcoDCS and DCSNet, where DCSNet-50% indicates that 50% of the training data is accessible to DCSNet. It is clear that classifiers trained on data generated by OrcoDCS attain higher accuracy. These improvements can be attributed to two main factors: (i) the addition of Gaussian noise to the latent spaces by OrcoDCS leads to the generation of more diverse data by the decoder, and (ii) OrcoDCS has access to a larger set of training data.

### _Sensitivity Analysis_

**Impact of dimensions of latent vectors.** We evaluate OrcoDCS across different dimensions of latent vectors. We observe that OrcoDCS achieves better time-to-loss performance than DCSNet across different dimensions of latent vectors (Figure 6), and that adding more dimensions to the latent vectors yields diminishing returns. This is because having too many dimensions (i) can cause the decoder to overfit the input data, and (ii) can result in longer training time due to the increased amount of data that needs to be transmitted between data aggregators and edge servers. In comparison to DCSNet, OrcoDCS offers greater flexibility in the dimensions of latent vectors, allowing for better customization to suit various reconstruction tasks.

**Impact of amounts of noise added to latent vectors.** To improve the robustness of reconstructions, OrcoDCS adds Gaussian noise to latent vectors.
We evaluate OrcoDCS's performance under noisy latent vectors and compare it with its counterparts. We add noise from the Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) and test OrcoDCS with different values of \(\sigma\). In Figure 7, we report the time-to-loss performance after adding various amounts of noise to the latent vectors. Our results demonstrate that OrcoDCS outperforms its counterparts even when the noise is substantial. Moreover, an appropriate amount of noise can indeed help to achieve lower loss faster. In contrast to DCSNet, OrcoDCS offers a tunable noise level that can be tailored to different reconstruction tasks.

**Impact of number of layers of the decoder.** We evaluate the impact of the number of layers of the decoder on the performance of OrcoDCS. Our experiments show that OrcoDCS achieves better time-to-loss performance compared to its counterparts across different numbers of decoder layers (as depicted in Figure 8). However, increasing the number of layers in the decoder can lead to diminishing returns in terms of performance for both datasets. This is because adding more layers to the decoder (i) may overfit the latent vectors, resulting in poor reconstruction performance, and (ii) can result in longer training times due to the increased number of layers in the decoder.

## V Conclusions

Existing DCDA frameworks lack the flexibility and adaptability required to handle distinct sensing tasks and environmental changes in online-training environments. To address these shortcomings, we proposed OrcoDCS, an IoT-Edge orchestrated online training framework that offers high flexibility and adaptability to sensing data that varies with environmental changes. OrcoDCS leverages a specially-designed asymmetric autoencoder and IoT-Edge orchestration to provide an online training scheme between IoT devices and edge servers, which significantly improves flexibility and adaptability and achieves high performance for follow-up applications. A potential avenue for future work is the optimization of training overhead on edge servers when a large number of data aggregators need to perform OrcoDCS training procedures. Our approach has the potential to scale up to wireless sensor networks consisting of millions of IoT devices and task-specific autoencoders by exploring IoT-Edge-Cloud orchestration for scalability. We believe that OrcoDCS represents a significant step towards more flexible, adaptive, and high-performing DCDA in wireless sensor networks.

## Acknowledgment

This work is supported by the National Science Foundation (NSF-OAC-23313738, NSF-CAREER-23313737, NSF-SPX-2202859).
2303.08101
Impact of Covid-19 Pandemic on Water Pollution in Indian Rivers -- A Case Study
Some of the important critical parameters for assessing the water quality like pH (Hydrogen ion concentration), DO (Dissolved Oxygen), BOD (Biological Oxygen Demand), etc., were monitored at different locations in some major Indian rivers. The results obtained from the study reveal that the critical parameters had increasing values in some monitoring locations, decreasing values, and no variation in values at some other places. It is recommended to have a pH value above 7, higher values of DO, and lower values of BOD & FCC (Faecal Coliform Content) for improved water quality. Overall, the water quality improved in most Indian rivers. There was no discharge of industrial wastes, hotel/restaurant wastes, immersing of idols during religious festivals, etc., to the rivers during the COVID-19 lockdown. Therefore, enforcement of strict regulations by the Government of India for disposal of wastes produced from industrial & domestic activities can significantly reduce the water pollution levels in the Indian rivers.
Amardeepak Mahadikar, Krishna Anand, Chandra S. Reddy
2023-01-17T06:53:16Z
http://arxiv.org/abs/2303.08101v1
# Impact of Covid-19 Pandemic on Water Pollution in Indian Rivers - A Case Study

###### Abstract

Some of the important critical parameters for assessing the water quality like pH (Hydrogen ion concentration), DO (Dissolved Oxygen), BOD (Biological Oxygen Demand), etc., were monitored at different locations in some major Indian rivers. The results obtained from the study reveal that the critical parameters had increasing values in some monitoring locations, decreasing values, and no variation in values at some other places. It is recommended to have a pH value above 7, higher values of DO, and lower values of BOD & FCC (Faecal Coliform Content) for improved water quality. Overall, the water quality improved in most Indian rivers. There was no discharge of industrial wastes, hotel/restaurant wastes, immersing of idols during religious festivals, etc., to the rivers during the COVID-19 lockdown. Therefore, enforcement of strict regulations by the Government of India for disposal of wastes produced from industrial & domestic activities can significantly reduce the water pollution levels in the Indian rivers.

**Received:** 16 September 2022 **Accepted:** 16 October 2022

_Keywords:_ _BOD, COVID-19, DO, FCC, pH_

## 1 Introduction

COVID-19 stands for Corona Virus Infectious Disease, whose year of occurrence is 2019. It is caused by the pathogen Severe Acute Respiratory Syndrome Corona Virus-2 (SARS-CoV-2), belonging to the \(\beta\)-subgroup of the Corona virus family. The disease was first diagnosed in Wuhan city, Hubei province of China, and later spread to over 220 countries and territories around the world. The Government of India imposed a nationwide lockdown from midnight of 24th March to restrict the spread of the deadly Corona virus disease COVID-19. The World Health Organization (WHO) declared it a global pandemic of international concern on 30th Jan 2020. It is found that human-to-human transmission occurs mainly by close contact with an infected person through coughing, sneezing, and respiratory droplets. However, there are reported cases of transmission by viral shedding via faeces [1, 2]. Some of the common symptoms of COVID-19 infection are fever, headache, fatigue, dry cough, respiratory distress, vomiting, diarrhea, etc.

Water pollution is a major global problem that gives rise to water-borne diseases such as cholera, typhoid, hepatitis, etc. [3, 4]. As per estimates, every year around 1.7 million children below five years of age die globally, and 38 million Indians suffer from various water-borne diseases. Before the COVID-19 lockdown, major Indian rivers and lakes were heavily polluted due to human activities and were becoming difficult to treat [5, 6]. As reported by the Central Pollution Control Board (CPCB) of India, 40 million litres of wastewater enter the rivers and other water bodies every day, of which only 37% is treated adequately [7]. Rapid urbanization has caused contamination of 70% of freshwater sources in India, making them unfit for consumption. The imposition of lockdown to contain the spread of the virus resulted in restrictions on public transportation and on commercial and industrial activities, which positively impacted the environment and were a blessing in disguise to Mother Nature [8-10]. Water quality improved across the river environment because reduced economic activity resulted in fewer pollutants being discharged into the rivers [11-13].
Ganga river water quality showed significant improvement during the COVID-19 lockdown and was found suitable for bathing at most monitoring stations. The enforcement of the nationwide lockdown also improved the health of other major Indian rivers. In this paper, the authors study the effect of COVID-19 and its consequences on geographical conditions, focusing on water pollution in Indian rivers and the changes in its characteristics. Section 2 discusses the methodology implemented to study the characteristic behavior of the river water; a simple statistical model is presented to measure the parameters. Section 3 presents the results and analysis of the work, in graphical and tabular forms, for the periods before and during the COVID-19 lockdown; it shows that natural conditions improved considerably during this period. Section 4 concludes the paper.

## 2 Material and Methods

Consider \(N\) independent samples \(X_{n}\), \(n=1,2,\ldots,N\), representing independently distributed random variables tested in the experiments. The mean value for this experiment can be expressed as an estimate of the average of all samples [14-18]:

\[\hat{X}_{N}=\frac{1}{N}\sum_{n=1}^{N}X_{n}.\]

This sample-mean estimator is unbiased [19-21], since its expectation equals the true mean \(\bar{X}\):

\[E\left[\hat{X}_{N}\right]=E\left[\frac{1}{N}\sum_{n=1}^{N}X_{n}\right]=\frac{1}{N}\sum_{n=1}^{N}E\left[X_{n}\right]=\bar{X}.\]

The variance of the mean estimator is then expressed as:

\[\sigma_{\hat{X}_{N}}^{2}=E\left[\left(\hat{X}_{N}-\bar{X}\right)^{2}\right]=E\left[\hat{X}_{N}^{2}-2\bar{X}\hat{X}_{N}+\bar{X}^{2}\right]=-\bar{X}^{2}+E\left[\frac{1}{N}\sum_{n=1}^{N}X_{n}\,\frac{1}{N}\sum_{m=1}^{N}X_{m}\right]=-\bar{X}^{2}+\frac{1}{N^{2}}\sum_{n=1}^{N}\sum_{m=1}^{N}E\left[X_{n}X_{m}\right].\]
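As a minimal numerical illustration of the estimators above, the following sketch computes the sample mean, the unbiased sample variance, and the variance of the mean estimator (which reduces to \(\sigma^{2}/N\) for independent samples). The DO values are invented for illustration and are not measurements from this study.

```python
import numpy as np

# Hypothetical DO readings (mg/litre) at N monitoring locations -- not study data
do_samples = np.array([6.8, 7.2, 5.9, 6.4, 7.0, 6.1])
N = len(do_samples)

mean_hat = do_samples.sum() / N          # sample-mean estimator X_hat_N
var_hat = do_samples.var(ddof=1)         # unbiased sample variance
var_of_mean = var_hat / N                # variance of the mean estimator

print(f"mean = {mean_hat:.3f} mg/litre, "
      f"var = {var_hat:.3f}, var(mean) = {var_of_mean:.4f}")
```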
## 3 Result and Discussion

### River Ganga

The imposition of lockdown to control the spread of the Corona virus resulted in a significant reduction in the water pollution levels of the Ganga River, by 25 to 30%. This large and highly contaminated river runs through the northern parts of India. On average, the DO concentration increased by 20 to 30%, and BOD decreased by 35 to 40%, based on the studies conducted to monitor the water pollution levels of the sacred river. The quantity of decaying organic matter indicates the BOD level. A lower concentration of BOD and a higher level of DO indicate good water quality, while a reduced level of DO in water severely affects aquatic life. The prescribed standard for the survival of aquatic life is BOD less than 3 mg/litre and DO above 5 mg/litre. The measured DO value of river Ganga at Varanasi was 3.8 mg/litre on 6th March 2020 and 6.8 mg/litre on 4th April 2020, indicating an improvement of 79%. Table 1 shows the data collected by the Central Pollution Control Board (CPCB) on 28th March 2020, indicating satisfactory recorded values of DO, BOD & pH at upstream & downstream Ganga. The observations of river Ganga in Uttar Pradesh (UP) showed satisfactory results. In total, 14 locations were monitored. Increasing values of DO and BOD were observed in 8 and 4 locations, respectively. Decreasing DO, BOD & FCC values were observed in 6, 9 & 10 locations, respectively. No change in BOD value was observed in 1 location. The average values of critical parameters monitored in the 14 locations during pre-lockdown and lockdown are shown in Table 2 below. Graphical results are presented in Figure 1.

### River Yamuna

In Himachal Pradesh (HP), the river Yamuna was monitored at 4 locations. Increasing values of DO were observed in 4 locations, and decreasing values of BOD & FCC in 4 & 3 locations, respectively. No change in FCC was observed in 1 location. The average values of critical parameters measured in the 4 locations are shown in Table 3. These results are presented in the form of bar charts in Figure 2.

Table 1: CPCB data of measured values of pH, DO & BOD

| Monitoring Station | Parameter | Measured values |
| --- | --- | --- |
| Ganga river Upstream | pH | 7.90 |
| | DO | 8 mg/litre |
| | BOD | 2.1 mg/litre |
| Ganga river Downstream | pH | 7.91 |
| | DO | 7.90 mg/litre |
| | BOD | 1.21 mg/litre |

Table 2: Average values of critical parameters at river Ganga monitored in UP

| Parameter | Pre-lockdown | Lockdown |
| --- | --- | --- |
| pH | 5.95 | 8.05 |
| DO (mg/litre) | 9.3 | 9.4 |
| BOD (mg/litre) | 2.8 | 2.45 |

Figure 1: pH, DO & BOD values of Ganga river monitored in UP during pre-lockdown & lockdown

Figure 2: pH, DO & BOD values of river Yamuna monitored in HP during pre-lockdown & lockdown

Table 3: Average values of critical parameters at river Yamuna monitored in HP

| Parameter | Pre-lockdown | Lockdown |
| --- | --- | --- |
| pH | 5.95 | 8.05 |
| DO (mg/litre) | 9.3 | 9.4 |
| BOD (mg/litre) | 2.8 | 2.45 |

### River Brahmaputra

In Assam, the river Brahmaputra was monitored for water quality at 8 locations during pre-lockdown and 10 locations during the lockdown. Increasing values of DO, BOD & FCC were observed in 3, 1 & 4 locations and decreasing values in 5, 7 & 2 locations, respectively. No change in FCC was observed in 1 location. The average values of critical parameters recorded in Assam are shown in Table 4, and Figure 3 presents the results for the river Brahmaputra in graphical form.

### River Godavari

The river Godavari, which flows through the southern parts of India, was monitored at 14 locations in Maharashtra and Andhra Pradesh to check for water quality during pre-lockdown and lockdown. Increasing values of DO, BOD & FCC were observed in 9, 3 & 1 locations and decreasing values in 3, 10 & 4 locations, respectively. The values of DO, BOD & FCC remained unchanged in 2, 1 & 9 locations. Table 5 shows the average values of critical parameters measured in the 14 locations during the pre-lockdown & lockdown periods. These results are presented in Figure 4 for the Godavari river, one of the key rivers of south India.

### River Narmada

The river Narmada was monitored for water quality in Gujarat at 5 locations. Increasing values of DO and FCC were observed at 3 & 2 locations, respectively. Decreasing DO, BOD & FCC values were observed in 2, 2 & 3 locations, respectively. No change in BOD value was observed at 3 locations. The average values of critical parameters measured in the 5 locations during the pre-lockdown & lockdown periods are shown in Table 6. These results are presented in graphical form in Figure 5.

### River Krishna

The water quality of the river Krishna, which passes through various states of south India, was monitored at 8 locations in Karnataka and Andhra Pradesh (AP).
The DO, BOD & FCC showed increasing values in 4, 1 & 2 locations, respectively. Decreasing values of DO were observed in 3 locations and of BOD in 4 locations. No variation was observed in DO values in 1 location, in BOD values in 3 locations, and in FCC values in 6 locations. The average values of critical parameters measured in the 8 locations are shown in Table 7, and graphical results are presented in Figure 6.

### River Cauvery

The water quality of the river Cauvery, one of the key rivers of the Karnataka and Tamil Nadu states of India, was monitored at 22 locations in Karnataka. Increasing values of DO were observed in 21 locations, and decreasing values of BOD & FCC in 20 & 21 locations, respectively. No variation was observed in the DO value at 1 location, in BOD values at 2 locations, and in the FCC value at 1 location. Table 8 shows the average values of critical parameters measured in the 22 locations during the pre-lockdown & lockdown periods. These results are presented in graphical form in Figure 7.

## 4 Conclusions

In this manuscript, the authors attempted to address the impact of COVID-19 on water pollution in Indian rivers. The critical parameters for assessing the water quality, like pH, DO & BOD, in some significant Indian rivers during pre-lockdown and lockdown are studied in this work. The pollution levels reduced in the major Indian rivers due to the COVID-19 lockdown, thereby showing a remarkable improvement in water quality due to a complete halt to tourism, pilgrimage, and industrial activities. However, domestic sewage continued to contribute to the pollution of the water bodies. The findings suggest that the critical parameters monitored during the lockdown period showed satisfactory levels. These changes may be temporary because, after the lockdown, industrial and human activities will increase, due to which more pollutants will be discharged into the water bodies. Therefore, it is necessary for governments and individuals alike to learn from the environmental impact of the lockdown and adopt proper measures to reduce pollution on a long-term basis for the welfare of society.
2301.06755
Extracting continuous sleep depth from EEG data without machine learning
The human sleep-cycle has been divided into discrete sleep stages that can be recognized in electroencephalographic (EEG) and other bio-signals by trained specialists or machine learning systems. It is however unclear whether these human-defined stages can be re-discovered with unsupervised methods of data analysis, using only a minimal amount of generic pre-processing. Based on EEG data, recorded overnight from sleeping human subjects, we investigate the degree of clustering of the sleep stages using the General Discrimination Value as a quantitative measure of class separability. Virtually no clustering is found in the raw data, even after transforming the EEG signals of each thirty-second epoch from the time domain into the more informative frequency domain. However, a Principal Component Analysis (PCA) of these epoch-wise frequency spectra reveals that the sleep stages separate significantly better in the low-dimensional sub-space of certain PCA components. In particular the component $C_1(t)$ can serve as a robust, continuous 'master variable' that encodes the depth of sleep and therefore correlates strongly with the 'hypnogram', a common plot of the discrete sleep stages over time. Moreover, $C_1(t)$ shows persistent trends during extended time periods where the sleep stage is constant, suggesting that sleep may be better understood as a continuum. These intriguing properties of $C_1(t)$ are not only relevant for understanding brain dynamics during sleep, but might also be exploited in low-cost single-channel sleep tracking devices for private and clinical use.
Claus Metzner, Achim Schilling, Maximilian Traxdorf, Holger Schulze, Konstantin Tziridis, Patrick Krauss
2023-01-17T08:39:34Z
http://arxiv.org/abs/2301.06755v1
# Extracting continuous sleep depth from EEG data without machine learning

###### Abstract

The human sleep-cycle has been divided into discrete sleep stages that can be recognized in electroencephalographic (EEG) and other bio-signals by trained specialists or machine learning systems. It is however unclear whether these human-defined stages can be re-discovered with unsupervised methods of data analysis, using only a minimal amount of generic pre-processing. Based on EEG data, recorded overnight from sleeping human subjects, we investigate the degree of clustering of the sleep stages using the General Discrimination Value as a quantitative measure of class separability. Virtually no clustering is found in the raw data, even after transforming the EEG signals of each thirty-second epoch from the time domain into the more informative frequency domain. However, a Principal Component Analysis (PCA) of these epoch-wise frequency spectra reveals that the sleep stages separate significantly better in the low-dimensional sub-space of certain PCA components. In particular the component \(C_{1}(t)\) can serve as a robust, continuous 'master variable' that encodes the depth of sleep and therefore correlates strongly with the 'hypnogram', a common plot of the discrete sleep stages over time. Moreover, \(C_{1}(t)\) shows persistent trends during extended time periods where the sleep stage is constant, suggesting that sleep may be better understood as a continuum. These intriguing properties of \(C_{1}(t)\) are not only relevant for understanding brain dynamics during sleep, but might also be exploited in low-cost single-channel sleep tracking devices for private and clinical use.

## Introduction

Sleep is an essential biological behavior [1, 2] and therefore highly conserved across animal evolution [3]. In mammals, healthy sleep involves a programmed series of characteristic changes in the activity of body and brain, which include alterations of brain wave and breathing patterns, variations of blood pressure, heart beat and body temperature, as well as modulated biochemical activity. Due to the quasi-periodic structure of these changes, they can be viewed as repetitions of a sleep cycle [4], subdivided into apparently distinct stages such as Wake, REM, N1, N2 and N3.

While the recognition of these sleep stages from electroencephalographic (EEG) recordings [5] and other measured biosignals was reserved for trained specialists in the past, the invention of modern machine learning and data analysis tools has quickly led to systems for automatic sleep stage detection [6, 7, 8, 4], with the promise of eventually freeing medical doctors in sleep labs from the burden of manual sleep stage classification. However, the typical automatic classifiers used in the field of machine learning are black boxes with huge numbers of parameters, which makes their classification decisions hard to interpret and difficult to reproduce [9, 10]. Can these opaque, self-organized, hierarchical features of multi-layer machine learning models be replaced by more transparent, human-interpretable features?

In a previous paper [11], we have demonstrated how to use well-defined statistical operators, such as the mean, the variance, or the kurtosis, for aggregating the raw time series of EEG signals into a few human-interpretable, time-dependent statistical variables. As it turned out, the probability distributions of these variables depend to a certain degree on the sleep stage, and this dependence could be exploited for Bayesian sleep stage detection.
In this 'flat' Bayesian approach, all likelihoods and prior probabilities have a simple mathematical meaning, in contrast to the learned weights and biases of a conventional deep multi-layer classifier. On top of the practical application, our analysis of the time-dependent statistical properties of EEG signals also revealed that certain statistical variables follow continuous trends within and even across the distinct sleep stages - a finding that will be reconfirmed in the present work.

Although automatic sleep stage detection has already been proven possible using a variety of different approaches [6, 7, 8, 4], the accuracies achieved by these systems are not as satisfactory as in other machine learning applications. Together with the fact that the agreement about sleep stages is relatively poor even among human specialists [12], this raises the question whether the low accuracy merely reflects flaws of the automatic classification systems, or whether it is rooted in the data itself: Can human-defined sleep stages be considered as well-defined, 'natural kinds'? Could they be rediscovered by non-supervised data clustering methods without prior knowledge? Are sleep stages really distinct classes or strongly overlapping, fuzzy concepts? What would be the practical consequences of this fuzziness?

We have addressed some of these more theoretical questions in another recent publication [13]. We there argued that classes are in general subjective (user-based) and goal-oriented constructs that must not necessarily reflect the objective structure of the underlying data distributions. Human-defined classes can be useful for certain purposes, even though they do not correspond to well-separated clusters in data space but overlap significantly. This overlap, however, leads to an upper limit for the achievable classification accuracy [13].

To test whether the sleep stages are natural kinds and therefore can be rediscovered by purely objective data analysis, we quantitatively determined the degree of sleep stage clustering in EEG data space, using the previously developed General Discrimination Value (GDV, [14]). Finding only an extremely small degree of clustering, even after converting the EEG time series of each epoch to the frequency domain, we next investigated whether the degree of sleep stage clustering can be increased by non-supervised dimensionality reduction with an autoencoder. In this experiment we indeed observed a progressive cluster enhancement over the increasingly narrow autoencoder layers, but quantitatively the effect remained extremely small [13].

This work is a follow-up on our former publications regarding the analysis of human EEG signals during sleep [15, 16, 4, 11, 13], aiming to enhance the degree of sleep stage clustering by suitable ways of pre-processing and dimensionality reduction. As in [13], we focus on a single EEG channel (the electrode at position F4), split the time series into 30-second epochs that are sleep-stage labeled by a human specialist, and then compute the epoch-wise frequency spectra by Fast Fourier Transformation. Improving upon [13], we now additionally explore the effect of 'scaling' the Fourier amplitudes by taking their modulus to a power of \(\gamma\). Each of the resulting epoch-specific 'spectral vectors' is considered as a point in a high-dimensional data space, and we are interested in any kind of cluster structure within this point distribution.
Learning from our past experiments, we now study in advance how the degree of clustering in a data distribution (again quantified by the GDV) depends on the dimensionality of the data space and on the relative fraction of 'separating' and 'non-separating' features/dimensions. This understanding will then inform the optimal selection of data dimensions.

Since the autoencoder architecture used in [13] for dimensionality reduction of the spectral vectors produced relatively large reconstruction errors, we now turn to Principal Component Analysis (PCA) as an alternative method of 'unsupervised data compression'. This classical method actually has several advantages over neural-network based autoencoders, such as the absence of any tunable model parameters, full mathematical transparency and interpretability, as well as the generation of output dimensions that are mutually uncorrelated.

We then carefully analyze which subset of PCA components is best included for the GDV-based cluster analysis, comparing different scaling exponents \(\gamma\). It turns out that the best cluster separation (with a GDV of -0.211, rather than -0.047 for the uncompressed spectral vectors) is obtained by retaining only the single PCA component \(C_{1}\) with \(\gamma=1/2\). According to the 'eigenspectrum' of \(C_{1}\), this component basically measures the relative wave content of the EEG signal in the frequency regimes below 3 Hz (roughly corresponding to the delta range) and above 3 Hz (covering the theta, alpha, beta and gamma range).

Interestingly, we find an unexpectedly large Pearson correlation coefficient of 0.59 between the time- (or epoch-) dependent PCA component \(C_{1}(t)\) and the 'hypnogram', a common plot of the numerical sleep (or vigilance) label \(L(t)\) versus time (with Wake=0, REM=-1, N1=-2, N2=-3 and N3=-4). We verify this result by plotting \(C_{1}(t)\) together with \(L(t)\) and conclude that \(C_{1}(t)\) can be regarded as a continuous variable for 'sleep depth', as it indeed closely resembles the hypnogram. Moreover, \(C_{1}(t)\) shows persistent trends during extended time periods where the sleep stage is constant: this sleep depth variable rises sharply whenever the subject is switching to a more 'shallow' sleep stage, but it falls much more gradually whenever the subject switches to a 'deeper' sleep stage. As suggested already in [11], these results support the idea that sleep is better treated as a continuous process rather than a sequence of distinct stages.

A practical application of the sleep depth variable \(C_{1}(t)\) is, of course, sleep stage prediction - the automatic sleep stage classification from single-channel EEG data. In contrast to other systems, ours is extremely simple and mathematically fully transparent, from the pre-processing up to the final dimensionality reduction method. We assess the performance of the method by computing the correlation between the sleep depth variable \(C_{1}(t)\) and the hypnogram, separately for 68 independent full-night sleep recordings, obtaining Pearson correlation coefficients of up to 0.8. Finally, we test the robustness of the method by recording the same human subject simultaneously with two very different EEG devices and then extracting the sleep depth variable \(C_{1}(t)\) separately from both recordings. We find that the two instances of the variable match so closely that they can even be used for a post-hoc temporal synchronization of the EEG machines.

## Results

### Properties of the Generalized Discrimination Value (GDV)

In this work, we use the GDV [14] as a quantitative measure for cluster separation.
Specifically, we investigate to which degree the clustering of data vectors from different sleep stages can be improved by optimizing the parameters of data pre-processing and subsequent unsupervised dimensionality reduction. It is therefore important to know how the GDV depends on the dimensionality of the data space and on the degree of clustering in the individual dimensions of this space.

For this purpose, we generate an artificial ten-dimensional data set with two data classes. The ten components \(x_{i=0..9}\) of the data vectors \(\vec{x}\) are mutually independent (as they actually are after a PCA transformation) and normally distributed with unit variance. However, between the two classes the mean values of the Gaussians differ by certain amounts \(d_{i}\). In particular, we assume that \(d_{i}\!=\!1\) (significant separation) for the first five, but \(d_{i}\!=\!0\) (no separation) for the remaining five components. We then compute the GDV of the data set when more and more of the ten components (dimensions) are included (Fig.1(b)). We find that the GDV is monotonically decreasing for the first five (class-separating) dimensions, indicating enhanced clustering of the data points, but the GDV already shows a clear saturation in this example. Consequently, if several dimensions are available in which the data classes separate well, it can be beneficial to include more than one of these separating dimensions. However, once the non-separating dimensions are subsequently included, the GDV is strongly increasing, indicating a progressive loss of the clustering that was already achieved before. As a general rule, it is therefore important to remove all non-separating dimensions from the data space before the clustering structure of the data points is analyzed.

### Enhancing sleep-stage clustering

Next, we apply the GDV to quantify the degree of sleep stage clustering in 70174 thirty-second long epochs of recorded one-channel EEG signals (for details about the data and pre-processing, see the Methods section; three example epochs are shown in Fig.2(a)). Each of these 'signal vectors' can be considered as a point in a 7680-dimensional space and has been labeled by a sleep specialist as belonging to one of the five sleep stages (Wake, REM, N1, N2, and N3). The distribution of these points in their data space can be visualized in only two dimensions by using Multi-Dimensional Scaling (MDS, cf. Methods). The MDS visualization of the time-domain signal vectors shows no hints of clustering (Fig.2(d)), and this is confirmed by a generalized discrimination value (GDV, cf. Methods) of only -0.005.

Next, we transform the data vectors to Fourier space, keeping only the first 1050 of the real-valued frequency-dependent amplitudes (three example epochs are shown in Fig.2(b)). The MDS visualization of the resulting spectral vectors already shows a small degree of clustering (Fig.2(e)), corresponding to a GDV value of -0.018. The small clustering in Fourier space means that the spectral vectors are significantly different in the five sleep stages. This is confirmed by computing the average frequency spectra for each sleep stage (Fig.2(c)). Note that the high-frequency components are progressively suppressed in the 'deeper' sleep stages. Only the spectrum of the REM phase (blue) does not follow this general ordering.

Next, we perform a PCA-based dimensionality reduction on the spectral vectors, keeping only the first five PCA components (for the MDS visualization, see Fig.2(f)). This results in a significant enhancement of the sleep stage clustering, yielding a GDV value of -0.047.
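The artificial two-class experiment described at the beginning of this section can be reproduced in a few lines. The sketch below uses a compact NumPy implementation of the GDV (an equation-by-equation version accompanies the Methods section); the sample size and random seed are arbitrary choices.

```python
import numpy as np

def gdv(points, labels):
    """Compact GDV: z-score, scale by 1/2, mean intra- minus mean inter-class
    distance, normalized by sqrt(D); more negative = better separation."""
    X = 0.5 * (points - points.mean(axis=0)) / points.std(axis=0)
    classes = np.unique(labels)
    intra, inter = [], []
    for i, a in enumerate(classes):
        A = X[labels == a]
        d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
        intra.append(d[np.triu_indices(len(A), k=1)].mean())
        for b in classes[i + 1:]:
            B = X[labels == b]
            inter.append(np.linalg.norm(A[:, None, :] - B[None, :, :],
                                        axis=-1).mean())
    return (np.mean(intra) - np.mean(inter)) / np.sqrt(points.shape[1])

rng = np.random.default_rng(0)
n, D = 500, 10
d = np.array([1.0] * 5 + [0.0] * 5)               # only the first five d_i separate
X = np.concatenate([rng.normal(size=(n, D)),      # class 0
                    rng.normal(size=(n, D)) + d]) # class 1, shifted by d_i
y = np.array([0] * n + [1] * n)

for k in range(1, D + 1):                         # include more and more dimensions
    print(f"dims 1..{k}: GDV = {gdv(X[:, :k], y):.3f}")
```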
### Optimal subset of PCA components

We have shown above that in order to enhance the degree of clustering in a multi-dimensional data space, only the class-separating dimensions should be retained. For this purpose, we next investigate how the GDV changes when only one or two out of the first three PCA components are selected. The results can be presented in the form of a symmetric matrix, where the diagonal elements correspond to only one selected component (see heat maps in Fig.3(a-c)). In addition, we investigate the effect of changing the scaling exponent \(\gamma\) of the spectral vectors.

For \(\gamma=1\), the best cluster separation is achieved when only the PCA component \(C_{2}\) is kept (Fig.3(a)). The GDV in this case is -0.122. For \(\gamma=0.7\), the best cluster separation is again achieved with only the PCA component \(C_{2}\) (Fig.3(b)). The GDV is now -0.152. For \(\gamma=0.5\), however, the best cluster separation is achieved with only \(C_{1}\) (Fig.3(c)). The GDV is then -0.211, indicating a quite significant degree of clustering. Note that values of \(\gamma\) smaller than 1/2 can lead to detrimental effects in the subsequent analysis.

The quantitative GDV values above can also be visually confirmed by plotting the distributions of the single best-separating PCA components in the five sleep stages (Fig.4(a,c,e)). Note that the maxima of the peaks in the distributions are at different positions for the five sleep stages (with the exception that Wake (black) and N1 (green) are surprisingly similar), but nevertheless there is a significant overlap between all five peaks. This is the kind of 'fuzziness' mentioned in the Introduction, which eventually limits the achievable accuracy of automatic sleep stage classifiers.

### Eigenspectra of the PCA components

The spectral vectors form a complex distribution of points in their data space. If this distribution, for the sake of clarity, is imagined as a spheroidal point cloud in a three-dimensional space, the center of mass of this point cloud corresponds to the mean EEG frequency spectrum \(V_{mean}(f)\), averaged over all recorded epochs. In this image, the PCA finds the main axes of the spheroid (the orthogonal axes of maximum variation in the data distribution), each corresponding to one principal component \(i\). It places into data space a new coordinate system that consists of these main axes, with the origin located at the center of the point cloud. Moving from the origin into the direction of one of the main axes \(i\) by an amount \(C_{i}\) means modifying the frequency spectrum away from the average in a well-defined way: the modification can be mathematically expressed by adding a perturbation to the mean spectrum, namely \(C_{i}\) times the 'eigenspectrum' \(\Delta V_{i}(f)\) of PCA component \(i\). In general, a point in data space with PCA coordinates \(\vec{C}=(C_{1},C_{2},\ldots)\) corresponds to the frequency spectrum

\[V(\vec{C},f)=V_{mean}(f)+\sum_{i=0}^{N_{e}}\;C_{i}\cdot\Delta V_{i}(f). \tag{1}\]

In Fig.3(d,e,f), we plot the eigenspectra for the first three PCA components, and for the three tested values of the scaling exponent \(\gamma\). In the case of \(\gamma=0.5\), the eigenspectrum of the best separating component \(C_{1}\) (orange curve in Fig.3(f)) is negative for small frequencies between zero and about 3 Hz, and positive for larger frequencies. Consequently, the more negative the PCA component \(C_{1}\) becomes, the more the low-frequency brain waves dominate over the high-frequency waves in the EEG spectrum. We therefore expect \(C_{1}\) to become more negative in the 'deeper' sleep stages.
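Equation (1) is exactly the affine map implemented by the inverse transform of a standard PCA. The following sketch makes this correspondence explicit on stand-in spectral vectors (random data, purely illustrative).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
V = rng.random((2000, 1050)) ** 0.5       # stand-in spectral vectors (illustrative)

pca = PCA(n_components=3).fit(V)
V_mean = pca.mean_                        # the average spectrum V_mean(f)
eigenspectra = pca.components_            # rows are the eigenspectra dV_i(f)

C = np.array([[0.0, -2.0, 0.0]])          # move along component C_1 only
V_of_C = V_mean + C @ eigenspectra        # equation (1)

# sklearn's inverse_transform implements exactly this affine map:
assert np.allclose(V_of_C, pca.inverse_transform(C))
```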
### Correlation of PCA components and sleep labels

In order to test whether some PCA components indeed correlate with sleep depth, we first compute the mutual Pearson correlation coefficients between the components \(C_{0}\ldots C_{2}\) and the negative numerical sleep label (Wake=0, REM=-1, N1=-2, N2=-3 and N3=-4), which corresponds to a hypnogram when plotted over time (more precisely: over successive sleep epochs \(k\)). Since positive and negative correlations are equally interesting here, we only consider the modulus of the Pearson coefficients. They are again presented in the form of a symmetric matrix in Fig.4(b,d,f). The mutual correlations between \(C_{0}\), \(C_{1}\) and \(C_{2}\) are zero, as they should be in PCA. More interestingly, the correlation between the hypnogram and the three PCA components (first row of each matrix) is non-zero, and it is maximal for the same PCA component that also gives the best cluster separation. In particular, for a scaling exponent of \(\gamma=0.5\), the Pearson coefficient between \(C_{1}\) and the hypnogram reaches a surprisingly large value of \(0.59\). We therefore consider only the case \(\gamma=0.5\) for the rest of this work.

### Sleep depth variable \(C_{1}(t)\) versus hypnograms

To further test the relation between \(C_{1}(t)\) and the hypnogram, we plot both quantities during the same time period. Fig.5 shows three arbitrarily chosen time periods, each with a duration of \(250\) minutes. It is obvious that \(C_{1}(t)\) (blue) resembles the hypnogram (black) quite closely, but there are also some characteristic differences: whenever the sleep label switches upwards, to a more shallow sleep stage or to the Wake state, \(C_{1}(t)\) also shows an abrupt increase. By contrast, when the label switches downwards, to a deeper sleep stage, \(C_{1}(t)\) decreases as well, but in a very gradual way that can easily continue for more than \(30\) minutes. These results confirm that (for \(\gamma=1/2\)) the PCA component \(C_{1}(t)\) indeed encodes sleep depth in a continuous way.

### Performance of \(C_{1}(t)\) in individual data sets

For potential future applications of \(C_{1}(t)\) in personal sleep tracking devices and other low-cost scenarios, it is important to analyze the degree of correlation between the sleep depth variable and the ground truth hypnogram for a larger group of full-night recordings. Moreover, it is unclear whether a better performance of \(C_{1}(t)\) can be achieved when the PCA is fitted 'globally' to all available EEG data sets, or 'locally' to each individual subject. To answer these questions, we compute the Pearson correlation coefficient between \(C_{1}(t)\) and the hypnogram separately for all \(68\) available EEG data sets. The results are shown as probability distributions in Fig.6. We find a surprisingly large fraction of Pearson correlations in the 'good' range \([0.6,0.8]\) and in the 'medium' range \([0.4,0.6]\), no matter if the PCA is fitted globally (top panel) or locally (bottom panel). However, using only person-specific information for the PCA fit results in a significantly larger fraction of 'bad' outcomes in the range \([0,0.4]\). It is therefore more appropriate to fit the PCA model to a large pool of full-night EEG recordings before the sleep depth variable is evaluated for a new individual.
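For a single recording, this per-recording performance measure reduces to one call of a correlation routine. The sketch below computes the Pearson coefficient between a synthetic, purely illustrative sleep-depth trace and the corresponding numerically encoded hypnogram.

```python
import numpy as np

def pearson_c1_hypnogram(c1, labels):
    """Pearson correlation between the sleep-depth variable C1(t) and the
    hypnogram, with stages encoded as Wake=0, REM=-1, N1=-2, N2=-3, N3=-4."""
    return np.corrcoef(c1, labels)[0, 1]

# Illustrative stand-in for one full-night recording (not real data):
rng = np.random.default_rng(0)
hypnogram = rng.choice([0, -1, -2, -3, -4], size=960)   # ~8 h of 30-s epochs
c1 = hypnogram + rng.normal(0.0, 1.0, size=960)         # noisy 'sleep depth'
print(f"Pearson r = {pearson_c1_hypnogram(c1, hypnogram):.2f}")
```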
### Sleep depth from different EEG devices

Since the degree of sleep stage clustering was shown in this work to depend strongly on the data pre-processing (including parameters such as the spectral scaling exponent \(\gamma\)), it is likely that using different EEG devices with distinct spectral responses and noise levels will also affect the clustering and might eventually lead to non-comparable time courses of the extracted sleep depth variable \(C_{1}(t)\). To address this potential problem, we measure the same human subject simultaneously with two different EEG setups. One of them is a clinical device with only three EEG channels, and the other one is a \(64\)-channel 'research' device, but we are using only the channel 'F4' for our comparison. The two machines are not mutually synchronized or coupled in any way, and they are run at different sampling rates.

After recording a whole night of sleep, the F4-signal of the research device is down-sampled to the sampling frequency of the clinical device (\(128\) Hz). When computing the Fourier spectra (re-scaled with \(\gamma=1/2\)), averaged over all sleep epochs, we find that the spectral responses of the two devices are quite different (Fig.7(a)). To make the data as similar as possible, we compute a filter function in frequency space, defined as the ratio between the average spectral responses of the clinical and research device (Fig.7(b)). By multiplying each epoch-specific spectral vector of the research device with this filter function, the two responses become identical on average (Fig.7(c)).

Next, we fit a PCA model to the spectral vectors of the clinical machine. Note that this pool of spectral vectors comes only from a single full-night recording. In future practical applications, it might be preferable to use a larger pool of recordings instead. Based on the given PCA model, we now extract the sleep depth variable \(C_{1}(t)\) from both machines (Fig.7(d)). As expected, the time courses do not match, because the two devices are not synchronized and have actually been switched to recording mode at different times. However, by artificially applying different time-shifts \(\Delta t\) (multiples of 30-second epochs) between the two \(C_{1}(t)\) signals and computing the Pearson correlation coefficient for each \(\Delta t\), we find that this cross-correlation has a clear global peak at 27 epochs (Fig.7(e)). Shifting the \(C_{1}(t)\) time course of the research device by this optimum amount, we find a remarkably close match between the two extracted sleep variables (Fig.7(f)). It is therefore possible to directly compare the \(C_{1}(t)\) signals measured with different EEG devices by using the above technique. Another interesting future application would be to compute \(C_{1}(t)\) separately for the different electrodes of a given EEG device and then to investigate whether different brain regions 'fall asleep' at slightly different times or to different degrees.
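A compact version of this two-device procedure - spectral matching via the filter function of equations (9)-(10) in the Methods, followed by a brute-force search over epoch shifts - could look as follows. The synthetic 'recordings', the gain factor, and the one-directional shift search are simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def match_and_align(V_cli, V_res, max_shift=100):
    """Match the research device to the clinical one (cf. eqs. 9-10), extract
    C1(t) with a PCA fitted on the clinical device only, and search the epoch
    shift that maximizes the Pearson correlation of the two C1(t) traces."""
    A = V_cli.mean(axis=0) / V_res.mean(axis=0)   # filter function A(f)
    V_res_f = V_res * A                           # filtered research spectra

    pca = PCA(n_components=2).fit(V_cli)          # fit on the clinical device only
    c1_cli = pca.transform(V_cli)[:, 1]           # component C_1 for both devices
    c1_res = pca.transform(V_res_f)[:, 1]

    L = min(len(c1_cli), len(c1_res))
    best_shift, best_r = 0, -np.inf
    for s in range(max_shift):                    # one-directional search (assumed)
        r = np.corrcoef(c1_cli[:L - s], c1_res[s:L])[0, 1]
        if r > best_r:
            best_shift, best_r = s, r
    return best_shift, best_r

# Synthetic demo: same underlying 'night', different gain and start times
rng = np.random.default_rng(0)
base = np.abs(rng.normal(size=(900, 1000))) ** 0.5
V_cli, V_res = base[27:], base[:-27] * 1.7        # clinical starts 27 epochs later
shift, r = match_and_align(V_cli, V_res)
print(f"best shift = {shift} epochs, r = {r:.2f}")
```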
## Discussion

In this work, we continue a line of investigation that focuses on the cluster structure of human EEG data during different sleep stages. We start with the assumption that if sleep stages were 'natural kinds', they should be discoverable by completely unsupervised methods of data analysis as distinct clusters in data space. However, using the General Discrimination Value as a quantitative measure of class separability, we only find a vanishingly small degree of clustering (GDV=-0.005) in the raw epoch-wise EEG signals. This is not surprising, because two signal vectors that have been shifted relative to each other by only a few time steps can have an arbitrarily large euclidean distance, even if they belong to the same sleep stage. In general, finding any class-separating features in time-domain EEG data is extremely difficult for a machine learning system without prior information. Even if some method of unsupervised data analysis could find well-separated clusters directly in the space of time-domain data vectors, it is doubtful whether those would correspond to the human-defined sleep stages. For this reason, we believe that a minimal amount of human-assisted pre-processing and feature selection is required for the analysis of EEG sleep data, an approach that might be called 'weakly supervised' data analysis.

In the present study, Fourier-transforming the epoch-wise time-domain vectors to the frequency domain and subsequently ignoring all phase information was sufficient to improve sleep-stage clustering significantly (GDV=-0.018), and a further improvement was possible by scaling the resulting frequency spectra with a suitable exponent \(\gamma\). It is however likely that sleep-stage clustering could be further enhanced in future studies by choosing different time windows (currently we have restricted our numerical experiments to 30-second epochs only), or by replacing the Fourier transformation with a suitable wavelet transformation.

We could confirm in the present study that a substantial enhancement of clustering (up to GDV=-0.211) is possible by a suitable dimensionality reduction of the pre-processed data, provided that only the best separating dimensions (features) are retained. Surprisingly, the classical method of Principal Component Analysis (PCA) proved to be much better suited for compressing Fourier-EEG data than a formerly tested multi-layer autoencoder. In particular, we found that the single PCA component \(C_{1}(t)\), in combination with a spectral scaling exponent of \(\gamma=1/2\), not only maximizes sleep-stage clustering, but also correlates surprisingly well (Pearson correlation of 0.59) with the hypnogram, a traditional plot of the numerical sleep stage label over time (epoch). We note that this correlation might be further enhanced by adjusting the arbitrary numerical values assigned to the five different sleep stages.

An analysis of the 'eigenspectrum' of \(C_{1}(t)\) reveals that this PCA component basically measures the relative content of slow (below 3 Hz) and fast (above 3 Hz) brain waves in the momentary spectrum of the EEG signal. It is remarkable that this well-known discriminating feature emerged as being optimal for enhancing sleep-stage clustering in our weakly supervised approach.

When plotting the time course of \(C_{1}(t)\) together with the hypnogram, we find that the two curves are strikingly similar, so that \(C_{1}(t)\) can actually be viewed as a sleep depth variable. However, the two quantities differ in their behavior at transitions from one discrete sleep stage to another: when the brain is 'switching' (according to the criteria of the human rater) to a more shallow sleep stage or to the wake stage, \(C_{1}(t)\) is also abruptly rising. By contrast, switches to deeper sleep stages only mark the beginning of a continuous downward trend of \(C_{1}(t)\) that can last for more than 30 minutes. During these downward trends, the EEG's frequency spectrum is gradually becoming dominated by slow brain waves, even though the sleep stage is rated as constant. In any case, the gradual evolution of the brain wave spectrum within a fixed sleep stage, together with the strongly overlapping probability distributions of \(C_{1}\) in the various sleep stages, strongly suggests that sleep is better understood as a continuum, rather than a succession of discrete phases. The sleep depth variable \(C_{1}(t)\) offers an easy, transparent and reproducible way to track this continuous brain process based on single-channel EEG data.

## Methods

### Three-channel sleep EEG data
By contrast, switches to deeper sleep stages only mark the beginning of a continuous downward trend of \(C_{1}(t)\) that can last for more than 30 minutes. During these downward trends, the EEG's frequency spectrum is gradually becoming dominated by slow brain waves, even though the sleep stage is rated as constant. In any case, the gradual evolution of the brain wave spectrum within a fixed sleep stage, together with the strongly overlapping probability distributions of \(C_{1}\) in the various sleep stages, strongly suggest that sleep is better understood as a continuum, rather than a succession of discrete phases. The sleep depth variable \(C_{1}(t)\) offers an easy, transparent and reproducible way to track this continuous brain process based on single channel EEG data. Methods ### Three-channel sleep EEG data For the main part of this paper, we are using 68 three-channel EEG data sets from the sleep laboratory of University Hospital Erlangen, each corresponding to a full-night recording of brain signals from a different human subject. The data were recorded with a sampling rate of 256 Hz, using three separate channels F4-M1, C4-M1, O2-M1. In this work, however, we are only using the first channel (F4-M1). The participants of the study included 46 males and 22 females, with an age range between 21 and 80 years. Exclusion criteria were a positive history of misuse of sedatives, alcohol or addictive drugs, as well as untreated sleep disorders. The study was conducted in the Department of Otorhinolaryngology, Head Neck Surgery, of the Friedrich-Alexander University Erlangen-Nurnberg (FAU), following approval by the local Ethics Committee (323-16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory polysomnography (PSG). After recording, the raw EEG data were analyzed by a sleep specialist accredited by the German Sleep Society (DGSM), who detected typical artifacts [17] in the data and visually identified the five sleep stages (Wake, REM, N1, N2, N3) in subsequent 30-second epochs, according to the AASM criteria (Version 2.1, 2014) [18, 19]. ### Multi-dimensional scaling (MDS) A frequently used method to generate low-dimensional embeddings of high-dimensional data is t-distributed stochastic neighbor embedding (t-SNE) [20]. However, in t-SNE the resulting low-dimensional projections can be highly dependent on the detailed parameter settings [21], sensitive to noise, and may not preserve, but rather often scramble the global structure in data [22, 23]. In contrast to that, multi-Dimensional-Scaling (MDS) [24, 25, 26, 27] is an efficient embedding technique to visualize high-dimensional point clouds by projecting them onto a 2-dimensional plane. Furthermore, MDS has the decisive advantage that it is parameter-free and all mutual distances of the points are preserved, thereby conserving both the global and local structure of the underlying data. When interpreting patterns as points in high-dimensional space and dissimilarities between patterns as distances between corresponding points, MDS is an elegant method to visualize high-dimensional data. By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. 
### Generalized Discrimination Value (GDV)

We used the GDV to calculate cluster separability, as published and explained in detail in [14]. Briefly, we consider \(N\) points \(\mathbf{x}_{n=1..N}=(x_{n,1},\cdots,x_{n,D})\), distributed within \(D\)-dimensional space. A label \(l_{n}\) assigns each point to one of \(L\) distinct classes \(C_{l=1..L}\). In order to become invariant against scaling and translation, each dimension is separately z-scored and, for later convenience, multiplied with \(\frac{1}{2}\):

\[s_{n,d}=\frac{1}{2}\cdot\frac{x_{n,d}-\mu_{d}}{\sigma_{d}}. \tag{2}\]

Here, \(\mu_{d}=\frac{1}{N}\sum_{n=1}^{N}x_{n,d}\) denotes the mean, and \(\sigma_{d}=\sqrt{\frac{1}{N}\sum_{n=1}^{N}(x_{n,d}-\mu_{d})^{2}}\) the standard deviation of dimension \(d\). Based on the re-scaled data points \(\mathbf{s_{n}}=(s_{n,1},\cdots,s_{n,D})\), we calculate the _mean intra-class distances_ for each class \(C_{l}\),

\[\bar{d}(C_{l})=\frac{2}{N_{l}(N_{l}\!-\!1)}\sum_{i=1}^{N_{l}-1}\sum_{j=i+1}^{N_{l}}d(\mathbf{s}_{i}^{(l)},\mathbf{s}_{j}^{(l)}), \tag{3}\]

and the _mean inter-class distances_ for each pair of classes \(C_{l}\) and \(C_{m}\),

\[\bar{d}(C_{l},C_{m})=\frac{1}{N_{l}N_{m}}\sum_{i=1}^{N_{l}}\sum_{j=1}^{N_{m}}d(\mathbf{s}_{i}^{(l)},\mathbf{s}_{j}^{(m)}). \tag{4}\]

Here, \(N_{k}\) is the number of points in class \(k\), and \(\mathbf{s}_{i}^{(k)}\) is the \(i^{th}\) point of class \(k\). The quantity \(d(\mathbf{a},\mathbf{b})\) is the euclidean distance between \(\mathbf{a}\) and \(\mathbf{b}\). Finally, the Generalized Discrimination Value (GDV) is calculated from the mean intra-class and inter-class distances as follows:

\[\text{GDV}=\frac{1}{\sqrt{D}}\left[\frac{1}{L}\sum_{l=1}^{L}\bar{d}(C_{l})\;-\;\frac{2}{L(L\!-\!1)}\sum_{l=1}^{L-1}\sum_{m=l+1}^{L}\bar{d}(C_{l},C_{m})\right], \tag{5}\]

where the factor \(\frac{1}{\sqrt{D}}\) is introduced for dimensionality invariance of the GDV, with \(D\) being the number of dimensions. Note that the GDV is invariant with respect to a global scaling or shifting of the data (due to the z-scoring), and also invariant with respect to a permutation of the components in the \(D\)-dimensional data vectors (because the euclidean distance measure has this symmetry). The GDV is zero for completely overlapping, non-separated clusters, and it becomes more negative as the separation increases. A GDV of -1 already signifies a very strong separation.
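For reference, a direct NumPy implementation of equations (2)-(5) could look as follows. It is a straightforward transcription of the definitions above, not the original analysis code.

```python
import numpy as np
from itertools import combinations

def gdv(points, labels):
    """Generalized Discrimination Value, following equations (2)-(5)."""
    X = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    D = X.shape[1]
    # Eq. (2): z-score every dimension and multiply by 1/2
    S = 0.5 * (X - X.mean(axis=0)) / X.std(axis=0)
    classes = np.unique(labels)
    per_class = {c: S[labels == c] for c in classes}

    def mean_intra(A):                    # eq. (3): mean intra-class distance
        d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
        return d[np.triu_indices(len(A), k=1)].mean()

    def mean_inter(A, B):                 # eq. (4): mean inter-class distance
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()

    intra = np.mean([mean_intra(per_class[c]) for c in classes])
    inter = np.mean([mean_inter(per_class[a], per_class[b])
                     for a, b in combinations(classes, 2)])
    return (intra - inter) / np.sqrt(D)   # eq. (5)
```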
The remaining epochs are normalized by z-scoring, \[U_{e,t}^{(s)}\longrightarrow\frac{U_{e,t}^{(s)}-\mu^{(s)}}{\sigma^{(s)}}, \tag{6}\] where \(\mu^{(s)}\) is the average signal value and \(\sigma^{(s)}\) is the standard deviation in data set \(s\). In this way, variations of the signal amplitudes between different subjects \(s\) are suppressed, but variations between the sleep stages of each individual subject are retained. Pooling now over all subjects \(s\), we create a unified list of normalized **signal vectors**\(U_{k}(t)\). Each of them corresponds to one of the global epochs \(k\in\{1,\ldots,70174\}\), contains time steps \(t\in\{1,\ldots,7680\}\), and has a known sleep-stage label \(L_{k}\). Next, we convert the time-domain signal vectors \(U_{k}(t)\) to the frequency domain by Fast Fourier Transformation (FFT), yielding 3840 complex Fourier amplitudes \(\hat{A}_{k}(f)\) for each epoch \(k\). Discarding the phase information, we retain only the real-valued modulus, which is taken to the power of \(\gamma\) (called the scaling exponent) in order to enhance cluster separation later on. This produces a total number of 70174 **spectral vectors**, defined as \(V_{k}(f)=\left|\hat{A}_{k}(f)\right|^{\gamma}\), representing the scaled momentary frequency spectrum during the global epoch \(k\). Since our EEG device produces a strong drop of all frequency components above about 35 Hz, we keep only the entries below this cutoff, so that \(f\in\{1,\ldots,1050\}\). Finally, we perform a Principal Component Analysis (PCA) on the list of spectral vectors \(V_{k}(f)\), from which we keep only the first three components \(C_{0}\), \(C_{1}\) and \(C_{2}\). By projecting all spectral vectors into this three-dimensional subspace, we obtain 70174 **compressed vectors**\(W_{k}(C)\), each containing the 'most essential information' about the momentary frequency spectrum in a given epoch. Each of the 70174 data vectors (either the signal, spectral, or compressed vectors) can be considered as a point in a vector space. In the following, we are interested in the distribution of these points, and we analyze how well they cluster with respect to the five human-defined sleep stages. ### Sleep depth variable \(C_{1}(k)\) It turns out that the best separation of sleep stages is obtained with the first PCA component \(C_{1}=\text{PCA}_{1}\) and when using the scaling exponent \(\gamma=0.5\) (compare Results section). Since the temporal development of \(C_{1}\) over subsequent sleep epochs \(k\) closely resembles the hypnogram, it can be interpreted as a 'sleep depth' variable. Summing up, it is computed from the temporal EEG signal \(U_{k}(t)\) in epoch \(k\) in the following way: \[C_{1}(k)=\text{PCA}_{1}\left\{\left|\,\text{FFT}\left\{U_{k}(t)\right\}\right|^{1/2}\right\}. \tag{7}\] ### Sleep depth \(C_{1}(k)\) from different EEG devices For testing the robustness of \(C_{1}(k)\), we use a separate overnight data set from a different human subject, recorded simultaneously with two EEG devices in a sleep laboratory of the Paracelsus Medical University, Nurnberg. The first ('clinical') device has 3 channels and was set to a sampling frequency of 128 Hz. The second ('research') device has 64 channels and was set to a larger sampling frequency, but was down-sampled to 128 Hz after the measurement. In the following, we use only the signals of channel \(F4\) from both devices. As before, the EEG signals are split into subsequent 30-second epochs \(k\), yielding the signal vectors \(U_{k}(t)\). 
The latter are Fourier transformed, keeping only the lowest 1000 frequency components, and then scaled with the exponent \(\gamma=1/2\), yielding the spectral vectors \[V_{k}(f)=\left|\,\text{FFT}\left\{U_{k}(t)\right\}\right|^{1/2}. \tag{8}\] By averaging these spectral vectors over all epochs \(k\), we obtain two overall frequency spectra, \(\hat{V}^{(cli)}(f)\) and \(\hat{V}^{(res)}(f)\), one for each device (see Fig. 7(a)). As they are too different for a direct comparison, we compute a frequency-dependent filter function as the ratio between the clinical and research frequency spectra (see Fig. 7(b)) \[A(f)=\frac{\hat{V}^{(cli)}(f)}{\hat{V}^{(res)}(f)}. \tag{9}\] We now multiply each of the epoch-specific spectral vectors of the research device with this filter function \[V_{k}^{(res)}(f)\;\longrightarrow\;V_{k}^{(res)}(f)\;A(f). \tag{10}\] After this filtering, the overall (epoch-averaged) frequency spectra \(\hat{V}^{(cli)}(f)\) and \(\hat{V}^{(res)}(f)\) of the two devices become identical (see Fig. 7(c), where a small difference has been artificially introduced for better visibility). Next, we fit a PCA model to the spectral vectors \(V_{k}^{(cli)}(f)\) of the clinical device. This single model is then used, for both devices, to compress the 1000-dimensional spectral vectors \(V_{k}(f)\) down to two-dimensional vectors \(W_{k}(C)=(C_{0}(k),C_{1}(k))\). We further consider only the PCA component \(C_{1}(k)\), because it can be interpreted as a variable for sleep depth (see Fig. 7(d,e)). ## Additional Information ### Author contributions CM conceived the study, implemented the methods, evaluated the data, and wrote the paper. PK co-designed the study, discussed the results and wrote the paper. AS, HS and KT discussed the results. MT provided data. ### Funding This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): grant SCHU 1272/16-1 (project number 455908056) to HS, grant TR 1793/2-1 (project number 455908056) to MT, grant SCHI 1482/3-1 (project number 451810794) to AS, and grants KR 5148/2-1 (project number 436456810), KR 5148/3-1 (project number 510395418) and GRK 2839 (project number 468527017) to PK. ### Competing interests statement The authors declare no competing interests. ### Data availability statement The complete data and analysis programs will be made available upon reasonable request. ### Ethical approval and informed consent The main part of the study (68 three-channel data sets) was conducted in the Department of Otorhinolaryngology, Head and Neck Surgery, of the Friedrich-Alexander University Erlangen-Nurnberg (FAU), following approval by the local Ethics Committee (323-16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory polysomnography (PSG). The comparison between different EEG machines (64-channel and 3-channel device) was conducted in the Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University, Nurnberg, Germany, following approval by the local Ethics Committee (103-20 B). Written informed consent was obtained from the participants before the cardiorespiratory polysomnography (PSG). ### Third party rights All material used in the paper is the intellectual property of the authors.
2307.13968
Rotating traversable wormhole geometries in the presence of three-form fields
In this work, we study the rotating wormhole geometries supported by a three-form field. We demonstrate for particular choices of parameters that it is possible for the matter fields threading the wormhole to satisfy the null and weak energy conditions throughout the spacetime, when the three-form field is present. In this case, the form field is interpreted as supporting the wormhole and all the exoticity is confined to it. Thus, the three-form curvature terms, which may be interpreted as a gravitational fluid, sustain these wormhole geometries. Additionally, we also address the ergoregion of the solutions.
Takol Tangphati, Butsayapat Chaihao, Daris Samart, Phongpichit Channuie, Davood Momeni
2023-07-26T05:58:45Z
http://arxiv.org/abs/2307.13968v1
# Rotating traversable wormhole geometries in the presence of three-form fields ###### Abstract In this work, we study the rotating wormhole geometries supported by a three-form field. We demonstrate for particular choices of parameters that it is possible for the matter fields threading the wormhole to satisfy the null and weak energy conditions throughout the spacetime, when the three-form field is present. In this case, the form field is interpreted as supporting the wormhole and all the exoticity is confined to it. Thus, the three-form curvature terms, which may be interpreted as a gravitational fluid, sustain these wormhole geometries. Additionally, we also address the ergoregion of the solutions. ## I Introduction General Relativity (GR) permits the existence of traversable Lorentzian wormholes, which were first proposed by Ellis [1; 2] and Bronnikov [3]. Basically, traversable Lorentzian wormholes necessitate the existence of an exotic matter field, which is coupled to gravity. This field features a kinetic term with the reverse sign, resulting in an energy-momentum tensor that violates the null energy condition [4; 5]. In recent times, researchers have explored the inclusion of phantom fields in cosmology due to their potential to drive the accelerated expansion of the Universe [6]. However, alternative scenarios involving gravity theories with higher curvature terms have been considered, which allow for the construction of wormholes without the need for the exotic field, e.g., [6; 7; 8]. Several types of exotic matter have been investigated in the context of traversable wormholes. One approach involves utilizing modified theories of gravity to create effective exotic fluids that can support the wormhole's throat. More specifically, Casimir energy has been considered as a potential source to generate a traversable wormhole [9]. It serves to prove the existence of negative energy, which can be produced in the laboratory. Its extension has recently been investigated by many authors, e.g., [10; 11; 12; 13]. From an observational astrophysical perspective, efforts have been made to search for wormholes [14; 15; 16]. These enigmatic structures have been investigated as potential gravitational lenses [17; 18], with particular attention given to studying their Einstein rings [19] and shadows [20; 21]. Most studies thus far have focused on static wormholes, although astrophysical objects typically exhibit rotation. Hence, understanding the characteristics of rotating wormholes is of great interest. Additionally, it is worth noting that the static Ellis wormholes in GR are known to be unstable [22; 23; 24; 25], and the introduction of rotation might offer a possibility of stabilizing them [26; 27]. Although many aspects of these rotating wormholes have already been examined, one crucial characteristic that remains to be investigated is their stability. In this work, we consider the three-form field to be responsible for supporting the rotating wormhole geometries. It was demonstrated that all the exoticity is confined to it [38]. Moreover, three-form fields [28; 29] are widely used in the literature and seem to present viable solutions to cosmological scenarios, e.g., [30; 31; 32; 33; 34; 35; 36; 37]. As mentioned in Ref. [38], the three-form curvature terms, which may be interpreted as a gravitational fluid, sustain these wormhole geometries. 
Here the authors are essentially interested in finding wormhole geometries supported by three-forms, where the matter threading the wormhole satisfies the energy conditions. Notice that a very recent study on the deflection angle of light by traversable wormholes supported by three-form fields was carried out in Ref. [39]. The plan of the work is structured as follows: In Sec. II, we review the rotating traversable wormhole and its proper metric tensor form. We then introduce the three-form field to sustain rotating wormhole geometries. The properties of the wormhole, i.e., the flaring-out condition and asymptotic flatness, are investigated. Additionally, the gravitational and three-form field equations for the rotating traversable wormhole are presented. In Sec. III, we present the arbitrary functions for the wormhole construction. The null and weak energy conditions are studied. Since the traversable wormhole rotates about the axial axis, an ergoregion appears above a certain rotation speed. We also discuss this feature. In Sec. IV, we conclude our findings. ## II Rotating metric tensor & wormholes supported by three-form field In this section, we consider the spacetime describing a rotating object. As mentioned in [40], the properties of the metric tensor are stationary and axially symmetric. This also means the spacetime has a time-like Killing vector field \(\zeta^{a}\equiv(\partial/\partial t)^{a}\), which generates invariance under time translation, and a space-like Killing vector field \(\psi^{a}\equiv(\partial/\partial\varphi)^{a}\), which generates invariance under rotation about the azimuthal axis. According to Refs. [41; 42; 43], the most general stationary and axisymmetric metric takes the form \[ds^{2}=g_{tt}dt^{2}+2g_{t\psi}dtd\psi+g_{\psi\psi}d\psi^{2}+g_{ij}dx^{i}dx^{j}, \tag{1}\] where \(i,j\) are the indices for the rest of the space-like coordinates. A time-dependent conformal factor in the Morris-Thorne wormhole metric tensor might be applied to prevent the violation of the energy conditions; however, any observer travelling through such a wormhole would experience the radius of the wormhole increasing in all directions, which would not constitute a practical traversable wormhole [40; 44; 45; 46]. In this work, we first consider the wormhole metric which describes a rotating wormhole spacetime in the spherical polar co-ordinates given by [40] \[ds^{2} = -e^{2\Phi(r)}dt^{2}+\frac{dr^{2}}{1-\frac{b(r)}{r}}+r^{2}K(r)^{2 }\left[d\theta^{2}+\sin^{2}\theta\left(d\varphi-\omega(r)dt\right)^{2}\right], \tag{2}\] where \(\Phi(r)\) is referred to as the redshift function, which is associated with gravitational redshift. It is assumed to have a finite value across all points to prevent the formation of event horizons. This condition allows the wormhole to be traversable, according to Ref. [47]. Here, \(b(r)\) denotes the shape function, as it depicts the form of the wormhole. The radial coordinate \(r\) runs from a minimum value \(r_{0}\), corresponding to the throat of the wormhole, where \(b(r_{0})=r_{0}\). A key ingredient of wormholes is the so-called flaring-out condition [4], given by \(b(r)-b^{\prime}(r)r\geq 0\) in the vicinity of the throat, where a prime denotes a derivative with respect to the radial coordinate \(r\). Additionally, \(b(r)/r\to 0\) as \(r\rightarrow\infty\). Note that the additional condition \(b(r)/r<1\) is also imposed. \(K(r)\) is a positive and non-decreasing function of \(r\). 
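To make these defining conditions concrete, the following sympy sketch checks the throat condition, the flaring-out condition, and asymptotic flatness for the power-law shape function \(b(r)=r_{0}(r_{0}/r)^{\beta}\) that will be adopted in Eq. (26) below (an illustrative check, not part of the derivation):

```python
import sympy as sp

r, r0, beta = sp.symbols("r r_0 beta", positive=True)

# Power-law shape function (the form adopted later in Eq. (26)).
b = r0 * (r0 / r) ** beta

# Throat condition: b(r_0) = r_0.
assert sp.simplify(b.subs(r, r0) - r0) == 0

# Flaring-out condition: b - b' r = (1 + beta) b(r) >= 0 for beta > -1.
flare = sp.simplify(b - sp.diff(b, r) * r)
print(flare)                       # -> r_0*(beta + 1)*(r_0/r)**beta

# Asymptotic flatness: b(r)/r -> 0 as r -> infinity.
print(sp.limit(b / r, r, sp.oo))   # -> 0
```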
It is worth mentioning that the above metric was first used by Hartle [48; 49] in the study of relativistic rotating stars. Asymptotic flatness is still required for the metric tensor at \(r\rightarrow\infty\), where \[\Phi(r)\to 0,\quad K(r)\to 1,\quad\omega(r)\to 0. \tag{3}\] We choose the form of \(\omega(r)\) to satisfy asymptotic flatness [40]: \[\omega(r)=\frac{2a}{r^{3}}+\mathcal{O}\left(\frac{1}{r^{4}}\right)\,, \tag{4}\] where \(a\) is the total angular momentum. The action of the 3-form field model to construct the wormhole reads [38] \[\mathcal{S} = \int d^{4}x\sqrt{-g}\left(\frac{R}{2\kappa^{2}}+\mathcal{L}_{A} \right)+\mathcal{S}_{m}, \tag{5}\] where \(g\) is the determinant of the metric tensor, \(\kappa^{2}\equiv 8\pi G\), \(R\) is the scalar curvature, \(\mathcal{S}_{m}\) is the action of the ordinary mass and \(\mathcal{L}_{A}\) is the Lagrangian density of the 3-form field described by \[\mathcal{L}_{A}=-\frac{1}{48}F^{2}+V(A^{2}), \tag{6}\] where \(F^{2}=F^{\mu\nu}F_{\mu\nu}\) is the contraction of all indices of the 4-form strength tensor (\(F=dA\)) \[F_{\alpha\beta\gamma\delta}=\nabla_{\alpha}A_{\beta\gamma\delta}-\nabla_{ \beta}A_{\gamma\delta\alpha}+\nabla_{\gamma}A_{\delta\alpha\beta}-\nabla_{ \delta}A_{\alpha\beta\gamma}. \tag{7}\] Varying the action in Eq. (5) with respect to \(A_{\alpha\beta\gamma}\), we obtain the field equation as \[\nabla_{\alpha}F^{\alpha\beta\gamma\delta}=12\frac{\partial V}{\partial A^{2}}A^{ \beta\gamma\delta}\,. \tag{8}\] Practically, we are able to write the 3-form field \(A_{\alpha\beta\gamma}\) in terms of the 1-form field (vector) \(B^{\delta}\) via \[B^{\delta}=\frac{1}{3!}\frac{1}{\sqrt{-g}}\,\epsilon^{\delta\alpha\beta\gamma}A _{\alpha\beta\gamma}, \tag{9}\] where we have considered a 4-dimensional spacetime and a 3-form field, and set \(n=4\) and \(p=3\) for this work. We can invert Eq. (9) to write the 3-form field in terms of its dual as shown \[A_{\alpha\beta\gamma}=\sqrt{-g}\epsilon_{\alpha\beta\gamma\delta}B^{\delta}. \tag{10}\] We choose the components of the vector \(B^{\delta}\)[33] \[B^{\delta}=\frac{\zeta(r)}{\sqrt{2}}\left(0,\left(1-\frac{b(r)}{r}\right)^{1/ 2},0,\frac{1}{r\sin\theta}\right)^{T}, \tag{11}\] where \(\zeta(r)\) is an auxiliary function of the 3-form field in the metric tensor Eq. (2). We express the non-trivial components of the 3-form field \[A_{t\theta\phi}=A_{\phi t\theta}=A_{\theta\phi t}=-A_{t\phi\theta}=-A_{\theta t \phi}=-A_{\phi\theta t}=e^{\Phi(r)}r^{2}\sin\theta\,\zeta(r). \tag{12}\] The above relations allow us to express \(A^{2}\) of the 3-form fields as \[A^{2}=A_{\alpha\beta\gamma}A^{\alpha\beta\gamma}=-6\zeta^{2}(r). \tag{13}\] It is noteworthy that, regardless of the angular component in the dual vector \(B^{\delta}\) in Eq. (11), there is no effect of the angular part of the metric tensor on the square of the 3-form fields in Eq. (13). Now we consider the kinetic term of the Lagrangian density of the 3-form field \(\mathbf{K}(r)\) \[\mathbf{K}(r)\equiv-\frac{1}{48}F^{2}=-\frac{1}{48}F^{\alpha\beta\gamma\delta }F_{\alpha\beta\gamma\delta}=\frac{1}{2}\left(1-\frac{b(r)}{r}\right)\left[ \zeta(r)\left(\Phi^{\prime}(r)+\frac{2}{r}\right)+\zeta^{\prime}(r)\right]^{2}. \tag{14}\] Owing to the fact that the angular part does not appear in the square of the 3-form field, the kinetic term of the 3-form field still has no angular part at all (see Ref. [35]). Also note that the kinetic term will diminish at the throat of the wormhole \(r=r_{0}=b(r_{0})\). 
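As a quick symbolic check of this last remark (a sketch using generic placeholder functions for \(b\), \(\Phi\) and \(\zeta\)), the kinetic term of Eq. (14) carries the overall factor \(1-b(r)/r\) and therefore vanishes at the throat, where \(b(r_{0})=r_{0}\):

```python
import sympy as sp

r, r0 = sp.symbols("r r_0", positive=True)
b = sp.Function("b")      # shape function, with b(r_0) = r_0 at the throat
Phi = sp.Function("Phi")  # redshift function
zeta = sp.Function("zeta")

# Kinetic term K(r) of the three-form field, Eq. (14).
K = sp.Rational(1, 2) * (1 - b(r) / r) * (
        zeta(r) * (sp.diff(Phi(r), r) + 2 / r) + sp.diff(zeta(r), r)) ** 2

# At the throat, b(r_0) = r_0, so the prefactor (1 - b/r) vanishes.
K_throat = K.subs(r, r0).subs(b(r0), r0)
print(sp.simplify(K_throat))  # -> 0
```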
Now we vary the action in Eq. (5) with respect to the metric tensor \(g^{\mu\nu}\) and obtain the field equations \[G_{\mu\nu} = 8\pi T^{\rm(eff)}_{\mu\nu}=8\pi\left(T^{\rm(A)}_{\mu\nu}+T^{\rm( m)}_{\mu\nu}\right), \tag{15}\] where \(T^{\rm(A)}_{\mu\nu}\) is the energy momentum tensor of the 3-form field and \(T^{\rm(m)}_{\mu\nu}\) is the energy momentum tensor of matter. The energy momentum tensor of the 3-form field can be expressed to obtain \[T^{\rm(A)\mu}{}_{\nu}=\frac{1}{6}F^{\mu\alpha\beta\gamma}F_{\nu\alpha\beta\gamma }+6\frac{\partial V}{\partial A^{2}}A^{\mu\alpha\beta}A_{\nu\alpha\beta}+ \mathcal{L}_{A}\delta^{\mu}{}_{\nu}. \tag{16}\] The energy momentum tensor of the 3-form field in the rotating wormhole metric has the non-trivial components \[T^{\rm(A)t}{}_{t} = -\rho_{A}=-V+\frac{\partial V}{\partial\zeta}\zeta-\mathbf{K}\,, \tag{17}\] \[T^{\rm(A)r}{}_{r} = p_{r,A}=-V+\mathbf{K}\,,\] (18) \[T^{\rm(A)\theta}{}_{\theta} = p_{\theta,A}=-V+\frac{\partial V}{\partial\zeta}\zeta-\mathbf{K}\,,\] (19) \[T^{\rm(A)\phi}{}_{\phi} = p_{\phi,A}=-V+\frac{\partial V}{\partial\zeta}\zeta-\mathbf{K}\,. \tag{20}\] The gravitational field equations read \[\rho_{\rm eff} = \rho_{m}+\rho_{A} \tag{21}\] \[= \frac{e^{-2\Phi}}{4r^{2}}\left[-b^{\prime}\left(4e^{2\Phi}+r^{3} \sin^{2}\theta\,\omega\omega^{\prime}\right)+r^{2}\sin^{2}\theta\left(b\omega \omega^{\prime}+(r-b)\left(r\omega^{\prime 2}+\omega\left(8-2r\Phi^{\prime} \right)\omega^{\prime}+2r\omega^{\prime\prime}\right)\right)\right],\] \[p_{r,\rm eff} = p_{r,\rm m}+p_{r,\rm A}\] (22) \[= -\frac{b}{r^{3}}+\frac{1}{4r}\left(1-\frac{b}{r}\right)\left(8 \Phi^{\prime}+e^{-2\Phi}r^{3}\sin^{2}\theta\omega^{\prime 2}\right),\] \[p_{\theta,\rm eff} = p_{\theta,\rm m}+p_{\theta,\rm A}\] (23) \[p_{\phi,\rm eff} = p_{\phi,\rm m}+p_{\phi,\rm A}\] (24) \[= \left(1-\frac{b}{r}\right)\left(\frac{\Phi^{\prime}}{r}+\Phi^{ \prime 2}+\Phi^{\prime\prime}+\left(\frac{b-b^{\prime}r}{2r^{2}(r-b)}\right) +\left(\frac{b-b^{\prime}r}{2r(r-b)}\right)\Phi^{\prime}\right)+\Delta,\] where \[\Delta = \frac{1}{4}\left(e^{-2\Phi}\sin^{2}\theta\left[\omega^{\prime} \left(\omega\left(7b+r(b^{\prime}-8)+2r(r-b)\Phi^{\prime}\right)+3(b-r)r\omega ^{\prime}\right)+2(b-r)r\omega\omega^{\prime\prime}\right]\right)\,.\] The field equation of the three-form field in Eq. (8) in the rotating wormhole metric tensor reads \[2r^{2}\frac{\partial V}{\partial\zeta}+\zeta^{\prime}\left( \frac{4r^{2}-3rb-r^{2}b^{\prime}}{r}+2r(r-b)\Phi^{\prime}\right)+2r(r-b)\zeta ^{\prime\prime} \tag{25}\] \[+ \frac{\zeta}{r}\left(-4r+6b-2rb^{\prime}+r\Phi^{\prime}(b-rb^{\prime})+2r^{2}(r-b)\Phi^{\prime\prime}\right)=0.\] The above relation imposes an additional constraint on the unknown functions and is significantly useful in obtaining explicit wormhole solutions. ## III Energy conditions and ergoregions In order to find wormhole solutions, we will specify the redshift and shape functions, and further assume a form for \(\zeta\). In this work, we follow Ref. [38]. Additionally, in Refs. [50; 51; 52], the energy momentum tensor of ordinary matter satisfies the energy conditions, whereas the 3-form field involves the violation of the NEC and WEC. We need to solve the five independent equations, which consist of the four gravitational field equations Eqs. (21)-(24) and the equation of motion for \(\zeta\), i.e., Eq. (25). 
Following the notation of [53], we consider the metric functions of the form \[b(r) = r_{0}\left(\frac{r_{0}}{r}\right)^{\beta}, \tag{26}\] \[\Phi(r) = \Phi_{0}\left(\frac{r_{0}}{r}\right)^{\alpha}, \tag{27}\] where \(\beta>-1\), \(\alpha>0\), and for the \(\zeta\) function with \(\gamma>0\): \[\zeta(r)=\zeta_{0}\left(\frac{r_{0}}{r}\right)^{\gamma}\,. \tag{28}\] Note that Eq. (28) takes the value \(\zeta=\zeta_{0}\) at the throat and tends to zero at spatial infinity. The analytic solution for \(V\) takes the form \[V(r)=\frac{\gamma\zeta_{0}^{2}}{2r^{3}}\left(\left(r_{0}\left(\frac{r_{0}}{r} \right)^{\beta}-r\right)\left(\gamma-2\right)+\left(\frac{r_{0}}{r}\right)^{ \alpha}\Phi_{0}\left(-\frac{2r\alpha(1+\alpha+\gamma)}{2+\alpha+2\gamma} \right)+\frac{r_{0}\left(\frac{r_{0}}{r}\right)\alpha(3+2\alpha+\beta+2\gamma )}{3+\alpha+\beta+2\gamma}\right)+c_{1}, \tag{29}\] where \(c_{1}\) is the integration constant. Even though the model of traversable wormholes is one of various solutions of Einstein's general relativity, it suffers from the violation of the energy conditions, i.e., the null and weak energy conditions [54; 12]. The matter that can distort the spacetime to construct traversable wormholes is then called exotic matter. In this work, we focus on synthesizing the rotating traversable wormhole with the 3-form field without invoking exotic matter. The null energy condition (NEC) states that \(T_{\mu\nu}k^{\mu}k^{\nu}\geq 0\) for every null vector field \(\vec{k}\). The weak energy condition (WEC) requires that the matter density measured by any observer be non-negative, \(T_{\mu\nu}U^{\mu}U^{\nu}\geq 0\), where \(U^{\mu}\) is any time-like vector. In Fig. 1, we demonstrate specific solutions in which the matter component satisfies both the NEC and WEC. This indicates that the presence of a three-form field is essential for maintaining the wormhole, and all the exoticity of the object is confined to the field itself, while the matter sources thread the wormhole without violating the NEC and WEC. If the speed of the wormhole's rotation is high enough, \(g_{tt}\) becomes positive in a certain area beyond the throat, suggesting the existence of an ergoregion where particles can no longer remain stationary with respect to infinity. The ergoregion of the solutions is defined as the region where the time-time component of the metric is positive, \(g_{tt}>0\). Its boundary is referred to as the ergosurface, where \(g_{tt}=0\). In our work, an ergoregion of a rotating wormhole can be determined when \[g_{tt}=-e^{2\Phi(r)}+r^{2}K(r)^{2}\omega(r)^{2}\sin^{2}\theta\geq 0\,, \tag{30}\] and the ergosurface by \(g_{tt}=0\)[55; 56]. The ergosurface for the metric is given by \[g_{tt}=-e^{2\Phi(r)}+r^{2}K(r)^{2}\omega(r)^{2}\sin^{2}\theta=0\,. \tag{31}\] Since the ergoregion does not extend up to the poles \(\theta=0\) and \(\theta=\pi\), there exists a critical angle \(\theta_{c}\), where the ergosphere exists in between \(\theta_{c}\) and \(\pi-\theta_{c}\), for all \(0<\theta_{c}\leq\pi/2\). This critical angle can be determined at the throat of the wormhole using Eq. (31) as \[\sin\theta_{c}=\Big{|}\frac{e^{\Phi_{0}}}{r_{0}\,K_{0}\omega_{0}}\Big{|}\,. \tag{32}\] Moreover, the presence of the ergosphere relies on the spin parameter surpassing a crucial threshold \(a_{c}\), which corresponds to \(\sin\theta_{c}=1\) or \(\omega_{c}=2a_{c}/r_{0}^{3}=e^{\Phi_{0}}/r_{0}K_{0}\). 
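A short numerical sketch (using only quantities defined above; the parameter values anticipate the example in the following paragraph) evaluates the critical spin from \(\omega_{c}=2a_{c}/r_{0}^{3}=e^{\Phi_{0}}/r_{0}K_{0}\), i.e. \(a_{c}=r_{0}^{2}e^{\Phi_{0}}/(2K_{0})\), together with the critical angle of Eq. (32):

```python
import math

# Throat parameters (the example values used in the following paragraph).
r0, K0, Phi0 = 1.0, 1.0, -0.6

# Critical spin: omega_c = 2 a_c / r0^3 = e^{Phi_0} / (r0 K0)
#            =>  a_c = r0^2 e^{Phi_0} / (2 K0).
a_c = r0**2 * math.exp(Phi0) / (2.0 * K0)
print(f"a_c = {a_c:.6f}")            # -> 0.274406

# Critical angle at the throat, Eq. (32), for a given spin a > a_c.
def theta_c(a: float) -> float:
    omega0 = 2.0 * a / r0**3         # frame dragging at the throat, Eq. (4)
    s = abs(math.exp(Phi0) / (r0 * K0 * omega0))
    return math.asin(min(s, 1.0))    # ergoregion lies between theta_c and pi - theta_c

print(math.degrees(theta_c(0.5)))    # ergoregion opens away from the poles
```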
When considering the wormhole metric (2) with \(r_{0}=1.0\), \(K_{0}=1\) and \(\Phi_{0}=-0.6\), the critical value is \(a_{c}=0.274406\); see also Lorentzian traversable wormholes [57]. Fig. 2 illustrates the ergosphere's behavior in the equatorial plane by varying the angular momentum \(a\): the ergoregion increases with increasing \(a\). The ergosphere for these values is displayed in Fig. 3. Rotating objects, e.g., black holes, are known to spend their rotational energy on the amplification of incident perturbation waves. This phenomenon also occurs for various rotating compact bodies, such as conducting cylinders, and is called superradiance [58; 59; 60]. When considering rotating traversable wormholes, one could probably expect the same superradiance to take place. However, it was shown that rotating axially symmetric traversable wormholes do not allow for superradiance. The situation is similar to that of Teo's rotating wormhole example [40]. However, along the line of the present work, a phenomenological study of superradiance emerging from rotating traversable wormholes is still underway. ## IV Conclusions In this work, we have investigated the solutions of the rotating traversable wormhole interacting with the three-form field. The stationary and axisymmetric metric in spherical polar co-ordinates has been adopted in this work. We have demonstrated that asymptotic flatness and the flaring-out condition are satisfied. We obtained the field equations of the curved spacetime in the rotating traversable wormhole geometries, which extend those of the traditional static traversable wormholes [38]. We have considered the shape function and the redshift function for the traversable wormhole and an arbitrary function for the three-form field proposed by Ref. [38]. This allows us to obtain the numerical solutions. Our results showed that the energy conditions, namely the NEC and WEC, are satisfied. This is so since the three-form field behaves as a gravitational fluid that sustains the wormhole geometries. Furthermore, we have shown that, using particular choices of parameters, the existence of the ergoregion of the rotating traversable wormhole is possible. We have estimated the critical value of the angular momentum \(a_{c}\) for which the ergoregion can emerge. We found that the ergoregion of a rotating wormhole increases with increasing \(a\), implying that the emergence of the ergoregion of the wormhole strongly depends on the speed of the wormhole rotation. We have displayed the ergoregions of the rotating wormhole with the three-form field using the parameter set \(\Phi_{0}=-0.5\), \(\alpha=3.0\), and \(\beta=-0.7\) as an example. Note that all of these cases satisfy the NEC and WEC. Along the line of the present work, the study of the deflection angle of light by this class of traversable wormholes supported by the three-form fields is possible. Additionally, the photon geodesic motion under the effective potential of the rotating wormhole background is worth investigating. The radius of the photon sphere is a very useful observable used to analyze the geometrical structures of a wormhole. It is widely known that the appearance of a shadow is a phenomenon which is not restricted only to black hole spacetimes. Therefore, the shadow of this class of rotating traversable wormholes is also an interesting phenomenon, see e.g., [21]. We leave these interesting topics for our ongoing investigation.

Figure 2: The components of \(g_{tt}\) of the rotating wormhole with the 3-form field are presented with the variation of \(a\in[0.0,0.7]\). The diagram is split into two regions, \(r/r_{0}\geq 1\) (our universe) and \(r/r_{0}\leq-1\) (the other universe), where the non-existent region lies between \(-1<r/r_{0}<1\). Note that all cases in the left panel satisfy the NEC and WEC. These cause the ergoregion like the rotating black hole, while the slowly rotating wormholes in the right panel (\(a=0\) and \(a=0.1\)) do not cause the ergoregion. However, the WEC and NEC are not satisfied for such cases.

Figure 3: The ergoregions of the rotating wormhole with the three-form field are presented in the dashed color curves with the variation of \(a\in[0.3,0.8]\), using the parameter set \(\Phi_{0}=-0.5\), \(\alpha=3.0\), and \(\beta=-0.7\). All of these cases satisfy the NEC and WEC.

###### Acknowledgements.
T. Tangphati is financially supported by the Research and Innovation Institute of Excellence, Walailak University, Thailand, under contract No. WU66267. The work of P. Channuie is financially supported by Thailand NSRF via PMU-B under grant number PCB37G660013.
2310.10676
Application-layer Characterization and Traffic Analysis for Encrypted QUIC Transport Protocol
Quick UDP Internet Connection (QUIC) is an emerging end-to-end encrypted, transport-layer protocol, which has been increasingly adopted by popular web services to improve communication security and quality of experience (QoE) towards end-users. However, this tendency makes the traffic analysis more challenging, given the limited information in the QUIC packet header and full encryption on the payload. To address this challenge, a novel rule-based approach is proposed to estimate the application-level traffic attributes without decrypting QUIC packets. Based on the size, timing, and direction information, our proposed algorithm analyzes the associated network traffic to infer the identity of each HTTP request and response pair, as well as the multiplexing feature in each QUIC connection. The inferred HTTP attributes can be used to evaluate the QoE of application-layer services and identify the service categories for traffic classification in the encrypted QUIC connections.
Qianqian Zhang, Chi-Jiun Su
2023-10-10T20:09:46Z
http://arxiv.org/abs/2310.10676v1
# Application-layer Characterization and Traffic Analysis for Encrypted QUIC Transport Protocol ###### Abstract Quick UDP Internet Connection (QUIC) is an emerging end-to-end encrypted, transport-layer protocol, which has been increasingly adopted by popular web services to improve communication security and quality of experience (QoE) towards end-users. However, this tendency makes the traffic analysis more challenging, given the limited information in the QUIC packet header and full encryption on the payload. To address this challenge, a novel rule-based approach is proposed to estimate the application-level traffic attributes without decrypting QUIC packets. Based on the size, timing, and direction information, our proposed algorithm analyzes the associated network traffic to infer the identity of each HTTP request and response pair, as well as the multiplexing feature in each QUIC connection. The inferred HTTP attributes can be used to evaluate the QoE of application-layer services and identify the service categories for traffic classification in the encrypted QUIC connections. ## I Introduction Passive monitoring of network traffic is essential for Internet service providers (ISPs) and network operators to perform a wide range of network operations and management activities [1]. Given the monitored network status, ISPs can adjust the capacity planning and resource allocation to ensure a good quality of experience (QoE). Network monitoring also facilitates intrusion detection and expedites troubleshooting to guarantee stable service connectivity for the customers. Due to the lack of access to user applications, devices, or servers, passive monitoring is generally challenging. As concerns about privacy violations continue to grow, popular applications have started to adopt encrypted protocols. For example, most prominent web-based services apply hypertext transfer protocol secure (HTTPS) to protect the security of bi-directional communications between Internet users and servers. Consequently, encryption protects users' privacy on the one hand, but on the other hand it disables the current network management mechanisms for QoE monitoring and optimization. Among all current efforts to incorporate encryption, a new transport-layer protocol, called Quick UDP Internet Connections (QUIC), has emerged to improve communication security and QoE for end-users [2]. QUIC is a UDP-based, reliable, multiplexed, and fully-encrypted protocol. As a user-space transport, QUIC can be deployed as part of various applications and enables iterative changes for application updates. Compared with the Transmission Control Protocol (TCP), QUIC uses a cryptographic handshake that minimizes handshake latency, and it eliminates head-of-line blocking by using a lightweight data structure called streams, so that QUIC can multiplex multiple requests/responses over a single connection by providing each with its own stream ID; therefore, loss of a single packet blocks only streams with data in that packet, but not others in the same QUIC connection. HTTP-over-QUIC is standardized as HTTP/3 and has attracted wide interest from the industry [3]. The historical trend in [4] shows that over \(7\%\) of websites are already using QUIC, and QUIC usage is expected to grow in mobile networks and satellite communication systems. Compared with other encryption technologies, QUIC brings tougher challenges to passive traffic monitoring. 
For example, the TCP header provides useful information, including flags and sequence numbers, which enables ISPs to inspect the TCP communication status. However, the encryption applied to the QUIC headers leaves very limited information to infer connection states. Meanwhile, in satellite-based network systems, TCP traffic is usually optimized with Performance Enhancing Proxies (PEPs) [5]. However, QUIC's end-to-end encryption disables PEP optimizations, which results in under-performance compared with TCP PEP, even with QUIC's fast handshake. To address the aforementioned challenges, several recent works in [3] and [6, 7, 8, 9, 10, 11, 12] have studied passive monitoring of encrypted network traffic. The authors in [6] and [7] investigated HTTP request and response identification for application-layer characterization. However, both approaches only support the TCP protocol and cannot be easily extended to QUIC, due to the limited information in the QUIC transport header. Previous works in [8] and [9] focused on QUIC traffic analysis for website fingerprinting and traffic classification. However, both analyses relied on large-scale statistics of IP packets and failed to extract application-layer attributes. To infer application-level information, the authors in [3] and [10, 11, 12] studied network monitoring for HTTP-based encrypted traffic, including both TCP and QUIC. Although these works successfully modeled the application-layer QoE for video applications, their approaches cannot be applied to other types of web services, such as web browsing or bulk traffic. Therefore, the existing literature shows distinct limitations in estimating application-layer attributes from QUIC traffic. The main contribution of this work is, thus, a novel rule-based general-purpose framework to explore the application-level traffic attributes without decrypting QUIC headers or payloads, for various web services. Our key contributions include: * Based on the size, timing, and direction information visible in the encrypted QUIC packet, our proposed algorithm analyzes the associated network traffic to infer the attributes of each HTTP request and response pair, including the start and end time, size, request-response association, and multiplexing feature in each QUIC connection. Once HTTP multiplexing is detected, several requests will be matched as a group with their corresponding responses, to form a super HTTP request-response pair. * The proposed algorithm supports both online and offline estimations for HTTP request-response pairs over the QUIC protocol. In the online setting, the real-time traffic is processed by a three-module state machine to determine the instantaneous status of the HTTP request-response communication. In the offline setting, we consider all QUIC packets at the end of the connection, where the proposed approach first infers the packets corresponding to client requests, then identifies the server's responses, and finally pairs each request with its associated response, given the network-dependent constraints of inter-packet time and round-trip time (RTT). * The proposed algorithm can identify QUIC control messages versus HTTP request/response data packets. 
To avoid overestimation of HTTP request/response size, a dynamic threshold on the QUIC packet length is designed to filter out the acknowledgment packets, settings packets, and control information in the HTTP traffic when estimating the HTTP request/response data objects. Meanwhile, the proposed algorithm can handle special features in the QUIC protocol, such as 0-RTT requests. * The proposed algorithm can be applied to different applications, including video traffic, website browsing, interactive web traffic, such as user login authentication, and bulk traffic for file upload and download. We tested our algorithm under various network conditions, given different maximum transmission unit (MTU) sizes and RTTs, and the proposed approach gives highly accurate estimation results in both terrestrial and satellite network systems. The rest of this paper is organized as follows. Section II provides the system overview. The algorithm design, including the request estimation, response estimation, and request-response match models, is given in Section III. In Section IV, the performance evaluations are presented, and Section V discusses the limitations and future work. Finally, Section VI draws the conclusion. ## II System Architecture In this section, we first define the input and output of the traffic monitoring task, and provide a system overview of the QUIC characterization algorithm. As shown in Fig. 1, we consider a passive monitoring module implemented at a middlebox between the client and server. The middlebox should be able to perceive complete bi-directional traffic without any omission. For example, to observe the traffic of a user, the module can be placed at the user's network access point, while to study the traffic for a cluster of clients, the algorithm can be implemented at the network gateway. Relying on the discriminative attributes that are visible in the encrypted QUIC packets, we aim to identify each HTTP pair or HTTP object, consisting of an HTTP request and its corresponding response, which contains key information for the passive monitor to infer the application-layer characterization. ### _Input features_ Useful information in the encrypted QUIC packets mainly comes from the network layer and transport layer, including the source and destination IP addresses, source and destination port numbers, packet length, and the limited header information that is still visible in the encrypted packet. Meanwhile, packet arrival time and packet position in the sequence of a QUIC flow can also provide essential information for our application-layer characterization. In order to support a real-time estimation, the input features require only the information of individual QUIC packets. In our proposed approach, no window-based feature or large-scale statistical analysis is required. If needed, window-based features can be calculated in the post-processing stage using our estimation results. In the network trace, each QUIC connection can be identified by a 6-tuple, i.e., source IP, destination IP, source port, destination port, protocol, and QUIC connection ID. 
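As an illustrative sketch (the type and field names are ours, not part of the paper), demultiplexing the monitored trace into per-connection packet sequences keyed by this 6-tuple might look as follows:

```python
from collections import defaultdict
from typing import NamedTuple

class FlowKey(NamedTuple):
    """6-tuple identifying a QUIC connection in the trace."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    quic_conn_id: bytes

class Packet(NamedTuple):
    time: float       # arrival timestamp (s)
    length: int       # overall QUIC packet size (bytes)
    upstream: bool    # True: client -> server
    long_header: bool # most significant bit of the QUIC packet

# One chronologically ordered packet list per connection.
flows: dict[FlowKey, list[Packet]] = defaultdict(list)

def on_packet(key: FlowKey, pkt: Packet) -> None:
    flows[key].append(pkt)  # downstream estimation modules consume this sequence
```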
Within a connection, a sequence of bi-directional QUIC packets with their timing and length information can be observed, and the network operator can extract a small set of features from the network and transport layer headers as the input for the application characterization algorithm, which includes the **QUIC header type**, **QUIC packet length**, **packet arrival time**, and **packet order and position**, for upstream and downstream traffic separately. The definition of each input and the reason for choosing these features are given as follows. #### II-A1 QUIC header type A QUIC packet has either a long or a short header. The most significant bit of a QUIC packet is the Header Form bit, which is set to \(1\) for long headers, and \(0\) for short headers. This header type is always available in the encrypted QUIC packets and stays invariant across QUIC versions [13]. The long header is used in the handshake stage to expose necessary information for version negotiation and establishment of 1-RTT keys between the two ends.

Fig. 1: Passive monitoring of the bi-directional QUIC packets at a middlebox to infer the application-layer metrics.

Therefore, the header type provides key information on whether the handshake is finished or not. Except for 0-RTT resumption, most of the HTTP requests and responses occur only after the handshake is finished. Thus, once a QUIC packet with a short header is observed in a newly-built QUIC connection, HTTP request and response packets are expected to arrive soon. #### II-A2 QUIC packet length The QUIC packet size can be used to infer whether a QUIC packet contains HTTP data content. First, let us define what an HTTP data packet is. Typically, HTTP communications follow a pattern of client's request first, and then server's response. Thus, the QUIC protocol for HTTP web applications always uses **client-initiated bidirectional streams**[14], with a stream ID that is a multiple of four in decimal, i.e., 0, 4, 8, etc. The corresponding response will be transmitted over the stream with the same ID as its request. In this work, we call the client-initiated bidirectional stream a data stream, and all other kinds non-data streams. Although the stream ID could provide accurate information to identify HTTP requests and responses, this information is encrypted and invisible at the middlebox. Therefore, to distinguish the QUIC packets with data content, we must rely on explicitly visible information, such as the QUIC packet length. After the handshake stage, a QUIC packet with HTTP data content usually has a larger length, compared with non-data packets. For example, an acknowledgment (ACK) is a common type of QUIC frame which usually has a much smaller size. Thus, by setting a proper threshold on the QUIC packet length, it is possible to filter out non-data packets. Considering a QUIC packet from the server to the client, if its length is smaller than the threshold \(L_{\text{resp}}\in\mathbb{Z}^{+}\), then we consider this packet a non-HTTP-response packet, and exclude it from forming the input features of the estimation algorithm. A typical value of the response length threshold is \(L_{\text{resp}}=35\) bytes. Note that, throughout this paper, the packet length specifically denotes the overall size of a QUIC packet in bytes. #### II-A3 Packet arrival time The packet arrival time can be used to tell whether two packets belong to the same HTTP object, and whether a QUIC connection is still active. First, an HTTP response usually consists of multiple packets. 
When two sequential packets are transmitted from the server to the client, the middlebox needs to tell whether they belong to the same response or not. Thus, a threshold is applied to their inter-arrival time. For example, if the inter-arrival time of two response packets is less than a threshold \(\Delta T_{\text{resp}}\), then these two packets belong to the same HTTP response; otherwise, they are associated with two different responses. A typical value for \(\Delta T_{\text{resp}}\) is one RTT, and a similar threshold \(\Delta T_{\text{req}}\) is applied to consolidate or separate request packets. Second, given a detected request and an estimated response, we need to know whether they form an associated HTTP request-response pair. Here, we propose a necessary requirement: the time difference between the first response packet and the first request packet must be greater than one RTT, but smaller than \(20\) RTTs. If this requirement is not satisfied, then the request and response are not an associated HTTP pair. Instead, they should belong to different HTTP request-response pairs. Furthermore, in a QUIC connection, if there is no packet transmission in either direction for more than \(20\) RTTs, we consider the QUIC connection as idle. In the idle state, before any new request is observed, all response packets from the server to the client will be discarded. #### II-A4 Packet order and position The packets' positions in the sequence of a QUIC flow can provide guidelines for a middlebox to form HTTP request-response pairs, as shown in Fig. 2. For example, an HTTP response usually consists of multiple QUIC packets with a noticeable pattern. Based on our observation, the first response packet usually has a length that is slightly smaller than the MTU size; then, the response is followed by a sequence of multiple MTU-sized packets; finally, the response ends with a packet of much smaller size. The cause for this special pattern is that the response data content can have a much larger size than one MTU's payload, so the content will be separated into multiple data frames and transmitted via multiple QUIC packets. The slightly smaller length of the first response packet is caused by the combination of a control frame and a data frame into one UDP payload, while the last packet contains the left-over response content in this transmission, which is usually much smaller than one MTU. Note that this pattern is an empirical summary based on our observation and experience, which may not always hold. Later, we will apply this rule as a baseline to design the estimation algorithm, with further details to cope with exceptions, such as when the first response packet has an MTU length, or the last packet has a larger size. Therefore, based on the pattern, we can consolidate responses from a sequence of individual response packets, together with the requirement of the inter-arrival time threshold, to form an HTTP response object. A similar pattern can be observed in the HTTP request as well. However, since most HTTP requests have very limited content, whose size is smaller than one MTU, most HTTP requests consist of a single packet with a length smaller than the MTU but greater than the request length threshold \(L_{\text{req}}\in\mathbb{Z}^{+}\).

Fig. 2: Time and length information of each QUIC packet forms the input to estimate HTTP requests and responses.

Fig. 3: 1-RTT handshake and HTTP transmission.
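The thresholding and consolidation rules above can be summarized in a short sketch (the threshold values follow the text; function names are ours, and the thresholds are static here although the full system adjusts them dynamically):

```python
L_RESP = 35   # initial response length threshold (bytes), dynamic in practice
RTT = 0.05    # estimated round-trip time (s), network dependent

def is_response_data(length: int) -> bool:
    """Packets shorter than L_resp (e.g. ACKs) carry no HTTP response data."""
    return length >= L_RESP

def consolidate(packets, gap=RTT):
    """Group server->client packets into response objects: two consecutive
    data packets belong to the same HTTP response iff their inter-arrival
    time is below the threshold (one RTT by default)."""
    responses, current = [], []
    for t, length in packets:            # (arrival time, length), time-ordered
        if not is_response_data(length):
            continue                     # filter out non-data packets
        if current and t - current[-1][0] > gap:
            responses.append(current)    # gap too large: start a new response
            current = []
        current.append((t, length))
    if current:
        responses.append(current)
    return responses

def is_associated(req_start: float, resp_start: float, rtt=RTT) -> bool:
    """Necessary condition for an HTTP request-response association."""
    return rtt < (resp_start - req_start) < 20 * rtt
```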
So far, we have introduced four types of inputs; the rationale for choosing these features is summarized in Table I. ### _Output metrics_ Given the input features, we aim to design a rule-based algorithm to estimate the **object-level** HTTP request-response information, and the **connection-level** QUIC link information, by passively monitoring the encrypted packet sequences. #### II-B1 HTTP object level output An HTTP pair consists of an HTTP request and its corresponding HTTP response. For the request part, our designed algorithm will output the start time, size, and the number of request packets in the estimated HTTP request. Similarly, the response output metrics include the start time, end time, size, and the number of response packets in the HTTP response. The reason for excluding the request end time from the output metrics is the fact that most HTTP requests consist of a single packet; thus, the end time of a request usually coincides with its start time. Since the QUIC protocol supports HTTP request and response multiplexing by creating multiple streams in the same connection, we will see in Fig. 4 that before the transmission of an existing HTTP response is finished, another request can be sent from the client to the server using a new stream ID. In the case of multiplexing, the sequences of request or response packets belonging to different HTTP objects may be interleaved with each other; thus, it might be impossible to separate the packets for each individual HTTP object based on their length and timing information only. In this case, we will group the interleaved HTTP request-response objects together to form a super HTTP object. The meaning of the output for an estimated super object then changes slightly: the request (or response) start time is the time stamp of the first request (or response) packet in the super object, the response end time is the time stamp of the last response packet, the request (or response) size is the total size of all request (or response) packets in the super object, and the request (or response) packet number is the total number of all request (or response) packets. Moreover, the number of HTTP pairs denotes the number of individual HTTP request-response pairs grouped in the super object, and only in the case of multiplexing is this value greater than one. When HTTP multiplexing happens, the response estimation can be very confusing, but the request detection is still reliable; thus, the number of detected requests is counted to represent the number of individual HTTP pairs in the super object. Lastly, the length of the ACK packets contains meaningful information for packet filtering. If a packet loss is detected at the client side and the lost packet contains key information that requires re-transmission, then the client will inform the server of the loss by sending an ACK packet. 
If the number of lost packets keeps increasing, the ACK frame needs to contain more information, which yields an increased packet length. Therefore, by monitoring the ACK packet length in a real-time manner, the passive observer can accurately determine the threshold for the HTTP data packets, and filter out the non-data frames properly. Usually, we keep the length information of the last ten ACK packets for both directions.

TABLE I: Input features

| **Input features** | **Purposes** |
| --- | --- |
| Packet direction | Separate request and response packets. |
| QUIC header type | Check whether the handshake is finished. |
| QUIC packet length | Check whether a QUIC packet contains HTTP request or response data. |
| Packet arrival time | Check whether two packets belong to the same object, whether an HTTP request is associated with a response, and whether a QUIC connection is still active. |
| Packet position and order | Build HTTP request-response pairs from a sequence of individual QUIC packets. |

TABLE II: Output metrics

| **Output type** | **Estimated output** |
| --- | --- |
| Object-level | Request start time; request size; number of request packets; response start time; response end time; response size; number of response packets; number of individual HTTP request-response pairs; max length of last ten ACK packets |
| Connection-level | Connection start time; connection duration; total request size; total response size; total number of request packets; total number of response packets; number of individual HTTP request-response pairs; number of estimated HTTP objects; level of multiplexing |

#### II-B2 QUIC connection level output Once a QUIC connection has been quiet for more than \(20\) RTTs, we consider it as inactive, and the overall HTTP transmission will be summarized into a QUIC-connection output; after that, all memory for this connection will be cleared. The connection-level output is shown in Table II, where the connection start time is the timestamp of the first packet from client to server, the connection duration is the time difference between the first packet and the last over the QUIC connection, the total request (or response) size is the sum of the lengths of all HTTP request (or response) data packets, the total number of request (or response) packets counts all HTTP request (or response) packets in the QUIC connection, the number of individual HTTP pairs equals the number of detected requests, and the number of estimated HTTP objects equals the number of object-level outputs estimated within this QUIC connection. For example, in Fig. 4, the number of individual HTTP pairs is three, while the number of estimated HTTP objects is only one, due to multiplexing, and in Fig. 3, the number of individual HTTP pairs and the number of estimated objects both equal two. In the end, we define the level of multiplexing as the ratio of the number of individual HTTP pairs to the number of estimated HTTP objects. 
The value of the multiplexing level ranges in \([1,N_{\text{req}}]\), where \(N_{\text{req}}\in\mathbb{Z}^{+}\) denotes the maximum number of individual HTTP pairs that our algorithm can tolerate in each super object. When multiplexing happens, the level of multiplexing is greater than one; otherwise, its value equals one. Here, the level of multiplexing helps a network operator to classify the traffic category of a QUIC connection. For example, a web-browsing link usually has a higher multiplexing level than a video link. Key information of the object-level and connection-level outputs is summarized in Table II. ## III Algorithm and Approaches In this section, we aim to design a rule-based algorithm so that, given the input features in Table I, we can estimate the output metrics in Table II. To this end, we design a state machine with three modules, where the request estimation module infers the client requests, the response estimation module consolidates the QUIC packets into the server's responses, and a match module pairs the estimated requests with their corresponding responses, under the network-dependent constraints of inter-arrival time and RTT. Furthermore, to extend the application range and increase the robustness of our algorithm, three supporting modules are introduced to automatically adjust the threshold for the data packet size, detect the MTU size, and estimate the value of the RTT, respectively, so that the proposed algorithm supports an accurate estimation in various network systems under different communication conditions. ### _Request estimation_ In the QUIC protocol, a special feature, called 0-RTT connection resumption, is shown in Fig. 5. Assume a client and a server have previously established a QUIC connection; then, when a new connection is needed, the client can send application data with the first Client Hello packet and reuse the cached cryptographic key from previous communications. Notably, this allows the client to compute the private encryption keys required to protect application data before talking to the server, thereby reducing the latency incurred in establishing a new connection. Thus, in the case of 0-RTT resumption, the HTTP request and response can happen before the handshake is finished, and a special detection mechanism is needed to infer the 0-RTT request packets. Given a QUIC packet with a long header, the third and fourth significant bits in the header indicate the type of this packet. If the type field shows 0x01, then the packet is a 0-RTT packet [14]. Next, to determine whether a 0-RTT packet contains HTTP request data, we propose three criteria: first, a 0-RTT request usually has a single packet; second, the length of a 0-RTT request packet often ranges within \([100,1000]\); third, there is only one QUIC packet in the UDP payload. If all the above requirements are satisfied, we can say that this 0-RTT packet is a 0-RTT request; otherwise, this 0-RTT packet is more likely to contain control information rather than HTTP request data. Again, these criteria are empirical and may not always hold.

Fig. 4: HTTP request-response multiplexing.

Fig. 5: 0-RTT connection resumption.

However, according to our observation and experience, the criteria lead to a high accuracy in estimating 0-RTT requests. Once the handshake is finished, QUIC packets will start to use short headers, which do not have a packet-type field anymore. However, similar to 0-RTT requests, a request after the handshake requires only one QUIC packet in the UDP payload. 
Meanwhile, the length of a request packet ranges between \(L_{\text{req}}\) and \(L_{\text{MTU}}\), where \(L_{\text{req}}\) is the length threshold for request packets, and \(L_{\text{MTU}}\) is the size of the MTU. In general, the MTU value \(L_{\text{MTU}}\) is network- and device-dependent, with a value range of \([1200,1360]\). Meanwhile, the value of \(L_{\text{req}}\) is dynamic over time. When we are inferring the first packet of the first request, the request size threshold is set as \(L_{\text{req}}=100\) bytes. Once the first request packet has been detected, the value of \(L_{\text{req}}\) is adjusted to \(50\) bytes. Later, as the HTTP request transmission continues, \(L_{\text{req}}\) is dynamically adjusted based on the real-time traffic conditions. Details for adjusting \(L_{\text{req}}\) are given in Section III-D1.

Given that an HTTP request consists of either a single packet or multiple packets, in order to consolidate a sequence of request packets into a group of request objects, we design a request estimation algorithm with the state machine shown in Fig. 6. When the client sends the first Initial Hello packet to the server, a state machine is initialized for the QUIC connection with an initial state \(-1\). During the handshake stage, if a 0-RTT request is detected, the algorithm goes to state \(-0.5\). Whenever the algorithm reaches state \(-0.5\), a 0-RTT request is output, and the estimated 0-RTT request is given to the match module. On the other hand, if no 0-RTT request is found, the state stays at \(-1\) until the handshake is finished and a new request packet is detected. If the new request packet has a length greater than \(L_{\text{MTU}}-8\), then we consider it a large packet, and the algorithm moves to state \(0.5\). Otherwise, if the packet's length ranges in \([L_{\text{req}},L_{\text{MTU}}-8]\), we consider it a small request packet, and the algorithm comes to state \(0\). State \(0.5\) is a waiting state, where we need the information of the next request packet to determine whether we are estimating a single-packet or multi-packet request. Therefore, at state \(0.5\), if we receive a large packet within one RTT, the current request is a multi-packet request, and more packets belonging to the same request might arrive soon; thus, the algorithm moves to the transmission state \(1\). Otherwise, if we receive another small packet at state \(0.5\), the estimated request consists of two packets, and the algorithm goes to state \(0\) and outputs the estimated request. Meanwhile, if no new packet arrives within one RTT, it is a single-packet request; thus, the algorithm moves to state \(0\) and outputs the single-packet request. State \(0\) is an idle state, meaning no on-going transmission at this stage. At state \(0\), if a large request packet comes, the algorithm moves to state \(0.5\) to wait for more packets. Otherwise, the algorithm outputs a single-packet request and stays at state \(0\). Lastly, state \(1\) is a transmission state, meaning a multi-packet request is transmitting a sequence of MTU-sized packets. At state \(1\), if the arriving packet is MTU-sized, the transmission is on-going and the algorithm stays at state \(1\). If the new packet has a length smaller than the MTU, then the transmission of the current request is done, so the algorithm moves to state \(0\) and outputs the estimated multi-packet request.
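A compact Python sketch of this request state machine follows. It is a simplified rendering of Fig. 6 under our own naming: the one-RTT timers are reduced to an explicit `on_timeout()` call, and the 0-RTT path (state \(-0.5\)) is omitted.

```python
class RequestEstimator:
    """Simplified sketch of the request state machine in Fig. 6
    (states: -1 initial, 0 idle, 0.5 waiting, 1 transmission)."""

    def __init__(self, l_req: int = 100, l_mtu: int = 1200):
        self.state = -1
        self.l_req, self.l_mtu = l_req, l_mtu
        self.current = []      # packet lengths of the request being assembled
        self.estimated = []    # completed requests handed to the match module

    def _emit(self):
        self.estimated.append(self.current)
        self.current = []

    def on_request_packet(self, length: int, handshake_done: bool):
        if not handshake_done:
            return  # the 0-RTT path (state -0.5) is handled separately
        if length < self.l_req:
            return  # non-data packet (e.g. ACK or control frame), ignored
        large = length > self.l_mtu - 8
        self.current.append(length)
        if self.state in (-1, 0):
            if large:
                self.state = 0.5        # wait to see whether more packets follow
            else:
                self._emit()            # single-packet request
                self.state = 0
        elif self.state == 0.5:
            if large:
                self.state = 1          # multi-packet transmission begins
            else:
                self._emit()            # two-packet request
                self.state = 0
        elif self.state == 1:
            if not large:               # last packet of the multi-packet request
                self._emit()
                self.state = 0

    def on_timeout(self):
        """No packet within one RTT at state 0.5: single-packet request."""
        if self.state == 0.5:
            self._emit()
            self.state = 0
```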
In summary, the request estimation module monitors all QUIC packets from client to server, processes the header, time, length, and order information of each packet, and outputs the estimated requests to the match module.

Fig. 6: State machine for request estimation, where -1 is the initial state, 0 is the idle state, 0.5 is the waiting state, and 1 is the transmission state. Once the algorithm comes to state -0.5 or state 0, a request is estimated and will be given to the match module.

### _Response estimation_

Similar to the request packet, an HTTP response packet usually has only one QUIC packet in the UDP payload, and the response packet length ranges between \([L_{\text{resp}},L_{\text{MTU}}]\), where \(L_{\text{resp}}\) is a dynamic threshold with an initial value of \(L_{\text{resp}}=35\); the updating rule is given in Section III-D1. To consolidate individual packets into HTTP responses, the response estimation algorithm is designed as the state machine shown in Fig. 7.

Fig. 7: State machine for response estimation, where -1 is the initial state, 0 is the idle state, 0.5 is the waiting-to-start state, 1 is the transmission state, and 1.5 is the waiting-to-end state. Once the algorithm comes to state 0, a response is estimated and will be given to the match module.

Initially, when no request is detected, the response module stays at state \(-1\). When at least one request and a new response packet are detected, the algorithm moves to state \(1\) if the response packet size is larger than \(L_{\text{MTU}}-8\), or to state \(0.5\) if the packet length is between \([L_{\text{resp}},L_{\text{MTU}}-8]\). State \(0.5\) is a wait-to-start state: after receiving a small packet, we need to see the next packet to determine whether it is a single-packet or multi-packet response. Therefore, at state \(0.5\), if a large packet arrives within one RTT, the algorithm moves to state \(1\); if a small response packet arrives within one RTT, the algorithm stays at state \(0.5\) and groups the received small packets into one object. Due to different implementations, some servers may start a multi-packet response with more than one non-MTU packet. If no packet arrives during one RTT, the algorithm moves to state \(0\) and outputs an estimated response. State \(0\) is an idle state, meaning no response is being transmitted. At state \(0\), if a large response packet comes, the algorithm moves to state \(1\); otherwise, the algorithm comes to state \(0.5\). State \(1\) is a transmission state, meaning a multi-packet response is transmitting a sequence of MTU-sized packets. At state \(1\), if the arriving packet is MTU-sized, the response transmission continues and the module stays at state \(1\); otherwise, the transmission finishes and the algorithm moves to state \(1.5\). Lastly, state \(1.5\) is a wait-to-end state. Due to re-transmission, an HTTP response can end with multiple small packets. Therefore, after observing a small packet at state \(1\), the middlebox waits at state \(1.5\) for one RTT, during which, if more small packets arrive, the algorithm consolidates these small packets with the previous MTU sequence to form one response, and stays at state \(1.5\) until the one-RTT timeout. If no packet arrives within one RTT, the response transmission is finished, and the module moves to state \(0\) to output the estimated response. However, if a large packet arrives within one RTT, then the previous response has finished, and a large packet belonging to a new response has been received. In this case, the algorithm first moves to state \(0\) and outputs a response consisting of all packets except the newly-arrived large one; the new large packet then starts another response estimation and moves the algorithm to state \(1\). Thus, the response estimation module monitors all QUIC packets from server to client, processes the header, time, length, and order information of each encrypted packet, and outputs the estimated responses to the match module.
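In the same illustrative style as the request sketch, a minimal Python rendering of the response state machine of Fig. 7 follows; timers are again reduced to an explicit `on_timeout()` call, and all names are our own.

```python
class ResponseEstimator:
    """Simplified sketch of the response state machine in Fig. 7 (states: -1
    initial, 0 idle, 0.5 waiting-to-start, 1 transmission, 1.5 waiting-to-end)."""

    def __init__(self, l_resp: int = 35, l_mtu: int = 1200):
        self.state = -1
        self.l_resp, self.l_mtu = l_resp, l_mtu
        self.current, self.estimated = [], []

    def _emit(self):
        self.estimated.append(self.current)
        self.current = []

    def on_response_packet(self, length: int, request_seen: bool):
        if self.state == -1:
            if not request_seen:
                return              # responses before any request are ignored
            self.state = 0
        if length < self.l_resp:
            return                  # non-data packet, ignored
        large = length > self.l_mtu - 8
        if self.state == 0:
            self.current = [length]
            self.state = 1 if large else 0.5
        elif self.state == 0.5:     # may start with several small packets
            self.current.append(length)
            if large:
                self.state = 1
        elif self.state == 1:
            self.current.append(length)
            if not large:           # possible trailing small packets follow
                self.state = 1.5
        elif self.state == 1.5:
            if large:               # previous response finished; a new one begins
                self._emit()
                self.current = [length]
                self.state = 1
            else:
                self.current.append(length)

    def on_timeout(self):
        """No packet within one RTT at state 0.5 or 1.5: response finished."""
        if self.state in (0.5, 1.5):
            self._emit()
            self.state = 0
```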
### _Request-response matching_

Given the estimated requests and responses, the final step is to match each request with its corresponding response to form an HTTP pair, a.k.a. an HTTP object. The state machine of the matching module is given in Fig. 8; it takes the estimated HTTP requests and responses as input and outputs the object-level HTTP information.

Fig. 8: State machine for the match module, where -1 is the initial state, 0 is the idle state, 1 is the waiting-for-response state, and 2 is the waiting-to-output state. Once the algorithm moves over a double-line arrow, an HTTP request-response pair is estimated.

Initially, before receiving any request, the match module ignores all response inputs and stays at state \(-1\). After the first request is received, the algorithm comes to state \(1\). State \(1\) indicates that the number of requests is greater than the number of responses: some HTTP requests have been sent out, but not all of their responses have been received, so the algorithm waits at state \(1\) for more responses to complete the request-response match. Once the match module receives enough responses so that the number of requests becomes less than or equal to the number of responses, it moves to state \(2\). However, at state \(1\), if no request or response is received within \(20\) RTTs, the match module times out, and the algorithm moves to state \(0\) to output an HTTP object consisting of all received requests and responses. State \(2\) means the match module has received at least as many responses as requests, which is enough for a one-to-one request-response match. However, at this moment, it is uncertain whether more response objects will arrive, due to re-transmission and mis-estimation. Thus, the module waits at state \(2\) for one RTT. Any new response that arrives within one RTT is added into the current HTTP object. If no new response arrives within one RTT, the module times out and moves to state \(0\) to output the estimated HTTP object. If any new request is received at state \(2\), it is held until the timeout, to form the next HTTP object. Lastly, state \(0\) is the idle state, in which all responses are discarded; if a request is received, the match module moves to state \(1\). The match module takes all estimated requests and responses as input and generates the HTTP request-response objects as output, while the connection-level output can be calculated by combining all estimated object-level information.
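A minimal Python sketch of this matching logic, in the same illustrative style as the previous state machines (the double-line-arrow transitions of Fig. 8 correspond to `_emit()` here; all names are ours):

```python
class Matcher:
    """Simplified sketch of the match module in Fig. 8 (states: -1 initial,
    0 idle, 1 waiting-for-response, 2 waiting-to-output). The 20-RTT and
    one-RTT timers are reduced to an explicit on_timeout() call."""

    def __init__(self):
        self.state = -1
        self.requests, self.responses, self.held = [], [], []
        self.objects = []   # estimated object-level HTTP outputs

    def _emit(self):
        self.objects.append({"requests": self.requests,
                             "responses": self.responses})
        self.requests, self.responses = [], []
        if self.held:       # requests held at state 2 seed the next object
            self.requests, self.held = self.held, []
            self.state = 1
        else:
            self.state = 0

    def on_request(self, req):
        if self.state == 2:
            self.held.append(req)   # held until timeout, for the next object
        else:
            self.requests.append(req)
            self.state = 1

    def on_response(self, resp):
        if self.state in (-1, 0):
            return                  # discarded: no open HTTP object
        self.responses.append(resp)
        if len(self.responses) >= len(self.requests):
            self.state = 2          # enough responses for a one-to-one match

    def on_timeout(self):
        """20-RTT timeout at state 1, or one-RTT timeout at state 2."""
        if self.state in (1, 2):
            self._emit()
```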
### _Supporting modules_

To enable the proposed algorithm to work in different networks under various communication conditions, three supporting modules are introduced to adjust key parameters.

#### III-D1 Dynamic threshold of data packet length

For the request data packet, the initial length threshold is \(L_{\text{req}}=50\), i.e., a QUIC packet from client to server with a length smaller than \(50\) bytes is considered a non-data packet. Generally, it is easy, using \(L_{\text{req}}\), to detect non-data packets with a fixed or typical length, such as control frames. However, for ACK packets with varying sizes, a dynamic threshold is needed. If the downstream from server to client experiences packet loss, the client informs the server of the missing packets in the ACK packet. If the number of lost response packets keeps increasing, the ACK packet size from client to server becomes larger. Once the ACK length reaches \(50\) bytes, the initial threshold \(L_{\text{req}}\) can no longer work. Since the ACK packet size increases gradually, we can track the length change and adjust the threshold accordingly. For example, over a QUIC connection, once ten non-data packets have been detected, the middlebox can take the maximum length of the last ten small-sized packets as \(l_{ack}^{max}=\max\{l_{ack}^{1},\cdots,l_{ack}^{10}\}\), and adjust the request threshold by \(L_{\text{req}}=l_{ack}^{max}+10\). In the following communication, the maximum length of the latest ten non-data packets \(l_{ack}^{max}\) is updated for every detected non-data packet, and the request threshold is updated accordingly. A similar rule applies to the response packet length, where the initial threshold is \(L_{\text{resp}}=35\); after ten non-data response packets are detected, the response threshold is updated as the maximum length of the latest ten non-data packets, plus ten bytes. Based on our analysis, the proposed scheme shows almost \(100\%\) accuracy in separating the ACK packets from the data packets for both QUIC request and response estimations.

#### III-D2 Auto-detection for MTU size

The MTU size of both QUIC and UDP packets depends on the network setting, server implementation, and client device type. Therefore, the MTU can take different values for different QUIC connections, or over the same connection but in different communication directions. The auto-detection algorithm for MTU size is designed as follows: the initial MTU value for QUIC packets is set to \(L_{\text{MTU}}=1200\). Then, for each packet, the MTU value is updated by taking the maximum of the new packet's length and the current MTU value. In most cases, the MTU values for both directions over a QUIC connection can be accurately detected within the handshake stage.

#### III-D3 RTT estimation

As shown in both Fig. 3 and Fig. 5, the QUIC handshake stage requires the client to start with a Client Hello packet, after which the server replies with a Server Hello. This round-trip pattern during the handshake stage provides an opportunity for RTT estimation. Especially when the QUIC connection is established without previous memory, the handshake stage usually involves more than one round trip, and the value of RTT can then be calculated by averaging the time spent over these round trips during the handshake.
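The three supporting rules lend themselves to a compact sketch; the class below restates them in Python with illustrative names and the initial values given above, and is not the paper's implementation.

```python
from collections import deque

class SupportingModules:
    """Illustrative sketch of the dynamic length threshold, MTU
    auto-detection, and handshake-based RTT estimation."""

    def __init__(self, initial_threshold: int):
        self.threshold = initial_threshold    # 50 for requests, 35 for responses
        self.l_mtu = 1200                     # initial MTU value for QUIC packets
        self.non_data_lengths = deque(maxlen=10)

    def on_non_data_packet(self, length: int):
        """Keep the threshold ten bytes above the maximum length of the
        latest ten non-data (e.g. ACK) packets."""
        self.non_data_lengths.append(length)
        if len(self.non_data_lengths) == 10:
            self.threshold = max(self.non_data_lengths) + 10

    def on_any_packet(self, length: int):
        """MTU auto-detection: keep the largest packet length seen so far."""
        self.l_mtu = max(self.l_mtu, length)

    @staticmethod
    def estimate_rtt(handshake_round_trips: list) -> float:
        """Average the round-trip delays observed during the handshake."""
        return sum(handshake_round_trips) / len(handshake_round_trips)
```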
## IV Performance Evaluations

In this section, we evaluate the performance of the proposed algorithm, using QUIC traces collected from various network environments. In particular, we applied Chrome and Firefox as client browsers on both Windows and Linux operating systems, over the HughesNet satellite system and the Comcast terrestrial system, to collect QUIC traces for video traffic, web browsing, user login authentication, file upload, and download traffic. In the small-scale collection, we used Wireshark to manually collect QUIC traffic, and decrypted packets by setting the SSLKEYLOGFILE environment variable. For the large-scale collection, we applied Puppeteer as a high-level API to control Chrome and play Youtube videos following given playlists, and used tcpdump to collect packet-level QUIC traces. The large-scale dataset is limited to web-browsing and video traffic from Youtube, over the HughesNet satellite system, using Chrome as the browser on a client-side Linux operating system. We ran the large-scale data collection continuously for 11 days, from Feb 9 to Feb 20, 2023, resulting in over \(1,000\) video plays with over \(11,000\) TCP connections and \(18,000\) QUIC connections between our client browser and over \(400\) server IP addresses.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & **Dataset** & **Match accuracy** & **Request start time error** & **Request size accuracy** & **Response start time error** & **Response end time error** & **Response size accuracy** \\ \hline \multirow{4}{*}{**Youtube**} & Comcast, Chrome, small-scale & \(96\%\) & 0 & \(99\%\) & \(\leq 20\%\) RTT & \(50\%\) RTT & \(97\%\) \\ & HughesNet, Chrome, small-scale & \(95\%\) & \(\leq 1\%\) RTT & \(98\%\) & \(\leq 15\%\) RTT & \(\leq 10\%\) RTT & \(98\%\) \\ & HughesNet, Firefox, small-scale & \(92\%\) & \(\leq 10\%\) RTT & \(95\%\) & \(\leq 15\%\) RTT & \(\leq 10\%\) RTT & \(99\%\) \\ & HughesNet, Chrome, large-scale & \(94\%\) & \(\leq 15\%\) RTT & \(96\%\) & \(\leq 25\%\) RTT & \(\leq 20\%\) RTT & \(97\%\) \\ \hline **Google** & Comcast, Chrome, small-scale & \(93\%\) & \(\leq\) one RTT & \(97\%\) & \(\leq 50\%\) RTT & \(\leq\) one RTT & \(95\%\) \\ **drive** & HughesNet, Chrome, small-scale & \(96\%\) & \(\leq 10\%\) RTT & \(91\%\) & \(\leq 10\%\) RTT & \(\leq 15\%\) RTT & \(99\%\) \\ **login** & HughesNet, Firefox, small-scale & \(99\%\) & \(\leq 10\%\) RTT & \(99\%\) & \(\leq 10\%\) RTT & \(\leq 15\%\) RTT & \(91\%\) \\ \hline **Google** & Comcast, Chrome, small-scale & \(87\%\) & \(\leq 1\%\) RTT & \(85\%\) & \(\leq 5\%\) RTT & \(\leq 5\%\) RTT & \(94\%\) \\ **drive** & HughesNet, Chrome, small-scale & \(88\%\) & \(\leq 50\%\) RTT & \(89\%\) & \(\leq 50\%\) RTT & \(\leq 50\%\) RTT & \(85\%\) \\ **download** & HughesNet, Firefox, small-scale & \(85\%\) & 0 & \(99\%\) & \(\leq 10\%\) RTT & \(\leq 5\%\) RTT & \(99\%\) \\ \hline **Google** & Comcast, Chrome, small-scale & \(93\%\) & 10 RTTs & \(78\%\) & 3 RTTs & one RTT & \(97\%\) \\ **drive** & HughesNet, Chrome, small-scale & \(96\%\) & \(\leq 50\%\) RTT & \(77\%\) & \(\leq 20\%\) RTT & \(\leq 30\%\) RTT & \(99\%\) \\ **upload** & HughesNet, Firefox, small-scale & \(92\%\) & \(\leq 50\%\) RTT & \(75\%\) & \(\leq 10\%\) RTT & \(\leq 1\%\) RTT & \(99\%\) \\ \hline **Facebook/** & Comcast, Chrome, small-scale & \(100\%\) & 0 & \(100\%\) & \(\leq 5\%\) RTT & 0 & \(99\%\) \\ **Instagram/** & HughesNet, Chrome, small-scale & \(100\%\) & 0 & \(97\%\) & \(\leq 15\%\) RTT & \(\leq 20\%\) RTT & \(99\%\) \\ **Google** & HughesNet, Firefox, small-scale & \(97\%\) & 0 & \(99\%\) & \(\leq 20\%\) RTT & \(\leq 10\%\) RTT & \(94\%\) \\ \hline \end{tabular} \end{table} TABLE III: Performance summary

Table III shows the evaluation results over the small-scale Comcast dataset, the small-scale HughesNet datasets over Chrome and Firefox, and the large-scale HughesNet dataset for Youtube traffic, respectively. First, our algorithm yields a high matching accuracy of over \(85\%\) for all types of web traffic, in all environment settings. In the request estimation, other than the upload traffic, the proposed method shows accurate estimation results, where the request start time error is smaller than one RTT, and the request size accuracy is higher than \(85\%\). Different from other traffic types with small-sized requests and large-sized responses, file upload shows a reversed pattern, where the traffic from client to server is much larger than the data from server to client. This uncommon pattern results in a lower accuracy of \(75\%\) in the request size estimation, and up to \(10\) RTTs of error in the request start time estimation. In our future work, we will further refine the algorithm design, by adding more waiting states in the request state machine, to improve the request estimation for bulk upload traffic. Similarly, the response estimation shows satisfactory results, with time errors smaller than one RTT and size accuracy over \(85\%\), for all web services under all settings, except for bulk upload. Note that, compared with the terrestrial Comcast network, the satellite system has a much larger RTT due to the long propagation distance between the ground terminal/gateway and the geostationary satellite. Therefore, the evaluation results prove that our proposed algorithm can work in various networks while guaranteeing accurate estimation results. Furthermore, due to limited space, Table III only shows six key performance metrics from Table II, for online estimation only. Note that the offline algorithm yields a similar estimation accuracy, and the other performance metrics also show satisfactory results.

## V Limitation and Future Work

Given the empirical nature of the proposed algorithm, one limitation of our work is the performance degradation in the face of excess packet loss. Massive packet loss yields a large amount of data re-transmission, so that the typical transmission pattern cannot be recognized; also, large ACK packets will be confused with data packets, even with a dynamic length threshold. Meanwhile, the proposed algorithm only provides a coarse-grained estimation for interleaved HTTP request-response objects, since, as an ISP with limited information visible in the encrypted QUIC packets, it is impossible to distinguish individual request-response pairs on an interleaved timeline with only length and order information. Thus, grouping the multiplexed objects into a super HTTP object is the best estimation we can make. Furthermore, if client or server implementations apply padding as a countermeasure against traffic analysis, then all QUIC packets will have MTU length. In this case, our proposed algorithm might fail, given only time and order information. In future work, we will apply the estimated object-level and connection-level HTTP information to network operation and management, including traffic classification and QoE estimation. For example, different web traffic types have distinct HTTP patterns: a video connection requests content data periodically, resulting in a clear and separable request-response pattern as shown in Fig. 3, while a web-browsing connection requests different types of content at the same time, inducing interleaved requests and responses. Such pattern differences enable ISPs to classify each QUIC connection into different application categories. Moreover, the application-layer information can be applied to infer the user's QoE over the encrypted QUIC connection. For example, the download rate per object can be calculated as the response size over the response duration, and the time-to-first-byte can be evaluated via the estimated request start time and the response start time.
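For instance, given the hypothetical object-level record fields used in the earlier sketches, these two QoE indicators reduce to:

```python
def download_rate(obj: dict) -> float:
    """Per-object download rate: response size over response duration."""
    duration = obj["response_end"] - obj["response_start"]
    return obj["response_size"] / duration if duration > 0 else 0.0

def time_to_first_byte(obj: dict) -> float:
    """TTFB: estimated response start time minus request start time."""
    return obj["response_start"] - obj["request_start"]
```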
## VI Conclusion

In this work, we have analyzed the characteristics of QUIC traffic by passively monitoring encrypted QUIC packets to infer application-layer attributes. To this end, we have studied the rationale of the QUIC protocol design, and summarized the key patterns of HTTP request and response communications over the QUIC protocol. By carefully choosing the time and size features that remain visible in encrypted QUIC packets, we have designed a novel rule-based algorithm to estimate the attributes of HTTP requests and responses. The performance evaluation showed satisfactory results in different network systems for various web applications.
2302.00674
Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data
Few-shot learning is valuable in many real-world applications, but learning a generalizable model without overfitting to the few labeled datapoints is challenging. In this work, we focus on Few-shot Learning with Auxiliary Data (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization. Previous works have proposed automated methods for mixing auxiliary and target data, but these methods typically scale linearly (or worse) with the number of auxiliary datasets, limiting their practicality. In this work we relate FLAD to the explore-exploit dilemma that is central to the multi-armed bandit setting and derive algorithms whose computational complexity is independent of the number of auxiliary datasets, allowing us to scale to 100x more auxiliary datasets than prior methods. We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and compare them with prior FLAD methods that either explore or exploit, finding that the combination of exploration and exploitation is crucial. Through extensive experimentation we find that our methods outperform all pre-existing FLAD methods by 4% and lead to the first 3 billion parameter language models that outperform the 175 billion parameter GPT-3. Overall, our work suggests that the discovery of better, more efficient mixing strategies for FLAD may provide a viable path towards substantially improving generalization in few-shot learning.
Alon Albalak, Colin Raffel, William Yang Wang
2023-02-01T18:59:36Z
http://arxiv.org/abs/2302.00674v4
# Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data ###### Abstract Few-shot learning is valuable in many real-world applications, but learning a generalizable model without overfitting to the few labeled datapoints is challenging. In this work, we focus on **F**ew-shot **L**earning with **A**uxiliary **D**ata (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization. Previous works have proposed automated methods for mixing auxiliary and target data, but these methods typically scale linearly (or worse) with the number of auxiliary datasets, limiting their practicality. In this work we relate FLAD to the explore-exploit dilemma that is central to the multi-armed bandit setting and derive algorithms whose computational complexity is independent of the number of auxiliary datasets, allowing us to scale to \(100\times\) more auxiliary datasets than prior methods. We propose two algorithms - EXP3-FLAD and UCB1-FLAD - and compare them with prior FLAD methods that either explore or exploit, finding that the combination of exploration _and_ exploitation is crucial. Through extensive experimentation we find that our methods outperform all pre-existing FLAD methods by 4% and lead to the first 3 billion parameter language models that outperform the 175 billion parameter GPT-3. Overall, our work suggests that the discovery of better, more efficient mixing strategies for FLAD may provide a viable path towards substantially improving generalization in few-shot learning. All code is available at github.com/alon-albalak/FLAD. ## 1 Introduction Few-shot learning is an attractive learning setting for many reasons: it promises efficiency in cost and time, and in some scenarios data is simply not available due to privacy concerns or the nature of the problem. However, few-shot learning is also a challenging setting that requires a delicate balance between learning the structure of the feature and label spaces while preventing overfitting to the limited training samples [1; 2; 3]. One approach to improving the generalizability of models in the few-shot setting is **F**ew-shot **L**earning with **A**uxiliary **D**ata (FLAD), where additional auxiliary datasets are used to improve generalization on the target few-shot task [4; 5; 6; 7]. However, FLAD methods introduce their own challenges, including increased algorithmic and computational complexity. Specifically, incorporating auxiliary data during training introduces a large space of design choices (e.g. how and when to train on auxiliary data). Manually designing the curriculum for training on large quantities of auxiliary data is not feasible due to the combinatorially large search space, and hand-picking which auxiliary data to use based on heuristics (e.g. from the same domain or task as the target few-shot dataset) can lead to sub-optimal results [8]. Delegating such choices to an algorithm can lead to better solutions, as demonstrated in the transfer learning [8; 9; 10], meta-learning [11; 12], multi-task learning [13; 14; 15; 16], and auxiliary learning literature [4; 17]. However, prior auxiliary learning algorithms often assume that only 1-3 related auxiliary datasets are available and design algorithms whose computational complexity grows linearly (or worse) with the number of auxiliary datasets [18; 8], motivating the search for more efficient methods as the number of auxiliary datasets grows. 
To overcome the challenges of prior works, we desire a FLAD algorithm that **(1)** makes no assumptions on available auxiliary data a-priori (in-domain, on-task, quality, quantity, etc.), **(2)** scales well with the number of auxiliary datasets, and **(3)** adds minimal memory and computational overhead. We design algorithms that satisfy our desiderata by drawing inspiration from the central problem in multi-armed bandit (MAB) settings: the exploration-exploitation trade-off [19; 20]. We relate the set of auxiliary datasets to the arms of a MAB and tailor the classic EXP3 [21] and UCB1 [22] algorithms to fit the FLAD framework by designing three efficient gradient-based reward signals. The combination of our MAB-based algorithms and efficient gradient-based rewards allows us to scale to \(100\times\) more auxiliary datasets than previous methods. Figure 1 provides a basic illustration of how we formulate FLAD as a MAB problem. To empirically validate our approaches, we focus on few-shot training of language models and utilize P3 [23], a readily available resource with hundreds of auxiliary language datasets. We evaluate our methods on the same held-out tasks as the T0 language model [16] and show that, when using the same collection of auxiliary datasets, our algorithms outperform a directly fine-tuned T0 by 5.6% (EXP3-FLAD) and 5.7% (UCB1-FLAD) absolute. Furthermore, incorporating all available datasets in P3 (i.e. not just those used to train T0) increases the improvement to 9.1% and 9.2%. Finally, we compare models trained with our methods against state-of-the-art few-shot methods, finding that our methods improve performance by >3%, even though one model requires a large collection of unlabeled target dataset samples. Furthermore, to the best of our knowledge, our methods lead to the first 3 billion parameter model that improves over 175B GPT-3 using few-shot in-context learning. In summary, our main contributions are: * We connect FLAD to the MAB setting and focus on the exploration-exploitation trade-off by designing two algorithms, EXP3-FLAD and UCB1-FLAD along with three reward functions that are both simple and efficient (in space and computational complexity). * We empirically validate that our methods improve few-shot performance of pretrained language models and show that strategies that employ only exploration _or_ exploitation lead to sub-optimal performance. * We perform case studies to better understand the dynamics of our reward functions and their interaction with large language model training. Figure 1: **Overview of few-shot learning with auxiliary data (FLAD) as a multi-armed bandit problem.** On the left is the learner which defines a policy \(\pi\) that determines which auxiliary dataset to sample from. On the right is the environment that includes the set of auxiliary datasets \(\mathcal{D}_{\mathcal{A}}\), target dataset \(\mathcal{D}_{\mathcal{T}}\), and the model \(f_{\theta}\). At each turn \(t\), the following five steps take place, further described in Section 3.1: **1.** The learner selects an auxiliary dataset \(\mathcal{D}_{a}\) according to its policy \(\pi\). **2.** The environment samples a batch \(\{\mathbf{x},\mathbf{y}\}\sim\mathcal{D}_{a}\). **3.** The model \(f_{\theta}\) calculates gradients for the sampled batch (\(\nabla_{a}\)) and the target dataset (\(\nabla_{\mathcal{T}}\)), then updates the parameters \(\theta\). **4.** A reward \(\mathcal{R}_{a,t}\) is calculated based on \(\nabla_{a}\) and \(\nabla_{\mathcal{T}}\). 
**5.** The learner updates \(\pi\) based on \(\mathcal{R}_{a,t}\).

## 2 Related work

A long history of works has found success when combining auxiliary data with target data [4; 24; 6; 25; 26; 5; 18; 7; 27; 28; 8]. Some works have explored the addition of auxiliary learning objectives to aid the learning of the target task [24; 26; 25; 5; 17]. More similar to our work are methods that perform auxiliary learning by introducing additional data sources beyond the target data [4; 6; 18; 7; 27; 28; 8]. As opposed to the few-shot setting on which this work focuses, previous works have studied auxiliary learning in settings with large quantities of target data. For example, Chen et al. [18] and Verboven et al. [7] assume access to 10,000 labeled target samples, Ivison et al. [28] and Lin et al. [27] assume access to 1,000s of unlabeled target samples, and Du et al. [6] and Albalak et al. [8] assume access to 100s of labeled target samples. Additionally, many of the previous works that study auxiliary learning have only considered settings with 1-3 auxiliary datasets [6; 18; 7; 8]. For example, Verboven et al. [7] propose a task-weighting method that requires solving a system of equations that becomes underspecified with multiple auxiliary tasks, limiting their method to only a single auxiliary task. Furthermore, Chen et al. [18] experiment with 3 auxiliary tasks because their method requires learning a target-aware classifier for each source task, so the computation scales as \(O(|\mathcal{A}||\mathcal{T}|)\), where \(|\mathcal{A}|\) is the number of auxiliary tasks and \(|\mathcal{T}|\) is the number of target tasks, making it impractical to scale to large numbers of source and target tasks. In this work, we focus on improving auxiliary learning with very few target samples (20-70 samples) by scaling the number of auxiliary datasets up by orders of magnitude compared with previous work. In order to scale up the learning process, efficiency is a central concern of this work, unlike prior works. Data selection studies a similar (but distinct) problem where the goal is to selectively utilize a subset of a single large dataset rather than selecting data from auxiliary datasets. Recent research on data selection has found that intelligent data selection can provide significant improvements to model performance [29; 30; 31; 32].

## 3 Multi-armed bandits for few-shot learning with auxiliary data

In this section, we first define the few-shot learning with auxiliary data (**FLAD**) setting. Then, we formulate FLAD as a multi-armed bandit (**MAB**) problem, shown in Figure 1. Next, we define reward functions that are efficient to compute and appropriate for FLAD. Finally, we describe our adaptations of two popular MAB algorithms: EXP3-FLAD and UCB1-FLAD.

### Setup

FLAD problem setting. Few-shot learning with auxiliary data (FLAD) fits into the following setting: assume access to a large set of auxiliary datasets \(\mathcal{D}_{\mathcal{A}}\) where, for all \(a\in\mathcal{A}\), \(\mathcal{D}_{a}\) is an individual auxiliary dataset. Given a small quantity of data belonging to a target dataset \(\mathcal{D}_{\mathcal{T}}\), the goal of FLAD is to find parameters \(\theta\) of a model \(f_{\theta}\) that achieve high performance on the distribution underlying \(\mathcal{D}_{\mathcal{T}}\) while utilizing only the available data, \(\mathcal{D}_{\mathcal{T}}\cup\mathcal{D}_{\mathcal{A}}\).
Formulating FLAD as MAB. In this work, we adopt the multi-armed bandit (MAB) setting by formulating FLAD as a Markov decision process [33] and defining a learner and environment, illustrated in Figure 1. The learner consists of a policy \(\pi\) defining a selection strategy over all \(\mathcal{D}_{a}\in\mathcal{D}_{\mathcal{A}}\). The environment consists of the target dataset \(\mathcal{D}_{\mathcal{T}}\), auxiliary datasets \(\mathcal{D}_{\mathcal{A}}\), and model \(f_{\theta}\). In this formulation the learner interacts with the environment over \(N\) rounds. At each round \(t\) the learner selects one of the environment's \(|\mathcal{A}|\) datasets \(\mathcal{D}_{a}\in\mathcal{D}_{\mathcal{A}}\). Next, the environment samples a batch \(\{\mathbf{x},\mathbf{y}\}\sim\mathcal{D}_{a}\) and calculates the gradient w.r.t. \(\theta\) using a task-appropriate loss function as \(\nabla_{a}=\nabla_{\theta}\mathcal{L}(f_{\theta},\mathbf{x},\mathbf{y})\). Then, the environment computes the target gradient \(\nabla_{\mathcal{T}}=\nabla_{\theta}\mathcal{L}(f_{\theta},\mathcal{D}_{\mathcal{T}})\), and updates model parameters w.r.t. \(\nabla_{\mathcal{T}}+\nabla_{a}\). Finally, the learner uses a gradient-based reward \(\mathcal{R}_{a,t}(\nabla_{a},\nabla_{\mathcal{T}})\) to update its policy \(\pi\). See Appendix A and Lattimore & Szepesvári [34] for further details on multi-armed bandits.

Designing the reward functions. We design the reward function \(\mathcal{R}\) with our desiderata in mind. To ensure that our algorithm adds minimal memory and computational overhead, we consider rewards that utilize information intrinsic to the model and the losses being optimized, not an external model or metric (e.g. accuracy or BLEU). In this work we propose three gradient-based reward functions inspired by previous works: **gradient alignment** [6; 24; 35], **gradient magnitude similarity** [36; 37], and their aggregation. Formally, at turn \(t\) let \(\nabla_{a}\) be the gradient of the auxiliary batch and \(\nabla_{\mathcal{T}}\) be the target dataset gradient. **Gradient alignment** is defined as \(\mathcal{R}^{GA}_{a,t}=\frac{\nabla_{a}\cdot\nabla_{\mathcal{T}}}{\|\nabla_{a}\|_{2}\|\nabla_{\mathcal{T}}\|_{2}}\), i.e. the cosine similarity between the gradients of the sampled auxiliary dataset and the whole target dataset. **Gradient magnitude similarity** is defined as \(\mathcal{R}^{GMS}_{a,t}=\frac{2\|\nabla_{a}\|_{2}\|\nabla_{\mathcal{T}}\|_{2}}{\|\nabla_{a}\|_{2}^{2}+\|\nabla_{\mathcal{T}}\|_{2}^{2}}\), so that when the two gradients have equal magnitude, this value is equal to \(1\), and as the magnitudes differ the value goes to zero. In addition to the individual reward functions, we also consider an aggregate reward. To ensure that the aggregate is not dominated by either individual reward, we normalize \(\mathcal{R}^{GA}\in[0,1]\), the same range as \(\mathcal{R}^{GMS}\), and define the aggregate to be their sum: \(\mathcal{R}^{AGG}_{a,t}=\frac{1+\mathcal{R}^{GA}_{a,t}}{2}+\mathcal{R}^{GMS}_{a,t}\). We provide further discussion on the design of reward functions in Section 6.
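A minimal sketch of these three rewards, assuming the two gradients have been flattened into vectors (we use PyTorch here purely for illustration; this is not the authors' released code):

```python
import torch

def compute_rewards(grad_aux: torch.Tensor, grad_tgt: torch.Tensor):
    """Gradient alignment, gradient magnitude similarity, and their
    aggregate, for flattened auxiliary-batch and target gradients."""
    na, nt = grad_aux.norm(), grad_tgt.norm()
    r_ga = torch.dot(grad_aux, grad_tgt) / (na * nt)   # cosine similarity
    r_gms = (2 * na * nt) / (na ** 2 + nt ** 2)        # 1 iff equal magnitudes
    r_agg = (1 + r_ga) / 2 + r_gms                     # R^GA normalized to [0,1]
    return r_ga.item(), r_gms.item(), r_agg.item()
```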
### Adapting the EXP3 algorithm

EXP3 background. We base our first algorithm, EXP3-FLAD, on the EXP3 algorithm [21] ("_Exp_onential-weight algorithm for _Expl_oration and _Expl_oitation"). EXP3 targets the adversarial MAB setting, which assumes that the reward-generating process is controlled by an adversary who is given access to the learner's policy \(\pi\) and determines the sequence of rewards, \((R_{a,t})_{t=1}^{N}\), for each arm prior to play [38]. We consider the adversarial MAB formulation due to the highly non-convex loss landscape of deep neural networks and our use of stochastic gradient descent-based optimization methods. These factors imply that we cannot guarantee our rewards to be stationary, independent, or follow any particular distribution (e.g. Gaussian). Further details on adversarial MAB are included in Appendix A and in [21]. In EXP3-FLAD, the learner selects arms according to a Gibbs distribution based on the empirically determined importance-weighted rewards of arms. To allow for exploration, we mix the Gibbs distribution with a uniform distribution [21]. Formally, let \(\mathcal{E}_{t}\) be the exploration rate at turn \(t\) and, recalling that \(K=|\mathcal{A}|\) is the number of auxiliary datasets, then \(\pi\) defines the probability of selecting a given arm \(a\in\mathcal{A}\) as the linear combination of Gibbs and uniform distributions \(\pi_{t}(a)=(1-K\mathcal{E}_{t})\frac{\exp(\mathcal{E}_{t-1}\hat{R}_{a})}{\sum_{a^{\prime}}\exp(\mathcal{E}_{t-1}\hat{R}_{a^{\prime}})}+\mathcal{E}_{t}\), where \(\hat{R}_{a,t}\) is the importance-weighted reward \(\hat{R}_{a,t}=\hat{R}_{a,t-1}+\frac{R_{a,t}}{\pi_{t-1}(a)}\). We want the learner to explore more in early training than in later stages, so we use a decaying exploration rate \(\mathcal{E}_{t}=\min\Bigl\{\frac{1}{K},\sqrt{\frac{\ln K}{K\cdot t}}\Bigr\}\), as proposed by Seldin et al. [39]. The use of an importance-weighted estimated reward compensates the rewards of actions that are less likely to be chosen, guaranteeing that the expected estimated reward is equal to the actual reward for each action. EXP3-FLAD is designed to be nearly optimal in the worst case, but due to the exploration rate it will select "bad" actions at a rate of \(\mathcal{E}_{t}\). The exploration of EXP3-FLAD combined with importance weighting allows the policy to handle non-stationary reward-generating processes.

EXP3-FLAD algorithm. At each turn, the learner first computes the current exploration rate \(\mathcal{E}_{t}\). Then, the learner samples an auxiliary dataset \(\mathcal{D}_{a}\) from the distribution defined by \(\pi_{t}(\mathcal{A})\). Next, the learner samples a batch from the selected dataset, \(\{\mathbf{x},\mathbf{y}\}\sim\mathcal{D}_{a}\), and calculates the gradient \(\nabla_{a}=\nabla_{\theta}\mathcal{L}(f_{\theta},\mathbf{x},\mathbf{y})\). Let \(G\) be the number of rounds between model updates; the previous steps repeat \(G\) times, at which point the learner calculates the gradient of the target dataset \(\nabla_{\theta}\mathcal{L}(f_{\theta},\mathcal{D}_{\mathcal{T}})\) and updates the model w.r.t. \(\nabla_{\mathcal{T}}+\sum_{a}\nabla_{a}\). Finally, the importance-weighted reward for each auxiliary batch is calculated using the observed rewards. Pseudocode can be found in Appendix B.
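As a rough illustration of the sampling and update steps just described (this is our own sketch, not the pseudocode of Appendix B; the max-subtraction for numerical stability is our addition):

```python
import math, random

class EXP3FLAD:
    """Sketch of EXP3-FLAD: a Gibbs distribution over importance-weighted
    cumulative rewards, mixed with uniform exploration at a decaying rate."""

    def __init__(self, num_aux_datasets: int):
        self.K = num_aux_datasets
        self.R_hat = [0.0] * self.K     # importance-weighted cumulative rewards
        self.t = 1
        self.probs = [1.0 / self.K] * self.K

    def exploration_rate(self) -> float:
        return min(1.0 / self.K, math.sqrt(math.log(self.K) / (self.K * self.t)))

    def sample_arm(self) -> int:
        eps = self.exploration_rate()
        m = max(self.R_hat)             # subtract max for numerical stability
        gibbs = [math.exp(eps * (r - m)) for r in self.R_hat]
        z = sum(gibbs)
        self.probs = [(1 - self.K * eps) * g / z + eps for g in gibbs]
        return random.choices(range(self.K), weights=self.probs)[0]

    def update(self, arm: int, reward: float):
        # Importance weighting keeps the estimated reward unbiased.
        self.R_hat[arm] += reward / self.probs[arm]
        self.t += 1
```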
### Adapting the UCB1 algorithm

UCB1 background. While EXP3-FLAD is applicable in unconstrained settings with highly stochastic and non-stationary rewards, it can be outperformed by other algorithms in settings that _are_ constrained. One such algorithm is the upper confidence bound (UCB1) algorithm [22], which was originally designed to be optimal for stationary, normally distributed reward functions. Nevertheless, variants of UCB1 have been demonstrated to be effective in a range of settings, such as those involving non-stationary, sub-Gaussian, or heavy-tailed distributions [40; 41]. The UCB1 algorithm and its variants assign each arm a value called the upper confidence bound, based on Hoeffding's inequality [42], and are based on the principle of _optimism in the face of uncertainty_, meaning that with high probability the upper confidence bound assigned to each arm is an overestimate of the unknown mean reward. In UCB1-FLAD, the learner greedily selects arms according to their upper confidence bound. UCB1 was originally designed for stationary reward-generating processes, so to accommodate non-stationarity we include an exponential moving average when estimating the mean reward for a given arm. Formally, let \(R_{a,t}\) be the observed reward for arm \(a\) at turn \(t\); then we calculate the estimated mean reward as \(\hat{R}_{a}=(1-\beta)\hat{R}_{a}+\beta R_{a,t}\), where \(\beta\) is the smoothing factor. Then, we define the upper confidence bound to be \(UCB_{a,t}=\hat{R}_{a}+\sqrt{\frac{2\ln t}{n_{a}}}\). In the original MAB setting all interactions with the environment occur online, but FLAD is a unique situation where the learner can interact with the auxiliary data prior to training. To take advantage of this, rather than initializing estimated rewards with a single mini-batch, we initialize them with larger data quantities to improve the approximation of the true dataset gradients. This is done for each auxiliary dataset by calculating the gradient \(\nabla_{a}=\nabla_{\theta}\mathcal{L}(f_{\theta},\mathbf{x},\mathbf{y})\), where the number of samples in \(\{\mathbf{x},\mathbf{y}\}\) is significantly larger than a mini-batch, and can be up to the size of the full dataset.

UCB1-FLAD algorithm. At each turn, the learner plays the arm with the largest upper confidence bound, \(a^{*}=\arg\max_{a\in\mathcal{A}}UCB_{a,t}\). Next, the learner samples a batch from the selected dataset \(\{\mathbf{x},\mathbf{y}\}\sim\mathcal{D}_{a}\), and calculates the gradient \(\nabla_{a}=\nabla_{\theta}\mathcal{L}(f_{\theta},\mathbf{x},\mathbf{y})\). As in EXP3-FLAD, we repeat the previous steps \(G\) times, at which point the learner calculates the gradient of the target dataset \(\nabla_{\theta}\mathcal{L}(f_{\theta},\mathcal{D}_{\mathcal{T}})\) and updates the model w.r.t. \(\nabla_{\mathcal{T}}+\sum_{a}\nabla_{a}\). Finally, the smoothed estimated mean reward is calculated. Pseudocode can be found in Appendix B.
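In the same illustrative spirit, a sketch of the UCB1-FLAD selection and update (again not the pseudocode of Appendix B; estimated rewards are assumed to be pre-initialized from large auxiliary batches as described above):

```python
import math

class UCB1FLAD:
    """Sketch of UCB1-FLAD: greedy selection by upper confidence bound, with
    an exponential moving average to tolerate non-stationary rewards."""

    def __init__(self, init_rewards: list, beta: float = 0.9):
        self.R_hat = list(init_rewards)   # pre-initialized estimated rewards
        self.counts = [1] * len(init_rewards)
        self.beta = beta
        self.t = 1

    def select_arm(self) -> int:
        ucb = [r + math.sqrt(2 * math.log(self.t) / n)
               for r, n in zip(self.R_hat, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm: int, reward: float):
        # Exponential moving average of the observed rewards.
        self.R_hat[arm] = (1 - self.beta) * self.R_hat[arm] + self.beta * reward
        self.counts[arm] += 1
        self.t += 1
```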
## 4 Experimental setup

Models. For our experiments, we utilize encoder-decoder Transformer models from the T5 family of pre-trained language models [43]. Specifically, we experiment with LM-adapted T5 (T5-LM) and T0. The T5-LM model further trains the T5.1.1 model for 100,000 steps (corresponding to 100B tokens) from the C4 dataset [43] on the prefix language modeling objective [44]. The T0 model was initialized from T5-LM and further trained on a multitask mixture of prompted datasets as described by Sanh et al. [16]. We repeat each experiment with T5-LM XL (hereafter **T5-XL**) and **T0-3B** as our base model. Both models use the same architecture with 2.85 billion parameters, and we used model checkpoints from Hugging Face Transformers [45].

Target datasets. We obtain all datasets from Hugging Face Datasets1, and cast them to the text-to-text format by applying prompt templates from the Public Pool of Prompts (P3) [23] that was used to train T0. To evaluate our few-shot methods, we utilize the same held-out datasets as T0, which cover four distinct tasks: **sentence completion** (COPA [46], HellaSwag [47], Story Cloze [48]), **natural language inference** (ANLI [49], CB [50], RTE [51]), **coreference resolution** (WSC [52], Winogrande [53]), and **word sense disambiguation** (WiC [54]). For each dataset, we randomly sample five few-shot splits from their training data, containing the same number of training examples as previous works, between 20 and 70 [55; 56]. We further divide each split into equal training and validation partitions for true few-shot learning [57] (e.g. 10 train and 10 validation samples for HellaSwag). Only the ANLI datasets have a publicly available test set, so for all other datasets we evaluate models on the original validation set (not utilized for few-shot training or validation).

Footnote 1: https://huggingface.co/datasets

Auxiliary datasets. We compare the performance of our methods using two sets of auxiliary data, and never include any of the target datasets as part of the auxiliary data. First, we use the collection of datasets used for multitask training of T0 (henceforth referred to as T0Mix), including 35 unique datasets covering question answering, sentiment analysis, topic classification, summarization, paraphrase detection and structure-to-text. Second, we utilize all datasets in P3 [23] (which forms a superset of T0Mix) and prevent data leakage by filtering out datasets that overlap with any target dataset, leading to 260 available datasets (list in Appendix G). For each auxiliary dataset, we use at most 10,000 of the dataset's examples.

Baseline methods. We compare our proposed methods with several FLAD and non-FLAD baselines. **Target-Only** (non-FLAD) directly fine-tunes the base model on the target dataset (i.e. without using auxiliary data). **Explore-Only** [8] is a commonly used FLAD method which simultaneously trains on auxiliary and target data by mixing auxiliary datasets equally. We call this Explore-Only because it is equivalent to continuously exploring auxiliary data and never exploiting knowledge of its relation to the target data. **Exploit-Only** extends Explore-Only by computing gradient alignment prior to training (as in UCB1), and multitask-trains the model by mixing auxiliary datasets according to a Gibbs distribution over the alignments (similar to that in EXP3), resulting in an algorithm that exploits the relations determined prior to training but never explores. Both Explore- and Exploit-Only mix target and auxiliary data with a ratio of \(M\) times the highest auxiliary sampling probability. For instance, Explore-Only with \(M=5\) and \(\mathcal{D}_{\mathcal{A}}=\mathrm{T0Mix}\) has a \(1/35\) probability to sample auxiliary dataset \(\mathcal{D}_{a}\in\mathcal{D}_{\mathcal{A}}\) and a \(5/35\) probability for the target dataset. **Loss-Scaling** [6] is a FLAD method similar to EXP3 and UCB1; the main difference is that it scales auxiliary batch losses by their gradient alignment instead of modifying sampling probabilities. Du et al. [6] originally propose to use gradient alignment (**Loss-Scaling (\(GA\))**), but we also propose a version that scales losses by gradient magnitude similarity (**Loss-Scaling (\(GMS\))**).

Training details. For the target-only baseline, we use learning rates in \(\{1e{-}4,3e{-}4\}\). For all other methods, we always use a learning rate of \(1e{-}4\).
For target-, explore-, and exploit-only baselines we use batch sizes in \(\{32,128\}\). For loss-scaling, EXP3-FLAD, and UCB1-FLAD we use mini-batches of 8 samples and let \(G\) be in \(\{4,16\}\) to match the batch size of all methods. For explore- and exploit-only, we use a target dataset mixing ratio of \(M\in\{1,5,10\}\). For all experiments we use the Adafactor optimizer [58] and validation-based early stopping for model checkpoint selection. In preliminary experiments we consider rewards using gradients from various model partitions: the full model, encoder-only, decoder-only, and the weights of the output vocabulary matrix (language modeling head). We find that using the parameters from the language modeling head provides the best performance and contains only 2.3% of the full model parameters, significantly reducing memory consumption. For UCB1-FLAD we found the smoothing factor \(\beta=0.9\) to work well in preliminary experiments, and we initialize auxiliary dataset gradient alignment using 1,000 samples from each auxiliary dataset. Additional implementation details can be found in Appendix C.

Experiment procedure. The FLAD experiment process involves training a model that is specialized for each target dataset. For each proposed method and baseline, we train and evaluate a model on each of the 11 target datasets. We repeat training and evaluation on 5 random seeds and include the aggregated results in Table 1. Each cell shows the accuracy averaged across all 55 (11 target datasets, 5 random seeds) experiments. This experimental process is performed for each training method on both models and auxiliary datasets. We include the non-aggregated results in Appendix D.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Base Model** & \multicolumn{2}{c|}{**T5-XL**} & \multicolumn{2}{c}{**T0-3B**} \\ **Training Method** \(\backslash\) **Auxiliary Data** & _T0Mix_ & _P3_ & _T0Mix_ & _P3_ \\ \hline Target-Only & \multicolumn{2}{c|}{52.82} & \multicolumn{2}{c}{56.44} \\ Loss-Scaling [6] (\(GA\)) & 53.22 & 55.19 & 59.47 & 60.66 \\ Loss-Scaling [6] (\(GMS\)) & 55.98 & 56.40 & 60.47 & 60.70 \\ Explore-Only [8] & 59.18 & 60.64 & 61.17 & 62.77 \\ Exploit-Only [8] & 59.79 & 60.49 & 60.87 & 62.87 \\ EXP3-FLAD (\(\mathcal{R}^{GA}\)) & 61.50 & 64.07 & 62.87 & 65.98 \\ UCB1-FLAD (\(\mathcal{R}^{GA}\)) & 62.01 & 65.52 & 62.35 & 66.29 \\ EXP3-FLAD (\(\mathcal{R}^{GMS}\)) & 61.72 & 65.57 & 62.78 & 65.51 \\ UCB1-FLAD (\(\mathcal{R}^{GMS}\)) & 61.67 & 65.21 & 62.85 & 66.00 \\ EXP3-FLAD (\(\mathcal{R}^{AGG}\)) & 62.05 & 65.47 & 62.84 & **66.84** \\ UCB1-FLAD (\(\mathcal{R}^{AGG}\)) & **62.08** & **65.63** & **62.93** & 66.29 \\ \hline \hline \end{tabular} \end{table} Table 1: **Main results.** Each cell contains the score of training a base model (top row) with auxiliary data (second row) using the specified training method (left column), averaged across 11 target datasets on 5 random seeds (each cell is the average of 55 experiments). Target-Only does not utilize auxiliary data. **Bolded** scores are those with the highest mean for a given base model and auxiliary dataset (column-wise); underlined scores are those where a Wilcoxon rank-sum test fails to find a significant difference from the highest score (\(p>0.05\)). Expanded results are found in Appendix D.

## 5 Findings and analysis

In Table 1 we compare the empirical results of our MAB-based methods (EXP3-FLAD and UCB1-FLAD) and corresponding baselines on 11 target datasets (expanded results in Appendix D).
For each base model and auxiliary data combination (each column), EXP3-FLAD and UCB1-FLAD outperform all the baselines. In fact, we find that _for every single task_ our methods always perform equal to or better than the baselines. This demonstrates that our MAB-based methods provide a strong improvement in few-shot generalization over previous FLAD methods. For a fair comparison where each method utilizes equal data, we compare the performance of Target-Only using T0 and T0Mix (56.44) against the proposed FLAD methods and baselines using T5 and T0Mix (left column). From this comparison it becomes clear that Loss-Scaling actually does worse than multitask training followed by direct fine-tuning, by 0.5-3.2%. However, we do find that the remaining FLAD methods lead to improvements (between 2.7-5.6% absolute improvement). We find small performance differences between EXP3-FLAD and UCB1-FLAD across the three reward functions. In general, \(\mathcal{R}^{AGG}\) leads to the best performance, but we perform a two-sided Wilcoxon rank-sum test to check for significance between average scores and find that the other rewards frequently have no significant difference (\(p>0.05\)).

The importance of prioritized sampling. Loss-Scaling was originally proposed for use with only a single auxiliary dataset, and it was unclear, a priori, how it would cope with larger quantities. Additionally, Du et al. [6] purposefully choose an auxiliary dataset that is related to the target, while in our setting we make no such assumptions. We find that our methods outperform Loss-Scaling methods by 6.3% on average. In Figure 3 (and Figure 4 in Appendix E) we show that, over the course of training, the values of gradient alignment and gradient magnitude similarity for most datasets converge to 0, leading to very small gradient updates for Loss-Scaling. More importantly, _the auxiliary data that is relevant to the target task is seen less frequently for Loss-Scaling_ than for our MAB-based methods. This can be seen by comparing the difference in performance of Loss-Scaling methods when using less (T0Mix) vs. more (P3) auxiliary data. We find that, at best, Loss-Scaling (\(GA\)) improves 2% when using T5 and, at worst, only 0.2% for Loss-Scaling (\(GMS\)) with T0. This is compared with the notable improvements of EXP3-FLAD and UCB1-FLAD of 2.6-4% when considering the same data increase from T0Mix to P3.

The importance of exploration _and_ exploitation. Interestingly, we expected that Exploit-Only would outperform the Explore-Only method because it utilizes relational information between the target and auxiliary tasks, but we find no statistical difference between the methods (a two-sided Wilcoxon rank-sum test gives \(p>0.05\)). Furthermore, when comparing the ability to leverage additional auxiliary data (i.e. going from T0Mix to all of P3), we find that the improvement for Explore- and Exploit-Only methods is minimal, with only 0.7-2% improvement. On the other hand, EXP3-FLAD and UCB1-FLAD show a notable improvement of 2.6-4%, emphasizing the importance of both exploration _and_ exploitation, particularly when dealing with large collections of auxiliary data.

FLAD provides improved generalization over non-FLAD methods. Next, we compare the performance of our best models trained on P3 using \(\mathcal{R}^{AGG}\) with state-of-the-art few-shot methods: T-Few, DEFT-Few, and GPT-3.
T-Few [56] is a variant of the T0-3B model that multi-task pre-trains parameter-efficient (IA)\({}^{3}\) modules followed by target-only fine-tuning of the (IA)\({}^{3}\) modules. DEFT-Few [28] is a variant of the T5-XL model that uses retrieved auxiliary data for multi-task training. It first trains a T5-XL model on the 500 nearest-neighbor samples from P3 using 1000 unlabeled target dataset samples, and then performs few-shot target-only fine-tuning with the (IA)\({}^{3}\) modules from Liu et al. [56]. Finally, we also compare against the 175 billion parameter variant of GPT-3 [55], which utilizes in-context learning. We find that, on average, models trained using our FLAD-based methods outperform all other methods and, to the best of our knowledge, our methods lead to the first 3 billion parameter model that outperforms GPT-3 on this dataset mixture (the previous smallest models have 11 billion parameters), despite using \(62.5\) times fewer parameters than GPT-3. Additionally, we find that our FLAD-based methods provide robust performance across datasets, achieving the best or second-best performance on \(8/11\) datasets, and never performing worst. The use of task-specific modules leads T-Few and DEFT-Few to significant improvements over target-only fine-tuning, preventing the models from ending up in poor local minima. However, these results demonstrate that, with the same data, simultaneously fine-tuning with auxiliary and target data leads to improved few-shot generalization, providing a complementary means of improving performance.

Investigating the Reward-Generating Processes. In Section 3.2, we mention that due to the highly non-convex loss landscape and the use of stochastic gradient descent-based optimization techniques, we cannot ensure that our reward-generating process is stationary, independent across auxiliary datasets, or follows a normal distribution. To gain a deeper understanding of our reward-generating processes, we examine the distribution of each reward using 5,000 samples from all 35 auxiliary datasets of T0Mix and 32 samples from a few-shot target dataset, WSC [52]. Resulting histograms at every 100 steps can be found in Appendix E, and Figure 3 shows an abbreviated version. The left side of Figure 3 demonstrates that for \(\mathcal{R}^{GA}\), almost every dataset yields a Gaussian reward distribution, with a few multi-modal distributions. Notably, WikiBio [59] (dark orange) exhibits peaks at 0.25 and -0.75. Interestingly, \(\mathcal{R}^{GA}\) results in polarized rewards across datasets, with minimal distribution density between -0.75 and 0.25. In contrast, the right side of Figure 3 displays more non-Gaussian distributions for \(\mathcal{R}^{GMS}\), as well as flatter distributions compared to \(\mathcal{R}^{GA}\). Remarkably, we observe that \(\mathcal{R}^{GA}\) produces more stationary reward distributions, as the distribution for almost every dataset (30/35) converges rapidly towards 0 after only 100 steps. Although most distributions for \(\mathcal{R}^{GMS}\) also converge towards 0, the convergence occurs at a slower pace, taking nearly 500 steps.

Probing the training dynamics. To better understand the training dynamics of our proposed methods, we perform a case study on T5-XL with T0Mix and \(\mathcal{R}^{GA}\) and find two datasets where either algorithm improves significantly over the other (full details and figures in Appendix F). First, we study RTE, where UCB1-FLAD outperforms EXP3-FLAD.
We calculate the empirical distribution of samples seen from each auxiliary dataset and find that EXP3-FLAD samples nearly uniformly from all datasets, while UCB1-FLAD forms a bimodal sampling distribution with peaks at 2.5% and 3.25% (a 30% relative difference). The uniformity of the EXP3-FLAD distribution is counterintuitive: we do find that it achieves separation between auxiliary tasks in the cumulative estimated reward (as shown in Figure 6), but this does not lead to separation in the sampling probability space. Additionally, we find that even on COPA, where EXP3-FLAD outperforms UCB1-FLAD, EXP3-FLAD still achieves good separation between cumulative estimated rewards but has a unimodal sampling distribution, while UCB1-FLAD does not have as clear of a bimodal distribution as in RTE. The difference in empirical sampling distributions is likely due to the difference between the greedy policy of UCB1-FLAD and the stochastic policy of EXP3-FLAD. Empirically, we find that EXP3-FLAD very rarely assigns an auxiliary dataset a probability \(<1\%\), leading to many "bad" batches over the course of thousands of turns. On the other hand, the optimistic policy of UCB1-FLAD spends much less time exploring and will sample "bad" batches much less frequently.

Figure 2: **Comparison of state-of-the-art few-shot methods with FLAD methods trained on P3 using \(\mathcal{R}^{\text{AGG}}\). T-Few scores are from [56]. DEFT-Few scores are from [28]. GPT-3 scores are from [55] and utilize few-shot in-context learning. All models utilize the same number of few-shot examples and (other than GPT-3) have 3B parameters.**

## 6 Discussion

Discussion on reward functions.In FLAD we want to prioritize training on auxiliary datasets with solution spaces similar to the target task's without overfitting to the few-shot target data, and our reward functions are designed to serve this goal. To better understand the reward signal of our aggregate reward, \(\mathcal{R}^{AGG}\), we examine four combinations of rewards: low \(\mathcal{R}^{GA}\) and \(\mathcal{R}^{GMS}\), high \(\mathcal{R}^{GA}\) but low \(\mathcal{R}^{GMS}\), low \(\mathcal{R}^{GA}\) but high \(\mathcal{R}^{GMS}\), and high \(\mathcal{R}^{GA}\) and \(\mathcal{R}^{GMS}\). When both rewards are high, we can assume that the auxiliary gradient is useful. However, when one reward is high and the other is low, it is difficult to draw conclusions: a high \(\mathcal{R}^{GA}\) on its own means the auxiliary gradient will update weights in the right direction, but a low \(\mathcal{R}^{GMS}\) can mean that we significantly overshoot _or_ undershoot the target, where overshooting can be much more detrimental than undershooting. If both \(\mathcal{R}^{GA}\) and \(\mathcal{R}^{GMS}\) are small, we know the auxiliary gradient leads us away from the target solution space, but we don't know if its magnitude is much larger or smaller than the target's. At the beginning of training, we can't know whether the target or auxiliary gradient has larger magnitude, but as training progresses, it becomes significantly more likely that the auxiliary gradient is greater than the target. Thus, when both \(\mathcal{R}^{GA}\) and \(\mathcal{R}^{GMS}\) are low, we are likely to be pulled far from our current solution. This work uses training set-based rewards, but validation set-based rewards are also possible. One downside of validation-based rewards is that they require computing a validation score frequently, which increases computational complexity.
Additionally, we focus on the few-shot setting and use validation-based early stopping. If we use a validation-based reward, then to prevent overfitting we will need to further split the data into three partitions: train, early-stopping validation, and reward validation.

Choice of baselines.With respect to the number of auxiliary datasets \(|\mathcal{A}|\) and target datasets \(|\mathcal{T}|\), our methods and the baselines we compare against have a computational complexity of \(O(|\mathcal{T}|)\), independent of \(|\mathcal{A}|\). For our model and these baselines, the models we train require \(\sim 6\) GPU-hours per target dataset. If we were to consider a baseline whose computation grows linearly w.r.t. \(|\mathcal{A}|\), i.e. \(O(|\mathcal{A}||\mathcal{T}|)\) (e.g. [8; 18]), these experiments would not be feasible without a large amount of hardware: _Training such a model with T0Mix would take over 200 GPU-hours (over 8 GPU-days) for a single target dataset_, and over 1500 GPU-hours (_over 2 GPU-months_) when using all of P3.

Why we don't include theoretical guarantees.The design of MAB algorithms generally comes with theoretical proofs of regret bounds, but we do not include them in this work. Although we _can_ make guarantees on the regret bounds of our algorithms, they would not be meaningful: our regret bounds would be measured with respect to the rewards, but our objective is to train a model for a target task, which is measured by accuracy on a held-out dataset and not by the reward.

Figure 3: **Reward distributions of \(R^{GA}\) and \(R^{GMS}\) prior to training (step 0) and after 300 gradient updates for the T5-XL model with T0Mix as the auxiliary dataset and WSC [52] as the target dataset. For each step we show the histograms of reward distributions for all 35 auxiliary datasets.**

## 7 Conclusion

Recall the desiderata for our algorithm, expressed in the introduction: our algorithm should **(1)** make no assumptions on the available auxiliary data a priori, **(2)** scale well with the number of auxiliary datasets, and **(3)** add minimal memory and computational overhead. **(1)** When designing our algorithm, we purposefully formulate the problem as a multi-armed bandit. MAB algorithms, in general, make no assumptions on the quality of rewards and, in particular, EXP3 even assumes that the auxiliary datasets will play an adversarial role when returning rewards. **(2)** As previously mentioned, our algorithms have a computational complexity that is independent of the number of auxiliary datasets. **(3)** Finally, our method adds minimal computational overhead beyond the usual training computations. Every gradient that we utilize for our reward functions is also used to update the model, adding no additional computations. The only computational overhead is to compute gradient alignment (three vector dot products, two scalar square roots, and two scalar multiplications) or magnitude similarity (four vector dot products, two scalar square roots, three scalar multiplications, and one scalar addition). Additionally, our method adds a small amount of memory overhead, used to store gradients between model updates. Our rewards consider only the gradient w.r.t. the language modelling head and, in practice, require 0.25Gb per auxiliary gradient to store, slightly increasing the space complexity above standard fine-tuning.
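To make this bookkeeping concrete, the following is a minimal sketch (our own illustration in NumPy, not the released implementation; the exact magnitude-similarity formula used by the method is an assumption here) of the two rewards computed from flattened language-modelling-head gradients:

```python
import numpy as np

def gradient_alignment(g_aux: np.ndarray, g_tgt: np.ndarray) -> float:
    """Cosine similarity between the auxiliary and target gradients,
    built from the handful of dot products and square roots counted
    in the text."""
    dot = g_aux @ g_tgt
    norm_a = np.sqrt(g_aux @ g_aux)
    norm_t = np.sqrt(g_tgt @ g_tgt)
    return float(dot / (norm_a * norm_t))

def gradient_magnitude_similarity(g_aux: np.ndarray, g_tgt: np.ndarray) -> float:
    """One common magnitude-similarity definition (assumed here):
    2*||g_a||*||g_t|| / (||g_a||^2 + ||g_t||^2), equal to 1 when the two
    norms match and decaying toward 0 as they diverge."""
    sq_a = g_aux @ g_aux  # ||g_a||^2
    sq_t = g_tgt @ g_tgt  # ||g_t||^2
    return float(2.0 * np.sqrt(sq_a) * np.sqrt(sq_t) / (sq_a + sq_t))
```

Storing each flattened auxiliary gradient between model updates is what produces the small memory overhead described above.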
The methods proposed in this work demonstrate the effectiveness of simultaneous training on auxiliary and target datasets in few-shot settings, continuously updating beliefs by exploring _and_ exploiting auxiliary data, and framing FLAD as a MAB problem. We further showed that by satisfying our desiderata, we are able to scale up FLAD to hundreds of auxiliary datasets and outperform traditional few-shot fine-tuning and in-context learning methods. While the presented algorithms satisfy our desiderata, the findings from this study can inform future work to further improve upon these methods in a number of ways, such as improving the reward function and reducing the space complexity.
2309.01618
Critical Behavioral Traits Foster Peer Engagement in Online Mental Health Communities
Online Mental Health Communities (OMHCs), such as Reddit, have witnessed a surge in popularity as go-to platforms for seeking information and support in managing mental health needs. Platforms like Reddit offer immediate interactions with peers, granting users a vital space for seeking mental health assistance. However, the largely unregulated nature of these platforms introduces intricate challenges for both users and society at large. This study explores the factors that drive peer engagement within counseling threads, aiming to enhance our understanding of this critical phenomenon. We introduce BeCOPE, a novel behavior encoded Peer counseling dataset comprising over 10,118 posts and 58,279 comments sourced from 21 mental health-specific subreddits. The dataset is annotated using three major fine-grained behavior labels: (a) intent, (b) criticism, and (c) readability, along with the emotion labels. Our analysis indicates the prominence of ``self-criticism'' as the most prevalent form of criticism expressed by help-seekers, accounting for a significant 43% of interactions. Intriguingly, we observe that individuals who explicitly express their need for help are 18.01% more likely to receive assistance compared to those who present ``surveys'' or engage in ``rants.'' Furthermore, we highlight the pivotal role of well-articulated problem descriptions, showing that superior readability effectively doubles the likelihood of receiving the sought-after support. Our study emphasizes the essential role of OMHCs in offering personalized guidance and unveils behavior-driven engagement patterns.
Aseem Srivastava, Tanya Gupta, Alison Cerezo, Sarah Peregrine, Lord, Md Shad Akhtar, Tanmoy Chakraborty
2023-09-04T14:00:12Z
http://arxiv.org/abs/2309.01618v1
# Critical Behavioral Traits Foster Peer Engagement in Online Mental Health Communities ###### Abstract Online Mental Health Communities (OMHCs), such as Reddit, have witnessed a surge in popularity as go-to platforms for seeking information and support in managing mental health needs. Platforms like Reddit offer immediate interactions with peers, granting users a vital space for seeking mental health assistance. However, the largely unregulated nature of these platforms introduces intricate challenges for both users and society at large. This study explores the factors that drive peer engagement within counseling threads, aiming to enhance our understanding of this critical phenomenon. We introduce BeCOPE, a novel behavior encoded Peer counseling dataset comprising over \(10,118\) posts and \(58,279\) comments sourced from \(21\) mental health-specific subreddits. The dataset is annotated using three major fine-grained behavior labels: (a) intent, (b) criticism, and (c) readability, along with the emotion labels. Our analysis indicates the prominence of "self-criticism" as the most prevalent form of criticism expressed by help-seekers, accounting for a significant \(43\%\) of interactions. Intriguingly, we observe that individuals who explicitly express their need for help are \(18.01\%\) more likely to receive assistance compared to those who present "surveys" or engage in "rants." Furthermore, we highlight the pivotal role of well-articulated problem descriptions, showing that superior readability effectively doubles the likelihood of receiving the sought-after support. Our study emphasizes the essential role of OMHCs in offering personalized guidance and unveils behavior-driven engagement patterns. The prevalence of mental health distress has risen sharply in the last several years. A recent report reveals that one in six individuals suffers from mental health-related challenges1.At the same time, there is a severe shortage of mental health providers to facilitate adequate support to those in need [21, 2]. As a result of these growing challenges, we specifically examined the patterns and factors that drive individuals to engage with peer-to-peer mental health threads, focusing on the impact of behavioral, emotional, textual, and topical signals during peer-to-peer interactions. Footnote 1: [https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care](https://www.who.int/news/item/17-06-2022-who-highlights-urgent-need-to-transform-mental-health-and-mental-health-care) To this end, we develop the BeCOPE (BEhavior enCOded PEer Counseling) dataset, composed of peer-to-peer mental health conversational interactions across \(10,118\) posts and \(58,279\) comments from \(21\) mental health-specific subreddits. We inspect the level of engagement on Reddit for three different OMHC categories - (a) interactive, (b) non-interactive, and (c) isolated - based on the pattern of interaction between users and the original help-seeker (see Figure 1). Analyzing the critical factors in each engagement category, we comprehend factors and patterns that lead to constructive versus detrimental peer-to-peer mental health interactions. Understanding peer-to-peer interactions on OMHCs is key to the ethical and safe monitoring of these communities, including the moderation of safe interactions and sharing of accurate mental health information. We explore the following research questions: 1. 
**RQ1:** When examining peer-to-peer OMHC interactions, how do intent (i.e., help-seeking), readability, and criticism impact peer willingness to engage with the original post (e.g., validation, advice giving)? 2. **RQ2:** How does the expression of emotions in posts impact user engagement on OMHC platforms?

Reddit is a popular OMHC platform that has steadily emerged as a venue for seeking help concerning a spectrum of mental health challenges, with communities devoted to disorders such as depression, attention-deficit/hyperactivity disorder (ADHD, sometimes ADD), bipolar disorder, and alcohol and substance use [3, 4, 5]. Typically, users (i.e., support-seekers) create original posts to discuss their mental health issues, describing their symptoms and the contexts of their specific situations, like job loss or a recent divorce. The support-seekers, in turn, receive replies from peers (other users on the platform) with advice, recommendations for symptom management, and general support. This process allows support-seekers to share and ask for help with their mental health challenges in a cost-effective, convenient, and anonymous manner that typically results in immediate support. A recent study [6] analyzed patterns of posts on two popular OMHC platforms, Talklife and Reddit, by leveraging natural language processing for communication models in human-computer interaction and communication theory, operationalizing a set of four engagement indicators based on attention and interaction. The authors found that back-and-forth peer communication on these platforms effectively contributes to early support. A similar study [7] examined the change in sentiment in peer-to-peer counseling settings to determine whether a counseling thread or a post on the platform is correlated with a moment of cognitive change. It turned out that behavioral signals such as sentiment, affect, and topics associated with language are decisive for effective counseling. On the same track, another study discussed temporal engagement on social media correlating with patient disclosure [8]. The authors developed an autoregressive time-series computational model that assesses engagement patterns and subsequently forecasts alterations in the intimacy of disclosures.
They found that attributes of audience engagement, like emotional support, personal behavior, and self-disclosure, strongly predict patterns in future counseling behavior. Previous studies on the analysis of peer-to-peer mental health interactions identified threads that fall into affective [5], content-based [9, 10], and supportive [11] categories, thus demonstrating reliability for the functioning of peer-to-peer mental health platforms. However, little is known about how these categories of peer-to-peer mental health interactions are associated with constructive and/or detrimental outcomes. Given our understanding of the characteristics of OMHC users [12, 13, 14, 15] and the widespread use of OMHC platforms, the specific patterns and factors that drive engagement in peer-to-peer mental health interactions must be identified [16, 17, 18]. In doing so, social media platforms should be better able to monitor and intervene for the benefit of their users in distress [19, 20, 21].

## Results

RQ1: When examining peer-to-peer OMHC interactions, how do intent (i.e., help-seeking), readability, and criticism impact peer willingness to engage with the original post (e.g., validation, advice giving)?

Intent.We observe that help-seekers on OMHC platforms are 18.01% more likely to receive help when they explicitly convey their pressing needs through queries, as opposed to when they merely make statements about their experiences. When an original post contains a help-seeking approach, it increases peer engagement. Specifically, 45.35% of interactive posts, 42.16% of non-interactive posts, and 27.34% of isolated posts are help-seeking in nature, indicating that users who explicitly ask for help with their mental health issues experience greater peer engagement. We also observe that when an original post is constructed as a "rant" (a long statement of the problem with no explicit ask for help/advice), it receives less peer engagement. The number of isolated posts labelled with the rant intent (38.11%) exceeds non-interactive posts (34.73%) and interactive posts (32.56%) by margins of 3.38% and 5.55%, respectively. Further, posts with rant intent receive the least interaction compared to other intent labels across all engagement categories, showing that rants do not elicit peers' attention toward assistance. Our analysis thus sheds light on RQ1 by indicating that the conveyance of explicit intentions through queries, or the articulation of pressing needs on the OMHC platforms, yields a more efficacious response. We present the distribution of intents across the three engagement categories in Figure 2(a). The four annotated intent labels receive a significant agreement score with a confidence \(\geq\) 95% on the \(p\)-values of help-seeking (0.022), rant (0.046), chitchat (0.016), and survey (0.028). Furthermore, _SI Appendix_ (Section 1) presents fine-grained details of intent labels and their annotation.

Figure 1: **Taxonomy of counseling methods along with examples.** Here, OP (original poster) is a common Internet terminology for the person who creates posts on peer-to-peer platforms. In peer-to-peer therapy, we inspect the level of engagement in three different categories based on the abundance of interaction with the help-seeker – **(a) interactive:** if there are back-and-forth conversations between the OP and peers, **(b) non-interactive:** if the post engages peers, but the OP does not reply to peers, and **(c) isolated:** if the post does not have any comment. In contrast, one-to-one therapy involves the continuous exchange of dialogues between therapist and client (help-seeker).

Figure 2: **Distribution of behavioral signals and readability in BeCOPE across all engagement categories.** **(a)** The intent distribution indicates that a majority (45.35%) of posts show explicit intentions (seek-help) through queries or the articulation of pressing needs on OMHC platforms, yielding a more productive response as opposed to merely airing surveys or rants. **(b)** The criticism distribution shows that help-seekers are most likely to engage in self-criticism (43.32%), and those who criticise others openly with proper reasoning are more likely to receive assistance. **(c)** The readability statistics of posts in BeCOPE show that well-written posts receive 2.2\(\times\) more support (responses) as compared to poorly written posts.

Criticism.We observe that isolated posts have the maximum share of _no-criticism_ (NC) labels (50.34%) as compared to non-interactive (34.92%) and interactive (34.87%) posts. Figure 2(b) shows the distribution of _criticism_ labels across all engagement categories. Conversely, individuals who obtain support from their peers on OMHCs are frequently found to engage in criticising themselves and others. We bifurcate the criticism of others into two indicative categories – _criticism with reasoning_ (CR), i.e., a logical presentation of one's experience, and _criticism with no reasoning_ (CNR). Out of all three engagement categories, interactive engagement carries the maximum share of the CR label, 2.75% and 5.39% more than the non-interactive and isolated engagement categories, respectively. This trend directly draws attention to the fact that proper reasoning in criticism is vital for receiving help. In contrast, CNR is most prevalent in the isolated engagement category, highlighting that criticism without proper reasoning only adds noisy understanding to the reader's mind. Similarly, _self-criticism_ is the most prevalent type of criticism among those who receive help. This implies that people seeking support are more likely to engage in self-criticism, and those who express their emotions more openly are more likely to receive assistance. As a result, we infer that peers who criticise with a profound comprehension of the topic at hand are more apt to receive assistance. The four annotated criticism labels receive an adequate agreement score with confidence \(\geq\) 95% on the \(p\)-values of _criticism w/ reasoning_ (0.043), _criticism w/ no reasoning_ (0.010), _no criticism_ (0.009), and _self-criticism_ (0.035). Additional details related to the selection and annotation of the criticism labels are presented in _SI Appendix_ (Section 1).

Readability.We hypothesize that well-written posts (i.e., easier to read) foster better understanding and subsequently attract more peers to engage. Our initial observation supports the hypothesis: most of the posts in the BeCOPE dataset are hard to read, i.e., rated \(\leq\) 2 on a scale of 1 to 5, with 1 being the least comprehensible. Our analyses reveal that posts scoring higher in readability result in 2.2\(\times\) greater support from peers, as shown in Figure 2(c). We further employ experts in linguistics to understand what contributes more toward understanding posts.
We observe that factors like the length of the post, the division into paragraphs and listicles, grammar, spelling, clarity of the issue, and usage of short forms (SMS language) are critical considerations that peers take into account when reading and deciding to engage with a post. The readability score receives significant confidence of \(\geq\) 95%, with an average \(p\)-value of 0.040 across all five labels. More details related to the annotation of the readability score are presented in _SI Appendix_ (Section 1).

RQ2: How does the expression of emotions in posts impact user engagement on the OMHC platforms?

Emotion labels.Emotions play a vital role in mental health support seeking. Empathetic understanding is an attempt by the observers/experts to regulate the emotions that help-seekers express [22]. Figure 3 shows a frequency-based radial distribution of the most frequent emotion labels in BeCOPE. Our analysis of emotion labels shows that 10% of the isolated posts carry _neutral_ emotion labels. In contrast, only 3% of posts carry _neutral_ emotions for interactive and non-interactive posts combined. Furthermore, 12.3% of the non-isolated posts exhibit _curiosity_ as the secondary emotion, compared to 7% of isolated posts. Evidently, labels such as _sadness_, _curiosity_, _fear_, and _realization_ are more prevalent in non-isolated posts. On the other hand, emotion labels such as _caring_, _confusion_, _approval_, _joy_, and _neutral_ are more prevalent in isolated posts. Consequently, help-seekers exhibiting explicit emotional expression in posts, such as curiosity, fear, and sadness, receive more significant support in 86% of the cases. The remaining 14% of posts carry tepid emotional labels, such as caring, confusion, or neutral, to which peers often do not respond, leading to no interaction. On analyzing a sample of 100 posts, we subjectively categorize the extreme emotions expressed into various types, including fear, excitement, sadness, etc. In the category-wise emotion distribution (Figure 3), we observe that posts expressing such explicit extreme emotions have a higher chance of receiving a response, whereas posts with tepid emotional labels, such as caring, confusion, and neutral, tend to be ignored.

Metadata and content analysis.We conduct an auxiliary analysis of the BeCOPE dataset with a prime focus on metadata and textual properties. These experiments aim to assess the impact of minor actions, such as subjectivity, interaction count, time of posting, anonymity, etc., on help-seeking. We conclude that specific minor actions taken by help-seekers on OMHC platforms can increase the probability of receiving assistance. Our initial findings suggest that descriptive titles and body content attract more help than a compact usage of words. Likewise, the active participation of the help-seeker in the conversation (through comments) increases the chances of receiving help two-fold. Such approaches might assist help-seekers in gaining early access to assistance. Similarly, the time of seeking help also plays a vital role in peer assistance to the help-seeker. To this end, we further extend our analyses to understand the impact of the time of seeking help on OMHC platforms. The results show that, first, the night hours are the most common time to seek help and, second, the availability of helpers is also at a maximum during the night hours.
These findings are essential because they reveal that many users seek help on OMHCs at times when health providers are typically unavailable, particularly outside business hours. It is, therefore, critical that help-seekers post to the most suitable mental health subreddit in their time of need. We observe that a few mental health subreddit channels, like r/OpiatesRecovery, are entirely dedicated to providing frequent assistance to help-seekers, including during late hours. A detailed analysis with additional experiments is presented in _SI Appendix_ (Section 2).

### Topical Analysis

We also perform a topical analysis of peer-to-peer interactions, aiming to understand what specific topics and keywords drive the conversation in the three engagement categories (viz. interactive, non-interactive, and isolated). To this end, we apply Latent Dirichlet Allocation (LDA) [23] on the posts in each engagement category. The idea is to understand the topics on which peers respond and do not respond. Therefore, we segregate isolated and non-isolated posts to study the topics on which support is not received and received, respectively. We observe that isolated posts include discussions about school-related issues, abuse, rape, pressure to meet society's standards, salary, and the freedom to express opinions and feelings. On the other hand, the frequently discussed topics in the non-isolated category are anxiety, drugs and relapsing on drugs, common symptoms/illness and diagnosis, parenting behaviors, body image issues, and food and weight. Figure 3 shows a cluster of topics for posts from each category to obtain the most common topics in conversations. Evidently, the common topics of discussion in isolated posts elucidate that people shared experiences about many sensitive and stigmatized issues; subsequently, these remain unexplored, as indicated by the number of isolated posts. As a result, ordinary topics that resonate with peers and enjoy widespread prevalence tend to attract more interactions and are more likely to receive active engagement from peers on OMHCs.

Figure 3: **(a) Distribution of emotion labels in the BeCOPE dataset.** For brevity, we show plots for the top 10 emotion labels only. Each post is tagged with primary and secondary emotion labels. We further analyze the emotion label distribution across the three engagement categories. **(b) Topical analysis on the BeCOPE dataset.** We perform Latent Dirichlet Allocation (LDA) [23] to form 8 clusters of topics. To analyze the topics on which peers respond, we club interactive and non-interactive posts, where peers respond, and compare them with topics from isolated posts.

## Discussion

Understanding user behavior and online engagement is consistently challenging, particularly when it comes to comprehending the complexities of individuals in distress. OMHC platforms have emerged as crucial spaces for peer-based mental health discussions, enabling individuals to discuss their intrinsic thoughts and mental health issues openly. Despite the OMHCs' function, only a handful of these users interact, and even fewer receive the anticipated assistance. The most effective way of assessing peer engagement is to understand the factors on which peer interaction depends. Platforms like Reddit, containing dedicated mental health subreddits, offer rich repositories of discussions on relevant topics.
Our formulated hypothesis posits that the comprehension of peer behavioral attributes, such as intent, criticism, and readability, significantly contributes to a holistic understanding. In addition, the expressivity of emotions on OMHCs can further illuminate the causal underpinnings of these behavioral dynamics. However, this research area has remained under-resourced and insufficiently explored. Our newly introduced BeCOPE dataset holds significant implications beyond the insights drawn in this study. It can serve as a valuable resource across various research domains, with dimensions ranging from the empathetic to the behavioral conduct of peers on OMHCs, and can further support explanation and causality analyses of such implicit underlying factors. Our research examines the behavioral, emotional, and topical dynamics associated with varying levels of engagement among peers within OMHCs. We perceive engagement as an indication of a peer's preparedness to provide support. Our findings underscore that simple behavioral characteristics, such as explicitly seeking help and refraining from criticizing others, can increase peer engagement, as observed in \(\sim\)50% of the cases. This observation emphasizes that behaviors like ranting, criticising others, and generic chit-chatting do not elicit productive peer attention. At the same time, users express themselves in different styles, and peers' ability to understand others hinges on the clarity and readability of the posts. Earlier research shows that using short sentences is more engaging [24]. In contrast, we show that peers with intricate thoughts aren't constrained to concise posts; instead, they often require more extensive elaboration [25]. Our research demonstrates a twofold increase in support for individuals openly expressing their concerns on the OMHC platforms. The emotion dynamics expressed in a post, in turn, provide an additional gauge with which to evaluate the user's context. In alignment with our formulated hypothesis, the intricate interplay of emotions articulated within OMHCs demonstrates a direct correlation with the level of peer interaction. Analogous to socio-cultural implications, instances where individuals convey heightened emotional intensity consistently involve more engagement, while expressions characterized by emotional neutrality tend to diminish in terms of peer involvement. This phenomenon potentially stems from underlying factors such as relatability, the emergence of a palpable sense of urgency, and a compelling inclination to provide empathetic validation and support. These emotionally charged interactions establish a conspicuously relatable presence, effectively motivating peers to participate in discussions and actively disseminate adaptive coping techniques. Consequently, the assessment of peer engagement within OMHCs stands as pertinent societal research that aims to assess the intricate dynamics underpinning an effective peer support framework. Such OMHCs serve as forums where peers engage in a wide spectrum of discussions, yet only a few receive the required assistance. We are convinced that a crucial void in this landscape lies in fostering societal awareness regarding the nature of these challenges and their appropriate navigation. For instance, individuals often discuss sensitive and stigmatized matters, which, although prevalent in volume, remain relatively unexplored, as substantiated by the prevalence of isolated posts.
As a result, topics of a more general nature are observed to attract increased interaction. Furthermore, there are a few impactful takeaways from our auxiliary content (metadata) analysis; we present a detailed discussion in _SI Appendix_ (Section 3). These insights inherently underscore the significance of understanding the factors of the support ecosystem before its effective utilization for constructive engagement.

## Conclusion

OMHC platforms have become a popular way to seek help for people struggling with mental health issues [26, 27, 28, 29]. Our work analyzed the granular user posting behaviors that foster peer engagement with mental health content on OMHC platforms, specifically subreddits. The primary aim of this work was to better understand the behaviors of support-seekers and the factors that drive peer engagement with the original post. We found that the intent of a post (seeking support versus ranting about one's experience), the readability, and the criticism elements of a post were associated with peer engagement. Further, we also found that emotional expression, the original post's content, and contextual details, like the time that a post was made, impacted peer engagement. Our proposed dataset and empirical study call for more research to understand peer engagement on mental health platforms, including elements that lead to constructive versus detrimental engagement [30, 28, 31]. These data are critical in understanding how OMHCs can best support users experiencing distress, in addition to preventing the proliferation of harmful and inaccurate mental health advice and information [32, 33, 34]. Understanding user behavior and online activity is challenging, and understanding individuals in distress is harder still. The current study primarily focused on peer-to-peer engagement concerning mental health content. We understand that the findings can vary across other platforms like Twitter, Talklife, 7Cups, Facebook, Instagram, and even other subreddit channels. The future direction of this work will be to better understand user behavior on OMHCs, including how to monitor and moderate peer engagement so that it is not harmful to individuals in distress. Although our findings shed light on the connecting patterns of peer-to-peer online engagement, more research is needed to develop computational methods to gauge user satisfaction and behavior by exploiting the annotations we have provided in BeCOPE.

## Methods

### Data Collection

To study latent signals in peer-to-peer mental health interactions, we develop BeCOPE by curating posts from 21 subreddits. Reddit is organized into spaces called subreddits, where each subreddit is specific to a certain discussion topic. To analyze behaviors on peer-to-peer mental health platforms, we scraped, processed, and annotated subreddit data to develop the dataset. We explored numerous subreddits and handpicked the 21 most active mental health-related subreddits, as shown in Table 1. For each subreddit, we curated 500 posts and their comments from January 2020 to December 2020. Further, we performed a sanity check to ensure that conversations were acceptable (e.g., noise-free, written in English). We collected \(10,118\) posts and \(58,279\) comments along with their metadata, such as author information, score (upvotes), time of creation, and the number of comments.
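The paper does not name its collection tooling; purely as an illustration, a per-subreddit crawl gathering the metadata fields listed above could be sketched with the PRAW wrapper (credentials are placeholders, and note that a fixed 2020 window would in practice require an archival service rather than the live listing used here):

```python
import praw  # Reddit API wrapper; credentials below are placeholders

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="becope-style-crawler")

def fetch_posts(subreddit_name: str, limit: int = 500) -> list[dict]:
    """Collect posts plus the metadata fields mentioned in the text:
    author, score (upvotes), creation time, and number of comments."""
    records = []
    for post in reddit.subreddit(subreddit_name).new(limit=limit):
        post.comments.replace_more(limit=0)  # flatten the comment tree
        records.append({
            "title": post.title,
            "body": post.selftext,
            "author": str(post.author),
            "score": post.score,
            "created_utc": post.created_utc,
            "num_comments": post.num_comments,
            "comments": [(str(c.author), c.body) for c in post.comments.list()],
        })
    return records
```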
Step 1: Categorization of interactions by the level of peer engagement.Depending on the comments on a post, we classified the collected conversations into one of three engagement categories: (i) interactive, (ii) non-interactive, or (iii) isolated. If an original post involved back-and-forth comments from the original user and peers, the conversation was deemed "interactive" (see _SI Appendix_, Table S1 for an example). If an original post had zero comments, the conversation was deemed "isolated." Finally, if an original post received one or more comments from peers, but the original user did not acknowledge or reply to peers' comments, the conversation was deemed "non-interactive" (this decision rule is sketched in code below).

Step 2: Annotation of posts by behavioral and emotional labels.The first step in the annotation process was the curation of Reddit posts on mental health topics by categorizing them based on (i) intent, (ii) criticism, (iii) readability, and (iv) emotion labels. We manually annotated \(\sim\)5K posts and subsequently learned respective classifiers to obtain pseudo-labels for another \(\sim\)5K posts. Next, a sanity check of the annotated dataset was performed to ensure the reliability of the annotations. Finally, we used the resultant dataset of \(\sim\)10K posts for our analyses. Detailed statistics of the annotated BeCOPE dataset (including pseudo labels, discussed later) are presented in Table 1. We discuss the pseudo-modeling and annotation details in _SI Appendix_ (Section 1: Emotion).
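The Step-1 decision rule can be written compactly; the following is a minimal sketch (our own simplification, treating the thread as a flat list of (author, text) comments rather than walking the full reply tree):

```python
def engagement_category(op: str, comments: list[tuple[str, str]]) -> str:
    """Classify a thread as 'isolated', 'interactive', or 'non-interactive'.

    op       -- username of the original poster (OP)
    comments -- (author, text) pairs for every comment in the thread
    """
    if not comments:
        return "isolated"          # nobody responded at all
    op_replied = any(author == op for author, _ in comments)
    peer_commented = any(author != op for author, _ in comments)
    if peer_commented and op_replied:
        return "interactive"       # back-and-forth between OP and peers
    return "non-interactive"       # peers engaged, but the OP never replied

# e.g. engagement_category("user1", [("peer", "hang in there"), ("user1", "thanks")])
# -> 'interactive'
```

A full implementation would require genuine back-and-forth exchanges within the reply tree, but this flat approximation captures the three categories used throughout the analysis.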
| **Subreddit** | **Posts** | **Comments** | **Intent: Help-seeking** | **Intent: Rant** | **Intent: Survey** | **Intent: Chitchat** | **Criticism: Self** | **Criticism: Other w/ reasoning** | **Criticism: Other w/o reasoning** | **Criticism: None** | **Readability: Clear** | **Readability: Non-clear** |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| r/Anxiety | 469 | 1773 | 252 | 129 | 62 | 26 | 278 | 48 | 7 | 136 | 467 | 2 |
| r/ptsd | 494 | 1567 | 221 | 144 | 64 | 65 | 180 | 135 | 1 | 178 | 494 | 0 |
| r/SuicideWatch | 403 | 2545 | 90 | 246 | 17 | 50 | 231 | 34 | 10 | 128 | 378 | 25 |
| r/addiction | 487 | 3581 | 217 | 148 | 43 | 79 | 246 | 67 | 6 | 168 | 466 | 21 |
| r/ADHD | 423 | 3856 | 169 | 104 | 78 | 72 | 139 | 31 | 9 | 247 | 418 | 5 |
| r/alcoholicsanonymous | 498 | 6021 | 181 | 107 | 47 | 163 | 155 | 58 | 5 | 280 | 490 | 8 |
| r/Anger | 464 | 2620 | 233 | 184 | 31 | 16 | 245 | 140 | 16 | 63 | 462 | 2 |
| r/BPD | 519 | 2744 | 180 | 185 | 113 | 41 | 234 | 99 | 4 | 182 | 518 | 1 |
| r/depression | 547 | 1951 | 83 | 363 | 26 | 75 | 243 | 91 | 18 | 195 | 546 | 1 |
| r/domesticviolence | 425 | 2847 | 254 | 94 | 25 | 52 | 34 | 277 | 1 | 113 | 421 | 4 |
| r/eating\_disorders | 568 | 2021 | 256 | 209 | 51 | 52 | 346 | 43 | 1 | 178 | 567 | 1 |
| r/getting\_over\_it | 476 | 2551 | 230 | 163 | 35 | 48 | 258 | 72 | 2 | 144 | 473 | 3 |
| r/mentalillness | 484 | 1895 | 208 | 155 | 52 | 69 | 209 | 99 | 2 | 174 | 480 | 4 |
| r/OpiatesRecovery | 493 | 6112 | 215 | 116 | 62 | 100 | 185 | 28 | 3 | 277 | 493 | 0 |
| r/rapecounseling | 481 | 2390 | 288 | 142 | 26 | 25 | 125 | 269 | 1 | 86 | 481 | 0 |
| r/sad | 486 | 2258 | 44 | 287 | 27 | 128 | 115 | 71 | 8 | 292 | 485 | 1 |
| r/selfharm | 467 | 1928 | 136 | 232 | 52 | 47 | 243 | 39 | 0 | 185 | 465 | 2 |
| r/selfhelp | 419 | 2001 | 177 | 60 | 28 | 154 | 163 | 37 | 0 | 219 | 390 | 29 |
| r/socialanxiety | 461 | 2798 | 167 | 128 | 64 | 102 | 201 | 58 | 0 | 202 | 428 | 33 |
| r/OCD | 424 | 2528 | 159 | 117 | 63 | 85 | 209 | 29 | 3 | 183 | 424 | 0 |
| r/helpmecope | 473 | 2121 | 277 | 127 | 17 | 52 | 170 | 160 | 2 | 141 | 471 | 2 |
| **Total** | **9961** | **58108** | **4037** | **3440** | **983** | **1501** | **4209** | **1855** | **99** | **3771** | **9817** | **144** |
| IAA (\(\kappa\)) | – | – | 0.963 | – | – | – | 0.885 | – | – | – | 0.861 | – |

Table 1: **Statistics of the BeCOPE dataset**. We collected a total of \(\sim\)10K posts and \(\sim\)58K comments. We annotated all the posts using three core labels – (i) intent, (ii) criticism, and (iii) readability (Clear: Excellent, Good, and Average; Non-clear: Mediocre and Poor). IAA (\(\kappa\)) represents the inter-annotator agreement, reported once per label group, using Cohen's kappa score.

### Ethical Consideration

Considering the sensitivity of research in mental health, this paper does not include any personal, identifiable information of any OMHC user. Further, our pipeline involves sophisticated deep-learning models, and we took care that they do not acquire any bias toward any gender, caste, race, diagnosis, or peers with specific symptoms. We collected data solely based on the most relevant mental health subreddits and did not introduce any bias in the choice of particular subreddit channels. Finally, we conducted all experiments without compromising the anonymity of online users in BeCOPE.

Figure 4: **(a)** Confusion matrices representing the performance of the pseudo-labeling of criticism, intent, and readability labels. We exploit BERT, fine-tuned on \(\sim\)5K manually annotated posts, to predict criticism, intent, and readability on the remaining posts. **(b)** Distribution of behavioral signals (criticism and intent) along with readability in the complete BeCOPE dataset.

## Data Availability

The BeCOPE data used for this study's analysis has undergone a rigorous pipeline and been subjected to expert validation to ensure its quality and relevance. Researchers interested in utilizing this dataset for their research purposes can request access by contacting the corresponding authors. A sample of the BeCOPE dataset is available at [https://github.com/LCS2-IIITD/peer_study_omhc](https://github.com/LCS2-IIITD/peer_study_omhc). This sample dataset provides an overview of the type and structure of the data and can aid reviewers in understanding the scope and nature of the dataset used in this study.

## Author Contributions

A.S., M.S.A., and T.C. conceived and designed the study. A.S. and T.G. performed the experiments. A.S., T.G., M.S.A., and T.C. acquired, analyzed, and interpreted the results. All the authors drafted the paper. A.S., M.S.A., A.C., S.P.L., and T.C. critically revised the paper. M.S.A. and T.C. supervised the work. T.C. arranged the funding.

## Funding Information

The work is financially supported by ihub-Anubhuti-iiitd Foundation, set up under the NM-ICPS scheme of the DST.

## Competing Interests

The authors declare no competing interests.

## Additional Information

**Supplementary information.** The online version contains supplementary material.

**Correspondence and requests for materials** should be emailed to Aseem Srivastava ([email protected]) and Tanmoy Chakraborty ([email protected]).
2308.11722
Non-linear top-Higgs CP violation
Searches for additional sources of CP violation at the Large Hadron Collider are a central part of the Higgs physics programme beyond the Standard Model. Studies employing so-called signed observables that track CP violation through purpose-built asymmetries bolster efforts based on Higgs boson rate analyses under clear assumptions. A possibility, which is so far unexplored at the LHC, is a significant non-linear realisation of CP-violation, which is naturally described in non-linear Higgs Effective Field Theory (HEFT). We perform an analysis of the HL-LHC potential to constrain such interactions considering a large range of single and double Higgs production processes, including differential information where this is statistically and theoretically possible. A particular emphasis of our work is distinguishing expected correlations in the Standard Model Effective Field Theory from those attainable in HEFT.
Akanksha Bhardwaj, Christoph Englert, Dorival Gonçalves, Alberto Navarro
2023-08-22T18:24:02Z
http://arxiv.org/abs/2308.11722v1
# Non-linear top-Higgs CP violation ###### Abstract Searches for additional sources of CP violation at the Large Hadron Collider are a central part of the Higgs physics programme beyond the Standard Model. Studies employing so-called signed observables that track CP violation through purpose-built asymmetries bolster efforts based on Higgs boson rate analyses under clear assumptions. A possibility, which is so far unexplored at the LHC, is a significant non-linear realisation of CP-violation, which is naturally described in non-linear Higgs Effective Field Theory (HEFT). We perform an analysis of the HL-LHC potential to constrain such interactions considering a large range of single and double Higgs production processes, including differential information where this is statistically and theoretically possible. A particular emphasis of our work is distinguishing expected correlations in the Standard Model Effective Field Theory from those attainable in HEFT. ## I Introduction The interactions of the Higgs boson are generally considered as harbingers of new interactions beyond the Standard Model (BSM). The precision study of the Higgs boson at the Large Hadron Collider (LHC) has therefore opened a new territory in our understanding of the electroweak scale. While the precise nature of the latter is still unclear, it is reasonable to expect that whatever the mechanism responsible for electroweak symmetry breaking, it might have wider ramifications for the as yet unresolved questions of the SM. In BSM scenarios, such as multi-Higgs extensions, the Higgs boson interactions can introduce additional sources of CP violation which can address one of the Sakharov criteria that the SM falls short of [1; 2; 3]. From a theoretical standpoint, certain Higgs couplings are more susceptible to pronounced new physics effects. For instance, the extensively studied CP-odd Higgs-vector boson interactions can appear only through operators of dimension-six or higher [4; 5], being naturally suppressed by the new physics scale. In contrast, CP-odd Higgs-fermion couplings can already appear at tree level leading to naturally larger CP violation effects. The top quark Yukawa coupling, owing to its magnitude, plays a crucial role in this discussion and emerges as a particularly sensitive probe for physics beyond the SM. Model-agnostic approaches employing effective field theory highlight a range of effective interactions in a coarse-grained dimension-six approach that have been scrutinized in a range of experimental analyses at the LHC so far. In particular, additional (C)P violation in the top-Higgs sector \[\sim i\,\bar{t}\,\gamma^{5}t\,h \tag{1}\] can be constrained in gluon fusion [6; 7; 8] and top-Higgs production [9; 10; 11; 12].1 The relevance of CP-violating Yukawa interactions for low-energy precision dipole measurements have been revisited recently in Ref. [14]. Footnote 1: Approaches to disentangle these top-Yukawa interaction modifications from \(\sim G_{\mu\nu}G^{\mu\nu}h\) contact interactions have been discussed in [13]. One way of pinning down such interactions phenomenologically at hadron colliders is the construction of asymmetric observables, which then serve as strong tests of CP-violation without relying on CP-even rate information such as cross sections or transverse momentum spectra. However, for some processes, the expected rate even at 3/ab of the high-luminosity (HL) LHC phase is too limited to construct statistically sensible asymmetries. In addition, some processes, e.g. 
involving scalar final states, do not show interference-related asymmetries. Either case then warrants their inclusion under the hypothesis that no additional sources of new physics are present, relying on simple hypothesis testing. In this work, we ask the question of how sensitive the LHC can be to sources of _non-linear CP violation_. While Ref. [13] discusses approaches to disentangle gluonic from top-philic sources, the question of how CP violation is correlated across different Higgs multiplicities remains open. Such freedom becomes apparent within the context of Higgs Effective Field Theory (HEFT) when contrasted with correlations expected within the Standard Model Effective Field Theory (SMEFT) [5; 15]. This possibility also opens up a novel avenue to decouple dipole moment constraints from TeV scale investigations. As shown in [14], dipole constraints are highly constraining when considering exclusively the interaction of Eq. (1), but can be significantly relaxed when considering analogous CP violation for light flavour Higgs interactions. This comes at the price of a loss of phenomenological sensitivity, as such Higgs interactions are phenomenologically not always accessible at the LHC. CP violation measured in low energy dipole measurements dominantly sourced in di-Higgs interactions would be further loop and light-flavour Yukawa suppressed and will therefore be less constrained. This note is organized as follows: In Sec. II, we introduce the interactions studied in this work. Particular emphasis is given to the distinctive patterns of CP violation predicted in SMEFT as opposed to the more general HEFT parametrisation. The accurate discrimination of non-linear CP violation requires a robust statistical handle on single Higgs production processes, serving as a prerequisite for the subsequent utilization of di-Higgs production to effectively constrain non-linearity. The processes and the assumptions under which they are included in this work are detailed in Sec. III. Sec. IV is devoted to the discussion of our fit to non-linear CP violation. We summarize in Sec. V.

## II Heavy CP Violation

As alluded to above, within the SMEFT approach, we consider the operator \[\mathcal{O}_{t\Phi}=|\Phi|^{2}\bar{Q}_{L}\Phi^{c}t_{R}\,. \tag{2}\] \(\Phi\) denotes the Higgs doublet, \(\Phi^{c}=i\sigma^{2}\Phi^{*}\), and \(Q_{L},t_{R}\) are the left and right-chiral fermion doublet and singlet relevant for the top interactions, respectively. This operator leads in the broken phase to P-violating interactions for complex Wilson coefficients. Therefore, signs of CP-violation across different Higgs multiplicities are correlated as a consequence of the \(SU(2)\) doublet structure of the Higgs boson. For instance, the CP-violating tree-level three and four-point irreducible vertex functions obey \[\frac{\Gamma_{\bar{t}th}}{\Gamma_{\bar{t}th^{2}}}\bigg{|}_{\gamma^{5},\text{SMEFT}}=\frac{v}{3}\,, \tag{3}\] with \(v\simeq 246\) GeV as the vacuum expectation value of the Higgs field. In this context, although additional sensitivity from CP-sensitive observables in \(t\bar{t}hh\) production is welcome, CP-violation in the top-Higgs sector under the assumptions of SMEFT should manifest itself predominantly in single Higgs physics, which provides the most significant statistical pull in a global analysis. A phenomenologically identical parametrisation of Eq.
(2), which we will use in the following, is given by \[\mathcal{L}^{\text{SMEFT}}_{\alpha,1}=-\frac{m_{t}}{v}\,\kappa_{t}\,\bar{t}(\cos\alpha+i\gamma^{5}\sin\alpha)\,t\,h\,. \tag{4}\] Here, \(\alpha\) represents the CP-phase and \(\kappa_{t}\) is a real number that determines the strength of the interaction. In this parametrization, the SM is characterized by \(\kappa_{t}=1\) and \(\alpha=0\). Conversely, for a purely CP-odd interaction, \(\alpha\) would be equal to \(\pi/2\). This parametrization can be identified with Eq. (2) (after renormalisation of the SM Yukawa couplings and assuming a purely CP-even SM coupling of the top quark) via \[\frac{1}{\Lambda^{2}}\begin{pmatrix}\operatorname{Re}C_{t\Phi}\\ \operatorname{Im}C_{t\Phi}\end{pmatrix}=-\frac{\sqrt{2}\,m_{t}}{v^{3}}\begin{pmatrix}\kappa_{t}\cos\alpha-1\\ \kappa_{t}\sin\alpha\end{pmatrix}\,. \tag{5}\] This directly leads to \[\mathcal{L}^{\text{SMEFT}}_{\alpha,2}\supset-\frac{3m_{t}}{2v^{2}}\,\bar{t}(\{\kappa_{t}\cos\alpha-1\}+i\kappa_{t}\gamma^{5}\sin\alpha)\,t\,h^{2}\,, \tag{6}\] which also shows that the \(t\bar{t}hh\) interactions vanish for the SM point, \((\kappa_{t},\alpha)_{\text{SM}}=(1,0)\). Turning to HEFT, which highlights the Higgs boson as a custodial singlet [16; 17; 18; 19; 20; 21; 22], the top quark mass arises from the non-linear sigma model of \(SU(2)_{L}\times SU(2)_{R}\to SU(2)_{V}\) that can be parametrized as \[U(\pi^{a})=\exp{(i\pi^{a}\tau^{a}/v)}\,, \tag{7}\] with \(SU(2)\) generators \(\tau^{a}\) and Goldstone fields \(\pi^{a}\). This field transforms under general \(SU(2)_{L}\times SU(2)_{R}\) transformations as \(U\to L\,UR^{\dagger}\), so that the top quark mass arises from \[\mathcal{O}_{\bar{t}t}=-m_{t}\,\bar{Q}_{L}Ut_{R}\,. \tag{8}\] Owing to the singlet character of the Higgs boson in HEFT, this operator can be dressed with a "flare" function \[Y_{t}(h)=1+c^{(1)}\frac{h}{v}+c^{(2)}\frac{h^{2}}{2v^{2}}+\dots\,, \tag{9}\] suppressing higher monomials of the singlet Higgs field, which are phenomenologically not relevant. This leads to CP-violating effects analogous to \(\mathcal{L}_{\alpha}\) \[\mathcal{L}_{\text{HEFT}}\supset-\frac{m_{t}}{v}\,\kappa_{t}\,\bar{t}(\cos\alpha+i\gamma^{5}\sin\alpha)\,t\,h-\frac{m_{t}}{2v^{2}}\,\kappa_{tt}\,\bar{t}(\cos\beta+i\gamma^{5}\sin\beta)\,t\,h^{2}\,. \tag{10}\] However, it is important to note a significant exception: the Higgs multiplicities remain uncorrelated in this context. The expressions for \(c^{(1)}\) and \(c^{(2)}\) become \[c^{(1)}=\kappa_{t}\,e^{i\alpha}\,,\ c^{(2)}=\kappa_{tt}\,e^{i\beta}\,. \tag{11}\] The relative strength of CP-violation for the three and four-point interactions is now characterized by \[\frac{\Gamma_{\bar{t}th}}{\Gamma_{\bar{t}th^{2}}}\bigg{|}_{\gamma^{5},\text{HEFT}}=\frac{\kappa_{t}}{\kappa_{tt}}\,\frac{\sin\alpha}{\sin\beta}\,v\,, \tag{12}\] where the SMEFT trajectory can be recovered by the HEFT choices \[\begin{split}\kappa_{tt}^{2}&=9(1-2\kappa_{t}\cos\alpha+\kappa_{t}^{2})\,,\\ \tan\beta&=\frac{\kappa_{t}\sin\alpha}{\kappa_{t}\cos\alpha-1}\,.\end{split} \tag{13}\]
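The SMEFT trajectory of Eq. (13) can be illustrated numerically; the following is a small sketch (our own, in NumPy; the use of `arctan2` is our choice to resolve the quadrant of \(\beta\)) mapping a single-Higgs point \((\kappa_{t},\alpha)\) onto the \(t\bar{t}hh\) coupling \((\kappa_{tt},\beta)\) enforced by SMEFT:

```python
import numpy as np

def smeft_trajectory(kappa_t: float, alpha: float) -> tuple[float, float]:
    """Return (kappa_tt, beta) implied by Eq. (13) for a given (kappa_t, alpha)."""
    re_part = kappa_t * np.cos(alpha) - 1.0  # CP-even part of the tthh coupling
    im_part = kappa_t * np.sin(alpha)        # CP-odd part of the tthh coupling
    kappa_tt = 3.0 * np.hypot(re_part, im_part)  # = 3*sqrt(1 - 2 kt cos(a) + kt^2)
    beta = np.arctan2(im_part, re_part)          # tan(beta) as in Eq. (13)
    return kappa_tt, beta

# At the SM point (kappa_t, alpha) = (1, 0) the tthh interaction vanishes,
# so SMEFT admits no independent non-linear CP phase there.
assert smeft_trajectory(1.0, 0.0)[0] == 0.0
```

Any HEFT point away from this two-parameter surface then signals genuinely non-linear CP violation.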
CP measurements at ATLAS and CMS are typically carried out by constructing asymmetries or "signed" observables which isolate interference effects between new physics and SM contributions. Writing the amplitude of the scattering process \(\mathcal{M}=\mathcal{M}_{\rm SM}+\mathcal{M}_{\mathcal{O}}\), with \(\mathcal{M}_{\mathcal{O}}\) denoting the BSM part, the partonic cross sections scale as \[\frac{\mathrm{d}\sigma}{\mathrm{d}\mathrm{LIPS}}\sim|\mathcal{M}_{\rm SM}|^{2}+2\mathrm{Re}(\mathcal{M}_{\rm SM}\mathcal{M}_{\mathcal{O}}^{*})+|\mathcal{M}_{\mathcal{O}}|^{2}\,. \tag{14}\] Squared CP-odd contributions manifest in CP-even distributions, such as cross sections, transverse momentum distributions, etc. The interference effects between SM and new physics cancel in these CP-even distributions and are resolved through purpose-built observables. However, for processes with limited statistics, a binned distribution might not always be attainable, even during the high-luminosity phase of the Large Hadron Collider (HL-LHC). We detail the processes we include in our study in Sec. III.

## III Sensitive processes and details of the analysis

### Inclusive \(gg\to h\) production

Gluon fusion Higgs production has become one of the standard candles to study electroweak symmetry breaking at the LHC ever since the Higgs boson's discovery. The phenomenological precision programme is well underway and the experiments have laid out a detailed roadmap towards the HL-LHC phase. When rate information is considered, the cross section and decay widths are known to provide important handles on potential CP violation (see, e.g., Refs. [23; 24]). To reflect the sensitivity of this process to phases of Yukawa interactions as discussed above, we employ the ECFA extrapolation by CMS outlined in Ref. [25]. Specifically, we consider the \(h\to\gamma\gamma\) and \(h\to ZZ\) signal strength extrapolations, which forecast a sensitivity at 95% CL of \[\frac{\delta\mu}{\mu}(gg\to h\to\gamma\gamma)=3.3\%\,, \tag{15}\] \[\frac{\delta\mu}{\mu}(gg\to h\to ZZ)=4.6\%\,. \tag{16}\] We also include \(h\to\tau\tau\) based on an extrapolation of Ref. [26] which sets \[\frac{\delta\mu}{\mu}(gg\to h\to\tau\tau)=9.7\%\,. \tag{17}\] This aligns with the ECFA projection presented in Ref. [25]. To achieve this, we use MadGraph5_aMC@NLO to interpolate the cross section, using a model generated with FeynRules[27], NLOCT [28], and UFO[29] in the finite top mass limit. This interpolation accounts for various coupling choices and is then reweighted based on the SM result to reflect higher-order QCD corrections [30; 31; 32]. Throughout this work, we take into account the modifications of the Higgs branching ratios due to the modified top-Yukawa couplings.

### Gluon fusion \(h+2j\) production

The production of a single Higgs boson in association with two jets is a sensitive process due to the 'signed' angular separation between the tagging jets [33; 6; 8]. Ordering the jets in rapidity, \(\eta_{j1}>\eta_{j2}\), the azimuthal angular difference \[\Delta\phi_{jj}=\phi_{j1}-\phi_{j2} \tag{18}\] leads to a characteristic angular modulation, which can be exploited to set constraints on the involved CP-odd interactions. This renders \(h+2j\) a prime candidate for constraining the single-Higgs properties as compared to the non-linear deviations.2 Therefore, this process has been used relatively early in the LHC Higgs programme to set constraints on sources of CP violation.

Footnote 2: Gluon fusion of Higgs pairs in association with two jets has been studied in Ref. [34] and faces significant phenomenological challenges at the LHC. Therefore, we will not discuss this process further.
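As an illustration of how the signed observable of Eq. (18) is built in practice, the following minimal sketch computes \(\Delta\phi_{jj}\) from rapidity-ordered tagging jets; the dictionary-based jet records are an assumption for illustration, not a specific framework's API.

```python
import numpy as np

def delta_phi_jj(jet1, jet2):
    """Signed azimuthal separation of Eq. (18) between the two tagging
    jets, ordered in rapidity; jets are dicts with 'eta' and 'phi'.
    A minimal sketch -- detector-level subtleties are ignored."""
    # order the jets such that eta_j1 > eta_j2
    j1, j2 = (jet1, jet2) if jet1["eta"] > jet2["eta"] else (jet2, jet1)
    dphi = j1["phi"] - j2["phi"]
    # wrap into [-pi, pi) so the sign of the modulation is well defined
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi

print(delta_phi_jj({"eta": 2.1, "phi": 0.3}, {"eta": -1.7, "phi": 2.8}))
```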
For our analysis, we use the ATLAS results of Ref. [35] as a baseline for extrapolation. We employ the Vbfnlo[36; 37] Monte Carlo to include the finite top-mass effects that shape the phenomenology of the \(h+2j\) final state, including the phase of the top Yukawa interaction. For illustrative purposes, we present the SM and new physics \(\Delta\phi_{jj}\) distributions in Fig. 1. We extract efficiencies for a SM sample mapped onto the results of [35] and generalize these to the BSM parameter choices involving CP-odd contributions, following the procedure detailed in [13].

Figure 1: Distribution for the azimuthal angle difference between the two tagging jets \(\Delta\phi_{jj}\), as defined in Eq. (18), specifically for the \(h+2j\) sample.

### Top-associated Higgs production \(t\bar{t}h\)

The \(pp\to t\bar{t}h\) channel plays a crucial role in probing the Higgs-top CP structure at tree level, disentangling possible new physics effects [38; 39; 40; 50; 51; 52; 9; 10; 53; 37; 11; 54]. Several kinematic observables have been proposed in the literature to investigate the CP structure of the Higgs-top interaction in this channel. Among those, the Collins-Soper angle \(\theta^{*}\), which is the angle between the top quark and the beam direction in the \(t\bar{t}\) CM frame, features as one of the most sensitive observables to CP violation at the non-linear level [45; 51] (in the sense of Eq. (14)). Genuine CP-odd observables can also be defined exploiting the top-quark polarization that is carried over to its decay products. It is possible to form tensor products involving the top quark pair and their decay products, represented as \(\epsilon(p_{t},p_{\bar{t}},p_{i},p_{k})\equiv\epsilon_{\mu\nu\rho\sigma}p_{t}^{\mu}p_{\bar{t}}^{\nu}p_{i}^{\rho}p_{k}^{\sigma}\)[45; 55]. This tensor product can be simplified as \(\mathbf{p}_{t}\cdot(\mathbf{p}_{i}\times\mathbf{p}_{k})\) in the \(t\bar{t}\) CM frame and provides a basis for defining azimuthal angle differences that exhibit an odd behavior under CP transformations \[\Delta\phi_{ik}^{t\bar{t}}\!=\!\text{sgn}\left[\mathbf{p}_{t}\!\cdot\!(\mathbf{p}_{i}\!\times\!\mathbf{p}_{k})\right]\arccos\!\left(\frac{\mathbf{p}_{t}\!\times\!\mathbf{p}_{i}}{|\mathbf{p}_{t}\!\times\!\mathbf{p}_{i}|}\cdot\frac{\mathbf{p}_{t}\!\times\!\mathbf{p}_{k}}{|\mathbf{p}_{t}\!\times\!\mathbf{p}_{k}|}\right)\!. \tag{19}\] We present both the Collins-Soper angle \(\theta^{*}\) and the azimuthal angle distribution \(\Delta\phi_{\ell\ell}^{t\bar{t}}\) for dileptonic top pair final states in the top panel of Fig. 2. The \(t\bar{t}hh\) channel, which we will discuss further below, may provide another complementary avenue to probe the Higgs-top coupling at tree level [56]. Observables that mirror those defined for the \(t\bar{t}h\) process can also be established for this additional channel, as presented in the bottom panel of Fig. 2 (see also [57]). We extract the direct Higgs-top CP sensitivity at the HL-LHC from our previous analysis in Ref. [55]. In this study, we employ a synergy of machine learning techniques and streamlined kinematic reconstruction methods to enhance the new physics sensitivity, exploring the complex \(t\bar{t}h\) multi-particle phase space. The analysis encompasses a range of final states, including hadronic, semi-leptonic, and di-leptonic top pair decays, all in conjunction with the Higgs decay \(h\to\gamma\gamma\). It is noteworthy that the experimental projections from ATLAS and CMS indicate that the \(h\to\gamma\gamma\) final state will display the dominant sensitivities to the \(t\bar{t}h\) channel at the HL-LHC [58].
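The CP-odd angle of Eq. (19) can be computed directly from 3-momenta in the \(t\bar{t}\) CM frame, as in the minimal sketch below; the example momenta are arbitrary placeholders, and the boost into the CM frame is assumed to have been performed beforehand.

```python
import numpy as np

def delta_phi_ik(p_top, p_i, p_k):
    """CP-odd azimuthal angle of Eq. (19), built from 3-momenta in the
    t-tbar CM frame. Inputs are numpy arrays of shape (3,); a minimal
    illustrative sketch."""
    n_i = np.cross(p_top, p_i)
    n_k = np.cross(p_top, p_k)
    cosang = np.dot(n_i, n_k) / (np.linalg.norm(n_i) * np.linalg.norm(n_k))
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    # the triple product fixes the sign and makes the observable CP-odd
    sign = np.sign(np.dot(p_top, np.cross(p_i, p_k)))
    return sign * angle

p_t = np.array([0.0, 0.0, 150.0])      # top direction in the CM frame
p_lp = np.array([40.0, 10.0, 60.0])    # e.g. lepton from the top decay
p_lm = np.array([-35.0, 20.0, -55.0])  # e.g. lepton from the antitop decay
print(delta_phi_ik(p_t, p_lp, p_lm))
```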
### \(Z\) boson-associated Higgs production

Although the leading contribution to the Higgsstrahlung channel \(Zh\) arises at tree level with \(q\bar{q}\to Zh\), this channel displays relevant \(\mathcal{O}(\alpha_{s}^{2})\) corrections through the loop-induced gluon fusion \(gg\to Zh\)[59; 60], which are particularly important in the boosted regime, \(p_{T,h}\sim m_{t}\)[61; 62; 63; 64]. Setting limits in these exclusive phase-space regions is an experimental challenge and, to obtain a qualitative sensitivity estimate, we perform a more detailed signal vs. background investigation.

Figure 2: Collins-Soper angle \(\theta^{*}\) (left) and azimuthal angle distribution \(\Delta\phi_{\ell\ell}^{t\bar{t}}\) (right) for \(t\bar{t}h\) (top) and \(t\bar{t}hh\) (bottom) processes with dileptonic top pair final state. We consider the SMEFT framework for demonstration purposes.

We denote the \(q\bar{q}\) and \(gg\) subprocesses as \(Zh_{\rm DY}\) and \(Zh_{\rm GF}\), respectively. The \(Zh_{\rm GF}\) process exhibits sensitivity to the linear and quadratic terms of the top-Higgs Yukawa coupling. Owing to the large destructive interference for the top Yukawa terms, the \(Zh_{\rm GF}\) contribution can be sensitive to the magnitude and sign of a possible non-standard top-Higgs coupling (\(\kappa_{t},\alpha\)).3

Footnote 3: A comprehensive study of the angular moments for the \(Z\) boson in the \(Zh_{\rm GF}\) channel is presented in Appendix A. These probes work as additional analyzers for the Higgs-top CP violation effects.

We now investigate the sensitivity to new physics in the \(gg\to Z(\ell\ell)h(b\bar{b})\) channel. Our signal comprises two charged leptons, \(\ell=e\) or \(\mu\), reconstructing a boosted \(Z\) boson recoiling against two \(b\)-jets. The main background processes are \(Zb\bar{b}\), \(t\bar{t}\)+jets, and \(ZZ\). For our analysis, we generate the signal sample \(Zh_{\rm GF}\) using MadGraph5_aMC@NLO[65; 66], while the background samples are simulated with Sherpa+OpenLoops [67; 68; 69], following the study presented in Ref. [70]. The \(Zh_{\rm DY}\), \(Zb\bar{b}\), and \(ZZ\) background samples are merged at LO with up to one additional jet emission using the CKKW algorithm [71; 72]. We normalize their cross sections to the NLO rates obtained from Ref. [62]. Additionally, we generate the \(t\bar{t}\) background at NLO using the MC@NLO algorithm [73; 74], considering hadronization and underlying event effects in our simulation. To reconstruct the signal events, we require two same-flavor leptons with opposite-sign charges satisfying \(p_{T\ell}>30\) GeV and \(|\eta_{\ell}|<2.5\), within the invariant mass range \(75\) GeV \(<m_{\ell\ell}<105\) GeV. The \(Z\) boson is required to have a large boost, \(p_{T\ell\ell}>200\) GeV. We adopt the BDRS analysis for the \(h\to b\bar{b}\) tagging [75], which involves re-clustering the hadronic activity using the Cambridge-Aachen jet algorithm [76] with \(R=1.2\). We impose at least one boosted fat jet with \(p_{TJ}>200\) GeV and \(|\eta_{J}|<2.5\), Higgs-tagged using the BDRS algorithm, which demands three sub-jets with the two leading sub-jets being \(b\)-tagged. We assume a flat \(70\%\) \(b\)-tagging efficiency and a \(1\%\) mis-tag rate. To further improve the signal-to-background ratio, we impose a constraint on the filtered Higgs mass within the range \(|m_{h}^{\rm BDRS}-m_{h}|<10\) GeV, where \(m_{h}=125\) GeV. The resulting event rate is presented in Tab. 1.
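The selection just described can be summarized schematically as follows; this is a hedged sketch only, with an assumed event-record layout (the field names are ours), not the actual analysis code.

```python
# A schematic pass/fail filter for the boosted gg -> Z(ll)h(bb) selection
# described above. The event-record accessors (ev["leptons"], etc.) are
# illustrative assumptions, not a specific framework's API.
def passes_zh_selection(ev):
    # two same-flavour, opposite-sign leptons inside acceptance
    leps = [l for l in ev["leptons"] if l["pt"] > 30.0 and abs(l["eta"]) < 2.5]
    if len(leps) != 2 or leps[0]["flavour"] != leps[1]["flavour"]:
        return False
    if leps[0]["charge"] * leps[1]["charge"] >= 0:
        return False
    # on-shell, boosted Z candidate
    if not (75.0 < ev["m_ll"] < 105.0) or ev["pt_ll"] < 200.0:
        return False
    # one BDRS Higgs-tagged fat jet with a filtered-mass window
    fat = [j for j in ev["fatjets"]
           if j["pt"] > 200.0 and abs(j["eta"]) < 2.5 and j["bdrs_tag"]]
    return any(abs(j["m_filtered"] - 125.0) < 10.0 for j in fat)
```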
### Beyond linearity: \(t\bar{t}hh\) and inclusive \(hh\) production

We now turn to the discussion of processes that provide genuine sensitivity to non-linearity via the production of final states containing a pair of Higgs bosons. Such processes are statistically limited at the LHC, yet gluon fusion production \(gg\to hh\) is relatively well understood, both theoretically and experimentally. In particular, Higgs pair production has been subject to considerable experimental scrutiny already, and detailed experimental forecasts for the HL-LHC frontier have been made available, similar to the case of \(gg\to h\) production. To this end, we consider the \(b\bar{b}\gamma\gamma+b\bar{b}\tau\tau\) extrapolation of [77] \[\frac{\sigma(hh)}{\sigma(hh)_{\rm SM}}<2\,, \tag{20}\] at \(95\%\) confidence level (CL), which could be lowered to \(1.1\) if systematics become sufficiently well-controlled. Both \(b\bar{b}\tau\tau\) and \(b\bar{b}\gamma\gamma\) have comparable statistical sensitivity and we include them on an equal footing in our statistical analysis, again taking into account the effect of modified Higgs branching ratios as a function of \((\kappa_{t},\alpha)\). Similar to the \(gg\to h\) process, we interpolate \(gg\to hh\) production using MadGraph5_aMC@NLO in the finite top mass limit to reflect the constraint from Eq. (20) within our combined analysis in Sec. IV.

In comparison, the \(t\bar{t}hh\) process is rather more complex and currently only proof-of-principle analyses exist, e.g., Refs. [78; 56] for the HL-LHC. The former predicts around \(10\) signal events in the SM for a \(b\)-rich final state. Being statistically limited, shape analyses of signed observables, which can be constructed similarly to the \(t\bar{t}h\) process, will not yield relevant exclusion constraints. Selected relevant observables for this channel are illustrated in Fig. 2 (bottom panel) and Fig. 3 within the SMEFT and HEFT frameworks, respectively. Given the statistical limitation, we incorporate the \(95\%\) CL cross section exclusion limit for \(t\bar{t}hh\) \[\frac{\sigma(t\bar{t}hh)}{\sigma(t\bar{t}hh)_{\rm SM}}<1.4...6.8 \tag{21}\] based on the analyses of Refs. [56; 78]. This limit does not include the impact of background systematics, which can degrade this estimate. However, it is worth highlighting that, within the experimental context, the potential to improve this channel remains relatively unexplored. We note that the cross section is driven by the four-point interactions [79], similar to \(gg\to hh\), and has been a focus of studies, e.g., in the composite Higgs framework [80].

Figure 3: Collins-Soper angle \(\theta^{*}\) for the \(t\bar{t}hh\) process in the HEFT framework. We use \(\kappa_{t}=\kappa_{tt}=1\) and \(\alpha=0\) for illustration.

## IV A fit to non-linear CP violation in the top-Higgs sector

The asymmetries and total rates are used to set CL limits on the parameter space \((\kappa_{t},\alpha,\kappa_{tt},\beta)\), assuming the SM as the null hypothesis. To this end, we consider a \(\chi^{2}\) statistic defined as \[\chi^{2}=\sum_{i}\frac{(N_{i}-N_{i}^{\rm SM})^{2}}{\sigma_{i}^{2}}\,. \tag{22}\] Here, the index \(i\) runs over a binned distribution where this is statistically warranted, or \(i=1\) for constraints from cross sections.
\(N_{i}\) denotes the event count in a particular bin (or the entire signal event count for cross sections) for a given luminosity, which we set to \(\mathcal{L}=3\ {\rm ab}^{-1}\). We tune the uncertainties \(\sigma_{i}^{2}\) to reproduce the quoted single-channel sensitivities. Given these individual \(\chi^{2}\) contributions, we can then consider their combination, increasing the degrees of freedom depending on the hypothesis under investigation.
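A minimal numerical sketch of the statistic of Eq. (22) and its combination across channels reads as follows; the event counts and uncertainties shown are illustrative placeholders, not our tuned inputs.

```python
import numpy as np

def chi2(n_obs, n_sm, sigma):
    """Binned chi-square of Eq. (22); all arguments are arrays over bins
    (length-1 arrays for pure cross-section constraints). A sketch:
    the sigma_i are tuned offline to the quoted channel sensitivities."""
    n_obs, n_sm, sigma = map(np.asarray, (n_obs, n_sm, sigma))
    return np.sum((n_obs - n_sm) ** 2 / sigma ** 2)

# combining channels simply sums their individual chi-square contributions
channels = [
    (np.array([1030.0]), np.array([1000.0]), np.array([33.0])),  # a rate
    (np.array([52.0, 40.0, 65.0]),                               # a shape
     np.array([48.0, 45.0, 60.0]),
     np.array([7.0, 6.7, 7.7])),
]
total = sum(chi2(*ch) for ch in channels)
print(total)
```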
Before we turn to combinations and the comparison between SMEFT and HEFT, it is instructive to highlight the sensitivity of each of these channels and how multi-Higgs production serves as a means to distinguish non-linearity. We will focus on the HL-LHC data set in the following. In Fig. 4, we show the sensitivity of all channels before combination, focusing on the SMEFT parametrization that singles out the correlation of Eq. (13) across the different Higgs multiplicities. As expected, the most sensitive channels in SMEFT are those with the highest statistical abundance. Under the assumption of suppressed competing coupling modifications in SMEFT, this is given by the inclusive gluon fusion rate along the \(\kappa_{t}\) direction. Exploitable angular correlations in the \(h+2j\) mode augment the sensitivity along the direction of the CP angle.4 We also highlight the importance of the \(t\bar{t}hh\) channel in collapsing the available parameter space; a non-trivial combination is shown for the stringent \(t\bar{t}hh\) assumption. Directly probing the top Yukawa coupling through the \(t\bar{t}h\) channel also leads to relevant complementary constraints.

Figure 5: 95% CL limits on the \((\kappa_{t},\alpha)\) plane at the 13 TeV HL-LHC with 3 \({\rm ab}^{-1}\) of data for \(hh\) and \(t\bar{t}hh\) channels in SMEFT (top) and HEFT (bottom) frameworks with \((\kappa_{tt}=0,\beta=0)\).

Given the reduced sensitivity in multi-Higgs production, the \((\kappa_{t},\alpha)\) constraints carry over from the SMEFT parametrization to HEFT, modulo changes in the number of degrees of freedom and the small pull provided by the SMEFT correlation of Eq. (13). The importance of the latter correlation becomes clear when contrasting the \(gg\to hh\) and \(t\bar{t}hh\) combination in SMEFT against HEFT for \(\kappa_{tt}=\beta=0\), as depicted in Fig. 5. This comparison highlights the relevance of quartic \(t\bar{t}hh\) contact interactions for these final states. As can be seen, these particular BSM contact interactions drive the cross section for the di-Higgs production modes. Assuming a SM value in HEFT for the single Higgs modes, \((\kappa_{t},\alpha)=(1,0)\), the expected constraints from purely non-linear CP violation are given in Fig. 6. As can be seen, the multi-Higgs production modes can be used to set constraints mostly on the magnitude of the contact interaction, whilst the expected sensitivity is not large enough to constrain its phase. This blind direction could potentially be explored through the multi-particle final state kinematics as illustrated in Fig. 3. However, achieving this may necessitate a higher event rate and might realistically only become feasible at upcoming higher-energy colliders, such as the FCC-hh [79].

SMEFTy extensions close to the decoupling limit select a subspace of HEFT. Given the correlation predicted by SMEFT-like extensions of the SM, we can therefore employ these production modes to highlight the expected sensitivity for \(\kappa_{tt},\beta\) when comparing SMEFT and HEFT in Fig. 6. The SMEFT contour highlights the correlation of a combined fit of the most sensitive single Higgs channels in SMEFT, projected onto the \((\beta,\kappa_{tt})\) plane given the correlations of Eq. (13). For illustration purposes, we limit the HEFT parameter space to SM couplings in the single Higgs sector (the corresponding couplings will be relatively well measured at 3/ab and the di-Higgs cross sections are predominantly sensitive to the multi-Higgs couplings). Clearly, a SM-like outcome of the single Higgs measurements renders the available parameter space in the di-Higgs couplings relatively limited in SMEFT. Even if the optimistic \(t\bar{t}hh\) constraint is relaxed to looser constraints, \(gg\to hh\) production is still sensitive to significant quartic \(t\bar{t}hh\) vertices and associated CP violation in HEFT. When reducing the size of \(\kappa_{tt}\), the sensitivity to \(\beta\) is naturally suppressed. Higher sensitivity in the relevant channels is therefore key to further maximise the LHC potential: the \(t\bar{t}hh\) contour in Fig. 6 only slightly bends for larger angles \(\beta\), and only a perhaps unrealistic improvement over the quoted constraints would extend the \(\beta\) sensitivity. The role of \(t\bar{t}hh\) production remains critical, even when only the \(\kappa_{tt}\) effects are considered. Feasibility studies beyond first exploratory analyses, e.g. [78], should continue to maximise the value of LHC data. Of course, the statistical limitations present for the multi-Higgs modes at the LHC are naturally relaxed at a future hadron machine such as the FCC-hh, envisioned to operate at 100 TeV with a target luminosity of 30 \(\mathrm{ab}^{-1}\). A more fine-grained approach exploiting angular correlations as demonstrated in Fig. 2 (bottom panel) and Fig. 3 will become possible, which will lead to a qualitatively new \(t\bar{t}hh\) exclusion.

## V Conclusions

In this work, we have examined the potential of the LHC to constrain CP phases of the top-Yukawa interactions, combining the sensitivity of a range of single- and double-Higgs production processes. Single-Higgs processes encompass all the relevant correlations in dimension-six SMEFT, and multi-Higgs production does not lead to a significant sensitivity gain. However, this paradigm shifts when considering non-linear sources of CP violation. Given the limited rates of multi-Higgs production at the LHC, the resulting constraints are naturally less stringent than those anticipated from single Higgs physics, especially when incorporating rate information under appropriate assumptions. Nonetheless, the LHC shows sensitivity, in particular when discriminating between SMEFTy and HEFTy CP violation in the top-Higgs sector. Our work re-advertises the relevance of the \(t\bar{t}hh\) and inclusive \(hh\) sensitivity studies. For scenarios that are more closely related to the HEFT parametrisation, the multi-Higgs rates also play central roles in honing sensitivity to non-linear CP violation. Although these processes suffer from limitations at the LHC and their resulting constraints are relatively weak when compared to SMEFT correlations, they provide unique avenues for probing such interactions, in particular because low-energy precision experiments (e.g. dipole measurements) will have reduced sensitivity compared to SMEFT.

Figure 6: 95% CL limits on \((\kappa_{tt},\beta)\) at the 13 TeV HL-LHC with 3 \(\mathrm{ab}^{-1}\) of data for \(hh\) and \(t\bar{t}hh\) channels in the HEFT framework with \((\kappa_{t}=1,\alpha=0)\). The SMEFT region selected from a fit to single Higgs data is highlighted for comparison.
###### Acknowledgements.

A.B. and C.E. are supported by the STFC under grant ST/T000945/1. C.E. is supported by the Leverhulme Trust under grant RPG-2021-031 and the IPPP Associateship Scheme. D.G. and A.N. thank the U.S. Department of Energy for financial support under grant number DE-SC 0016013.

## Appendix A CP-violation effects in the \(Z\) boson angular moments in the \(gg\to Zh\) process

The angular moments of the \(Z\) boson can be used as analyzers for the Higgs-top CP violation effects in the loop-induced \(gg\to Zh\) process. In general, the differential cross-section for the described process can be represented as \[\begin{split}\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta\,\mathrm{d}\phi}=\frac{3}{16\pi}\Big{[}&1+\cos^{2}\theta+\frac{A_{0}}{2}(1-3\cos^{2}\theta)+A_{1}\sin 2\theta\cos\phi\\ &+\frac{A_{2}}{2}\sin^{2}\theta\cos 2\phi+A_{3}\sin\theta\cos\phi+A_{4}\cos\theta\\ &+A_{5}\sin^{2}\theta\sin 2\phi+A_{6}\sin 2\theta\sin\phi+A_{7}\sin\theta\sin\phi\Big{]}\,,\end{split} \tag{A.2}\] where \(\theta\) and \(\phi\) denote the polar and azimuthal angles of the \(\ell^{-}\) lepton in the \(Z\) boson rest frame. The eight coefficients \(A_{i}\), \(i=0,\dots,7\), correspond to the degrees of freedom of the polarization density matrix of a spin-1 particle. Remarkably, the three coefficients \(A_{5,6,7}\) are proportional to the relative complex phases of the scattering amplitudes [70]. Hence, when strong-phase contributions from loops are depleted, these coefficients can be sensitive to genuine CP-violating effects. To extract the angular coefficients \(A_{i}\) from our Monte Carlo simulation, we recognize that Eq. (A.2) represents a spherical harmonic decomposition of the differential cross-section, utilizing real spherical harmonics \(Y_{lm}(\theta,\phi)\) of order \(l\leq 2\)[82]. Consequently, we can access the angular coefficients by exploiting the orthogonality relations of the spherical harmonics. The angular coefficients are projected out using the following relations \[\begin{split}A_{0}&=4-\left\langle 10\cos^{2}\theta\right\rangle\,,\qquad A_{1}=\left\langle 5\sin 2\theta\cos\phi\right\rangle\,,\\ A_{2}&=\left\langle 10\sin^{2}\theta\cos 2\phi\right\rangle\,,\qquad A_{3}=\left\langle 4\sin\theta\cos\phi\right\rangle\,,\\ A_{4}&=\left\langle 4\cos\theta\right\rangle\,,\qquad A_{5}=\left\langle 5\sin^{2}\theta\sin 2\phi\right\rangle\,,\\ A_{6}&=\left\langle 5\sin 2\theta\sin\phi\right\rangle\,,\qquad A_{7}=\left\langle 4\sin\theta\sin\phi\right\rangle\,,\end{split} \tag{A.3}\] and the weighted normalization is defined as \[\left\langle f(\theta,\phi)\right\rangle\equiv\int_{-1}^{1}\mathrm{d}\cos\theta\int_{0}^{2\pi}\mathrm{d}\phi\,\frac{f(\theta,\phi)}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta\,\mathrm{d}\phi}\,. \tag{A.4}\] In Table 2, we present the angular coefficients \(A_{i}\) for the \(gg\to\ell^{+}\ell^{-}h\) process, considering the SM and the new physics scenarios \(\alpha=\pi/4\) and \(-\pi/4\). Two comments are in order. First, we observe sub-leading strong-phase contributions from the one-loop calculation to the coefficients \(A_{5,6,7}\), as evident from the SM scenario. Second, CP-violation effects are also depleted in the same coefficients, as seen for the \(\alpha=\pi/4\) and \(-\pi/4\) scenarios. Notably, the only statistically significant imprint of the CP-phase \(\alpha\) on the spin-density parametrization arises in the coefficient \(A_{0}\).
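For concreteness, the moment projections of Eqs. (A.3) and (A.4) translate into a short Monte Carlo estimator, sketched below under the assumption of per-event \((\cos\theta,\phi)\) arrays and optional event weights.

```python
import numpy as np

def angular_coefficients(cos_theta, phi, weights=None):
    """Extract the Z-boson angular coefficients A_0..A_7 via the moment
    projections of Eq. (A.3) from (weighted) Monte Carlo events.
    A minimal sketch; cos_theta and phi are per-event numpy arrays."""
    w = np.ones_like(cos_theta) if weights is None else weights
    avg = lambda f: np.average(f, weights=w)   # <...> of Eq. (A.4)
    s = np.sqrt(1.0 - cos_theta**2)            # sin(theta)
    s2t = 2.0 * s * cos_theta                  # sin(2 theta)
    return {
        "A0": 4.0 - avg(10.0 * cos_theta**2),
        "A1": avg(5.0 * s2t * np.cos(phi)),
        "A2": avg(10.0 * s**2 * np.cos(2 * phi)),
        "A3": avg(4.0 * s * np.cos(phi)),
        "A4": avg(4.0 * cos_theta),
        "A5": avg(5.0 * s**2 * np.sin(2 * phi)),
        "A6": avg(5.0 * s2t * np.sin(phi)),
        "A7": avg(4.0 * s * np.sin(phi)),
    }
```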
2306.08907
MCPI: Integrating Multimodal Data for Enhanced Prediction of Compound Protein Interactions
The identification of compound-protein interactions (CPI) plays a critical role in drug screening, drug repurposing, and combination therapy studies. The effectiveness of CPI prediction relies heavily on the features extracted from both compounds and target proteins. While various prediction methods employ different feature combinations, both molecular-based and network-based models encounter the common obstacle of incomplete feature representations. Thus, a promising solution to this issue is to fully integrate all relevant CPI features. This study proposed a novel model named MCPI, which is designed to improve the prediction performance of CPI by integrating multiple sources of information, including the PPI network, CCI network, and structural features of CPI. The results of the study indicate that the MCPI model outperformed other existing methods for predicting CPI on public datasets. Furthermore, the study has practical implications for drug development, as the model was applied to search for potential inhibitors among FDA-approved drugs in response to the SARS-CoV-2 pandemic. The prediction results were then validated through the literature, suggesting that the MCPI model could be a useful tool for identifying potential drug candidates. Overall, this study has the potential to advance our understanding of CPI and guide drug development efforts.
Li Zhang, Wenhao Li, Haotian Guan, Zhiquan He, Mingjun Cheng, Han Wang
2023-06-15T07:20:26Z
http://arxiv.org/abs/2306.08907v1
# MCPI: Integrating Multimodal Data for Enhanced Prediction of Compound-Protein Interactions

###### Abstract

The identification of compound-protein interactions (CPI) plays a critical role in drug screening, drug repurposing, and combination therapy studies. The effectiveness of CPI prediction relies heavily on the features extracted from both compounds and target proteins. While various prediction methods employ different feature combinations, both molecular-based and network-based models encounter the common obstacle of incomplete feature representations. Thus, a promising solution to this issue is to fully integrate all relevant CPI features. This study proposed a novel model named MCPI, which is designed to improve the prediction performance of CPI by integrating multiple sources of information, including the PPI network, CCI network, and structural features of CPI. The results of the study indicate that the MCPI model outperformed other existing methods for predicting CPI on public datasets. Furthermore, the study has practical implications for drug development, as the model was applied to search for potential inhibitors among FDA-approved drugs in response to the SARS-CoV-2 pandemic. The prediction results were then validated through the literature, suggesting that the MCPI model could be a useful tool for identifying potential drug candidates. Overall, this study has the potential to advance our understanding of CPI and guide drug development efforts.

Compound-protein interaction, Convolutional neural networks, Word embedding, Network integration

## I Introduction

Identifying Compound-Protein Interactions (CPIs) on a large scale is a critical step in drug discovery and development. By comprehensively understanding how a chemical compound interacts with various proteins in a living body, researchers can identify potential targets for drug development and better understand the mechanisms of action of existing drugs[1]. The identification of CPIs can also lead to the development of combination therapies[2, 3], where two or more drugs are used together to target multiple proteins or pathways involved in a disease. This approach can be especially useful in treating complex diseases such as cancer[4] or Alzheimer's disease[5]. In addition, the identification of CPIs can facilitate drug repositioning[6, 7], which involves repurposing existing drugs for new indications. By understanding the full range of proteins targeted by a drug, researchers can identify new therapeutic uses for the drug beyond its originally intended purpose. Finally, the identification of CPIs can also be useful in the modernization of traditional medicine, such as traditional Chinese medicine[8]. By identifying the specific proteins targeted by traditional remedies, researchers can better understand their mechanisms of action and potentially develop more effective and targeted treatments based on these traditional remedies. Overall, the identification of CPIs is a crucial step in the drug discovery process and has the potential to improve the efficacy and safety of drug treatments for a wide range of diseases.

CPI prediction with traditional biology-based methods is challenging: although molecular docking[9] and molecular dynamics simulations[10] have been used in drug research for decades, identifying CPIs across large-scale chemical spaces with current experimental methods remains difficult, especially for proteins with unknown structures[11].
Therefore, computational methods[12] have been increasingly applied to predict CPI. Several computational methods have been proposed for predicting CPI, including tensor products[13] between chemical substructures and protein families, supervised learning methods such as bipartite graphs[14], and feature selection techniques using support vector machines[15]. Traditional machine learning methods[16] have performed well in CPI prediction, but they require large-scale manual labeling and feature computation before modeling, which can be limiting when dealing with vast amounts of CPI data. With the advancement of deep learning, various end-to-end frameworks have been introduced to overcome the inefficiencies of traditional machine learning methods[17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28].

Deep learning methods for predicting compound-protein interactions can be broadly categorized into two groups: those based on molecular structure data and those based on network data. The former utilizes protein and compound data represented as amino acid sequences and chemical structure formulas. Some of these methods, such as DeepDTA[29], WideDTA[30], and Conv-DTI[31], use convolutional neural networks to extract low-dimensional features of chemical compounds and proteins. Others, such as Gao et al.[32] and GraphDTA[33], use molecular graphs, built with the RDKit tool[34], to represent chemical compounds and proteins. CPI-GNN[35], on the other hand, uses the r-radius subgraph algorithm[36] to obtain graph representations. In addition, Transformer CPI[37], a novel Transformer neural network, was proposed, in which chemical compounds and proteins are treated as two sequences.

Additionally, network-based methods have been used for CPI prediction. Interaction networks are often used to represent interactions between molecules. Based on this idea, Zitnik et al.[38] proposed a deep learning method based on multimodal graphs to predict multi-drug side effects. Deep learning methods based on heterogeneous graphs, such as DTINet[39], deepDTnet[40] and NeoDTI[41], have been proposed to predict interactions between molecules. Moreover, some methods, such as that of Yu et al.[42], combine network and molecular information to predict drug targets.

This study proposed a new method called MCPI for predicting compound-protein interactions (CPI) by integrating relevant network and structure features. The network features were obtained from Protein-Protein Interaction (PPI) and Compound-Compound Interaction (CCI) networks separately to capture the systemic context of those interactions. The molecular structures were represented using the distance matrix and Morgan fingerprint for the compound molecule and a pre-trained Word2vec model for the protein sequence. These structural features were learned with a Gated CNN for proteins and a ResNet for compounds. Finally, a linear classifier was used to identify CPIs from the embedded network and structure features. The experimental results showed that MCPI outperforms other methods on the Human and _C.elegans_ benchmark datasets. The study also applied MCPI to COVID-19 data to search for potential therapeutic leads in response to the SARS-CoV-2 pandemic.
## II Materials and Methods

### _Datasets_

In this study, we integrated both molecular feature data and network feature data to enhance the prediction performance of compound-protein interactions (CPI). The PPI and CCI networks were constructed from protein and compound interaction data, respectively. The protein interaction information was obtained from the STRING database[47], covering 24.58 million proteins from 14,094 species. When downloading protein interaction data from STRING, we filtered the data according to the reliability score of the experimental evidence, setting the threshold to 150. Compound interaction data were obtained from the STITCH database[48]. For each compound, STITCH provides a reliability-based score for all the interaction evidence. When downloading compound interaction data from STITCH, we likewise filtered the data based on this reliability score, setting the threshold to 150 to remove noisy data.

This experiment used two benchmark datasets, Human and _C.elegans_, to assess model performance[49]. The positive samples in the datasets were retrieved from DrugBank[50] and Matador[51]. The negative sample candidates were collected using a systematic screening framework and then filtered by feature scattering to obtain negative samples with high confidence[49]. The Human dataset contains 5502 samples and the _C.elegans_ dataset 6673; each sample consists of a protein sequence and a compound SMILES string. Single-atom molecules were removed during data preprocessing because they cannot generate a distance matrix; they represent a tiny proportion of the samples and do not affect the overall data distribution. In our experiments, the model was trained on the training set and then evaluated on the test set. As in Tsubaki et al.[35], the training, validation, and test sets were randomly split with a ratio of 8:1:1.

### _Model Architecture_

The MCPI model is a novel and integrated approach for predicting compound-protein interactions (CPI). By leveraging multiple sources of information, including the PPI network, CCI network, and structural features of CPI, the MCPI model effectively addresses the challenge of incomplete feature representations in existing prediction methods and offers enhanced prediction performance. The model consists of four main components: interaction network embedding, protein sequence and chemical compound coding, feature learning, and the linear classifier. The MCPI model uses protein sequences and compound SMILES as inputs to identify compound-protein interactions. In the network embedding component, the PPI and CCI networks are processed using Node2vec[52] to generate network feature vectors for each protein and compound. In the molecular structure coding component, structural representations are derived from the protein sequence and the chemical compound. For compounds, the distance matrix and the Morgan fingerprint, computed with the RDKit tool[34], are used to represent the compound molecule. The distance matrices are fed into a residual network for feature learning, after which the feature vectors obtained from the residual network, the molecular fingerprints, and the network feature vectors of the CCI network are concatenated into a complete vector as the compound representation. For proteins, UniRef50 is used as a corpus and a pre-trained Word2vec model[53] is employed to obtain the protein sequence representation.
The obtained feature vectors are then fed into a gated convolutional neural network[54] to learn high-level protein features. Finally, the feature vectors of proteins and compounds are separately fed into a fully connected layer, and a linear classifier is used to identify CPIs using the embedded network and structure features. Overall, the model architecture combines several state-of-the-art techniques for network representation learning, molecular structure coding, and deep learning. Fig. 1 illustrates the model architecture.

### _Features Extraction_

#### Iii-C1 Protein Sequence Coding

Word embedding techniques such as Word2vec, originally developed for natural language processing[55], have also been used to represent biological sequences, including DNA, RNA, and proteins. In MCPI, the Skip-Gram model in Word2vec[53] was used to obtain the protein sequence representation. Word2vec is an unsupervised machine learning technique that can learn high-quality representations of words by considering the context in which they appear. In the case of protein sequences, the Skip-Gram model in Word2vec has commonly been used to learn a distributed representation of the amino acid sequences. In Skip-Gram, the model learns to predict the context (neighboring amino acids) given a target amino acid; this allows the model to capture the syntactic and semantic relationships between amino acids. In our experiment, the large protein database UniRef50 was used as a corpus for pre-training. Each amino acid sequence was treated as a sentence of variable length. The original sequences were then partitioned into overlapping 3-grams and trained using their contexts to obtain an embedding vector for each 3-gram. The dimensionality was set to 100. The feature vectors of the protein were delivered to the convolutional neural network to learn the high-level representation.

#### Iii-C2 Chemical Compound Coding

Among previous studies, the molecular representations used in most approaches fall into three main categories: linear symbolic, molecular descriptor, and graphical symbol representations. Different representations have a significant impact on prediction performance. For example, a recent benchmark study showed that traditional descriptor-based machine learning models outperformed graph-based neural networks on 11 datasets relevant to drug discovery[46]. It's important to note that the best choice of molecular representation depends on the specific task at hand and the available data. To represent the structural information of compound molecules, MCPI introduced two representations: the distance matrix and the Morgan fingerprint. The distance matrix represents the pairwise distances between all atoms in a molecule. Specifically, for a molecule with \(N\) atoms, the distance matrix is an \(N\times N\) matrix, where element \((i,j)\) represents the Euclidean distance between atoms \(i\) and \(j\). Like Qian et al.[56], we used the RDKit[34] software to obtain distance matrices. To learn high-level representations from the distance matrices, MCPI used a residual network, a deep neural network architecture that allows the training of very deep networks without suffering from vanishing gradients. The feature vectors output from the residual network were then concatenated with the Morgan fingerprint to form the final compound representation. This representation was then fed into a neural network to predict CPI.
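A minimal sketch of the two molecular encodings described above, assuming gensim for the Skip-Gram embedding and RDKit for the compound features; the toy corpus and the conformer-embedding step used to obtain 3D coordinates are our illustrative assumptions.

```python
import numpy as np
from gensim.models import Word2Vec
from rdkit import Chem
from rdkit.Chem import AllChem

def protein_3grams(seq):
    """Split an amino acid sequence into overlapping 3-grams ('words')."""
    return [seq[i:i + 3] for i in range(len(seq) - 2)]

# Skip-Gram (sg=1) embedding of 3-grams, 100 dimensions as in the text;
# 'corpus' would be UniRef50 sequences in the actual pre-training.
corpus = [protein_3grams("MTEYKLVVVGAGGVGKSALTIQLIQNHFVDE")]
w2v = Word2Vec(corpus, vector_size=100, sg=1, window=5, min_count=1)
protein_matrix = np.stack([w2v.wv[g] for g in corpus[0]])   # (L-2, 100)

def compound_features(smiles, n_bits=50):
    """Distance matrix (pairwise 3D atom distances) and Morgan fingerprint
    for one compound; embedding a conformer for 3D coordinates is an
    assumption on how the Euclidean distances are obtained."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=0)        # generate 3D coordinates
    dist = AllChem.Get3DDistanceMatrix(mol)         # (N, N) Euclidean distances
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return dist, np.array(fp)

dist, fp = compound_features("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
```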
#### Iii-C3 Compound/Protein Network Embedding

Network embedding is a technique used to represent nodes in a network as low-dimensional vectors while preserving the network structure and topology. In this case, pre-trained Node2vec models were applied to the PPI and CCI networks generated from the STRING and STITCH databases. The Node2vec algorithm is a popular network embedding method that learns node representations by performing random walks on the network and optimizing a skip-gram model. This results in vectors that capture the local and global network properties of each node. The models were trained to generate low-dimensional vector representations of proteins and compounds that capture their interactions and molecular activities within the network.

Fig. 1: MCPI model architecture. The PPI and CCI networks were processed using Node2vec to generate network embeddings. The MCPI model combined the atom distance matrix with the molecular fingerprint to represent the compound molecule, and the pre-trained Word2vec model was applied for protein sequence representation. These features were fed into a Gated CNN and a ResNet to learn high-level molecular representations that capture the complex interactions between the compound and protein. Finally, a linear classifier is employed to identify the CPI.

The implementation of the Node2vec algorithm in this paper used the node2vec (version 0.4.3) Python module, with the parameters set as follows: embedding dimension 128; number of nodes visited in one random walk WALK_LENGTH=80; number of random walks per node NUM_WALK=10; return probability (of revisiting a node) P=1; in-out search parameter Q=1; and graph weights reflected via WEIGHT_KEY=weight. Finally, one 128-dimensional feature vector was generated for each node in the network. The network feature vectors are concatenated with the corresponding protein/compound molecular feature vectors to obtain the final feature representation.
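A minimal sketch of this embedding step with the node2vec Python package and the parameter values quoted above; the toy graph stands in for the STRING/STITCH-derived PPI or CCI networks.

```python
import networkx as nx
from node2vec import Node2Vec

# a toy weighted interaction graph in place of the real PPI/CCI network
graph = nx.Graph()
graph.add_weighted_edges_from([("P1", "P2", 0.9), ("P2", "P3", 0.4),
                               ("P1", "P3", 0.7)])

n2v = Node2Vec(graph, dimensions=128, walk_length=80, num_walks=10,
               p=1, q=1, weight_key="weight")
model = n2v.fit(window=10, min_count=1)   # skip-gram over the random walks

embedding_p1 = model.wv["P1"]             # 128-dim vector for node P1
```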
### _Feature Learning Model_

#### Iii-D1 Protein Sequence Feature Learning

After the Word2vec pre-training process, each amino acid 3-gram in the sequence is represented by a word vector, and the entire sequence can be represented as a matrix of these vectors. This matrix can then be used as input to a convolutional neural network (CNN) to extract high-level features of the protein sequence. Previous mainstream approaches to language modeling were based on RNNs[57]; Dauphin et al.[54] proposed a novel gating mechanism for CNN-based language models, which is useful for tasks that involve processing sequential data, as CNNs can perform parallel computation and speed up training. Additionally, the hierarchical structure of CNNs can simplify learning and mitigate the vanishing gradient problem that can arise in RNNs. In the context of protein sequence analysis, CNNs can help to identify patterns and features that are important for predicting protein structure or function. Therefore, we used a gated convolutional network with Conv1D layers and gated linear units[54] to perform feature learning on proteins. Unlike regular convolution, the gated convolution is divided into two parts. One part is the convolutional activation value which, unlike in normal convolution, is not passed through a Tanh activation but kept linear. The other part is the gate value, obtained from a second convolution followed by a sigmoid. Next, the gating unit takes the element-wise product of the linear convolution output and the sigmoid-activated gate to obtain the convolution result. Finally, the convolution and gating unit are combined into a residual block. The overall structure is shown in Fig. 2, and the hidden layers \(h_{0},\dots,h_{N}\) are calculated according to Eq. (1): \[h_{n}(X)=(X*W_{1}+b)\otimes\sigma(X*W_{2}+c) \tag{1}\] where \(X\in\mathbb{R}^{l\times m_{1}}\) is the input of the \(h_{n}\) layer, \(W_{1},W_{2}\in\mathbb{R}^{k\times m_{1}\times m_{2}}\) and \(b,c\in\mathbb{R}^{m_{2}}\) are the learnable parameters, \(n\) is the number of hidden layers, \(l\) is the amino acid sequence length, \(m_{1}\) and \(m_{2}\) are the dimensions of the input and hidden features, \(k\) is the kernel size, \(\sigma\) is the sigmoid function, and \(\otimes\) is the element-wise product between matrices. The gated convolutional network's output is the protein sequence's final representation. In our experiments, \(n\) is 3, \(m_{1}\) is 100, \(m_{2}\) is 128, and \(k\) is 3. The final output of the network is a 100-dimensional vector representing the protein.

Fig. 2: Protein feature learning model architecture: The pre-trained Word2vec model was used to obtain the representation of protein sequences. Each of the obtained word vectors was embedded into the continuous space. After obtaining the embedding matrices, they were fed into gated convolutional neural networks to learn high-level features.
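The gated convolution of Eq. (1) can be sketched in PyTorch as follows; the residual wiring and the input projection are illustrative choices consistent with the description above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """One gated-convolution residual block implementing Eq. (1):
    h(X) = (X*W1 + b) elementwise-times sigmoid(X*W2 + c)."""

    def __init__(self, channels=128, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2                       # keep sequence length
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):                            # x: (batch, channels, L)
        h = self.conv(x) * torch.sigmoid(self.gate(x))
        return x + h                                 # residual connection

# stack n = 3 blocks on top of a 100 -> 128 input projection (m1 -> m2)
protein_encoder = nn.Sequential(
    nn.Conv1d(100, 128, kernel_size=3, padding=1),
    GatedConvBlock(), GatedConvBlock(), GatedConvBlock(),
)
out = protein_encoder(torch.randn(2, 100, 50))       # -> (2, 128, 50)
```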
#### Iii-D2 Compound Feature Learning

The compound distance matrices were obtained from the open-source cheminformatics software RDKit and are represented as \(D\in\mathbb{R}^{d\times d}\), where \(d\) denotes the number of atoms in a single compound molecule. After obtaining the distance matrices, residual networks[58] were used to learn molecular structure features. Residual networks are a type of neural network architecture that uses shortcut connections between layers to address the problems of vanishing gradients and network degradation. By adding shortcut connections every two or three layers, the input can pass through redundant network layers unchanged, which speeds up computation. The compound feature learning architecture is shown in Fig. 3.

In MCPI, the ResNet-V2 module was used for compound feature learning. Since the information transfer between nodes is very important during the propagation of the residual neural network, ResNet-V2 comprehensively improved the placement of the skip connection and the activation in the residual module. ResNet-V2 introduced the concept of "pre-activation", which means that the activation functions (ReLU and BN, batch normalization) are placed before the weight layer. In the traditional "post-activation" method, the output of the first residual unit is regularized by the BN layer and the shortcut is immediately added, but the combined signal is not regularized. This non-regularized signal is directly used as the input of the next residual unit, which may affect performance. With "pre-activation", all inputs are regularized before the weight layer, and this design can improve the performance of ResNet. The original residual module can be expressed in the generic form of Eqs. (2) and (3): \[y_{l}=h(x_{l})+F(x_{l},W_{l}) \tag{2}\] \[x_{l+1}=f(y_{l}) \tag{3}\] where \(h(x_{l})\) is the mapping function and \(F\) is the residual function, called the residual mapping, which consists of two or three convolution operations. \(W_{l}\) is the weight of the residual block, and \(x_{l}\) and \(x_{l+1}\) are the input and output of the \(l\)-th residual unit, respectively. In ResNet V1, \(f\) is the activation function. The idea of ResNet V2 is that the identity mapping occurs not only in individual residual units but throughout the entire network. To achieve this, two conditions need to be satisfied: \(h(x_{l})=x_{l}\) and \(f(y_{l})=y_{l}\). If \(f\) is also an identity mapping, then \(x_{l+1}\equiv y_{l}\). Substituting (3) into (2) then yields (4): \[x_{l+1}=x_{l}+F(x_{l},W_{l}) \tag{4}\] In this way, the deep network unit \(x_{l+1}\) can be expressed as the sum of the shallow network unit \(x_{l}\) and the residual unit \(F(x_{l},W_{l})\), which is a valuable property for backpropagation: the gradient of the deep network is passed directly to the shallow network, so the gradient decay problem is well controlled.

Fig. 3: Compound feature learning model architecture: The ResNet, which combines convolutional layers and residual layers, was used to learn molecular structure features. The feature map of arbitrary size is then sent to the spatial pyramid pooling layer to form a fixed-length feature vector.

For the compound distance matrix \(D\in\mathbb{R}^{d\times d}\), the size of the distance matrix differs from compound to compound because the number of atoms in each compound is different. Although the residual network does not require a fixed input size and can produce feature maps of any size, the subsequent fully connected layer requires a fixed-size input, and cropping causes a loss of information. The fixed-size problem therefore originates from the fully connected layer at the final stage of the network. To solve this problem, we used a spatial pyramid pooling layer[59] after the residual network, whose primary purpose is to generate fixed-size outputs for arbitrarily sized inputs. Spatial pyramid pooling achieves multi-scale input by splitting the feature map of arbitrary size into 16, 4, and 1 blocks according to (4 \(\times\) 4), (2 \(\times\) 2), and (1 \(\times\) 1) grids. It then applies a max-pooling operation of a different size to each grid. The structure is shown in Fig. 4. The grid size determines each pooling layer's window size and step size; the window sizes are computed dynamically as \(d/4\), \(d/2\), and \(d/1\). The input of the pyramid pooling layer is \(e\in\mathbb{R}^{d\times d\times m}\), where \(m\) is the number of channels. After the three pooling operations, the obtained feature dimensions are \(p_{1}\in\mathbb{R}^{16\times m}\), \(p_{2}\in\mathbb{R}^{4\times m}\) and \(p_{3}\in\mathbb{R}^{1\times m}\). Finally, they are stitched together to form a fixed-length feature vector \(p_{final}\in\mathbb{R}^{21\times m}\), used as the final output of compound feature extraction.

Fig. 4: Spatial pyramid pooling layer: First, a feature map of arbitrary size is divided into blocks according to grids of three different sizes. Then a max-pooling operation of a different size is used for each grid. Finally, the vectors obtained from the pooling are stitched together to form a fixed-length feature vector.
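A compact PyTorch sketch of the spatial pyramid pooling step, using adaptive max pooling as a stand-in for the dynamically computed window sizes described above.

```python
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    """Pool an arbitrary-size feature map into fixed 4x4, 2x2 and 1x1
    grids and concatenate, giving 16 + 4 + 1 = 21 pooled cells per
    channel as in the text."""

    def __init__(self, grids=(4, 2, 1)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(g) for g in grids)

    def forward(self, x):                       # x: (batch, m, d, d)
        b, m = x.shape[:2]
        parts = [p(x).reshape(b, m, -1) for p in self.pools]
        return torch.cat(parts, dim=2)          # (batch, m, 21)

spp = SpatialPyramidPooling()
print(spp(torch.randn(2, 64, 17, 17)).shape)    # -> torch.Size([2, 64, 21])
```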
#### Iii-D3 Linear Classifier

After extracting the features of compounds and proteins, we obtain two 100-dimensional feature vectors: \(p_{protein}\), the feature vector of the amino acid sequence processed by the gated 1D-CNN, and \(c_{compound}\), the feature vector of the compound distance matrix processed by the residual network. Before inputting the feature vectors to the linear classifier, we need to connect the feature vectors from the different data sources to obtain the complete features of compounds and proteins. We obtain the node embedding vectors of the PPI network and the CCI network using Node2vec, denoted by \(N_{protein}\) and \(N_{compound}\), respectively. Additionally, we obtain the Morgan fingerprint vector of the compound, denoted by \(c_{finger}\). To create the protein representation, we concatenate the two protein feature vectors \(p_{protein}\) and \(N_{protein}\) into a vector \(v_{protein}\). For the compound representation, we concatenate the three feature vectors \(c_{compound}\), \(N_{compound}\), and \(c_{finger}\) into the vector \(v_{compound}\). Finally, we map the feature vectors \(v_{protein}\) and \(v_{compound}\) to the same fixed-dimensional latent space using fully connected layers \(f\). The calculation process is shown in Eqs. (5), (6), and (7): \[v_{protein}=(p_{protein}\otimes N_{protein}) \tag{5}\] \[v_{compound}=(c_{compound}\otimes c_{finger}\otimes N_{compound}) \tag{6}\] \[T=f(v_{protein})\otimes f(v_{compound}) \tag{7}\] where \(\otimes\) denotes vector concatenation in Eqs. (5) and (6) and the element-wise product in Eq. (7).

The final vectors in our model were generated by performing an element-wise product on protein and compound features of the same dimension. We then calculated the similarity between these vectors in the latent space and fed the result into a fully connected layer to obtain the final prediction. If the prediction exceeded a predefined threshold (set to 0.5 by default), the model predicted an interaction between the input pair; otherwise, the model predicted no interaction. To optimize the model's parameters, we utilized the Adam algorithm[60], which updates the network weights more efficiently than common stochastic gradient descent (SGD). MCPI updates the parameters by minimizing the loss function during training. We used binary cross-entropy as the loss function, defined in Eq. (8): \[Loss=-\frac{1}{N}\sum_{i=1}^{N}y_{i}\cdot\log\bigl{(}p(y_{i})\bigr{)}+(1-y_{i})\cdot\log\bigl{(}1-p(y_{i})\bigr{)} \tag{8}\] where the binary label \(y\) can take the value 0 or 1, \(p(y)\) is the probability that the model's output belongs to label \(y\), and \(N\) is the total number of training samples. Binary cross-entropy is used to evaluate the performance of a binary classification model: if the label \(y\) is 1 and the prediction \(p(y)\) approaches 1, the loss approaches 0; conversely, if the prediction \(p(y)\) approaches 0, the loss becomes exceedingly high. Fig. 5 depicts the architecture of the classifier.

Fig. 5: Structure of the linear classifier: Before feeding into the classifier, the embedding vectors of the different features are connected to obtain the complete feature representations of compounds and proteins. A fully connected layer then maps these feature vectors to the same fixed-dimensional latent space. Finally, the similarity of the two vectors in the latent space is calculated by an element-wise product, and a fully connected layer produces the final prediction result.

## III Results and Discussions

### _Experimental Environment and Parameters_

We evaluated the predictive performance of MCPI by comparing it with state-of-the-art deep learning methods and traditional machine learning methods on two public benchmark sets, namely the Human dataset and the _C.elegans_ dataset[49]. To implement the model, we utilized PyTorch 1.8.0; the Word2vec model from Gensim 4.0.1 to generate 100-dimensional embedding vectors of amino acid sequences; Node2Vec 0.4.0 to obtain 128-dimensional embedding vectors for network features; and the RDKit tool to obtain Morgan fingerprint vectors of length 50 for compound fingerprints. In the experiments, we set the learning rate to 0.0001 and the batch size to 1. We performed hyperparameter optimization by first setting the initial range of each hyperparameter, then randomly selecting parameter values for training, verifying the accuracy, and narrowing the range of parameter values based on the prediction results until the best value was found.
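Pulling Eqs. (5)-(8) together, a minimal PyTorch sketch of the classifier head and one Adam training step; the feature dimensions are the illustrative sums of the components quoted in the text, not a guaranteed match to the released code.

```python
import torch
import torch.nn as nn

class CPIClassifier(nn.Module):
    """Classifier head of Eqs. (5)-(8): concatenated protein and compound
    features are mapped into a shared latent space, combined by an
    element-wise product (Eq. (7)), and scored with a sigmoid."""

    def __init__(self, protein_dim=100 + 128, compound_dim=100 + 50 + 128,
                 latent_dim=128):
        super().__init__()
        self.f_protein = nn.Linear(protein_dim, latent_dim)
        self.f_compound = nn.Linear(compound_dim, latent_dim)
        self.out = nn.Linear(latent_dim, 1)

    def forward(self, v_protein, v_compound):
        t = self.f_protein(v_protein) * self.f_compound(v_compound)
        return torch.sigmoid(self.out(t)).squeeze(-1)

model = CPIClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()                    # binary cross-entropy, Eq. (8)

# one illustrative training step on random stand-in features
v_p, v_c = torch.randn(4, 228), torch.randn(4, 278)
label = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = loss_fn(model(v_p, v_c), label)
opt.zero_grad()
loss.backward()
opt.step()
```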
### _Evaluation Metrics_

During our experiments, we utilized four standard performance metrics for evaluating binary classification problems: the area under the receiver operating characteristic curve (AUC), the area under the precision/recall curve (AUPR), Precision, and Recall. AUC is defined as the area under the ROC curve and represents the likelihood of ranking positive cases above negative cases when the model is scored. Values of AUC, Precision, and Recall approaching 1 indicate superior model performance. In addition, to highlight the model's resolution of positive samples, we also introduced AUPR to evaluate the model. We calculated Precision and Recall using Eqs. (9) and (10): \[Recall=TP/(TP+FN) \tag{9}\] \[Precision=TP/(TP+FP) \tag{10}\] where \(TP\) is the number of successfully identified compound-protein pairs with interactions (true positives); \(FP\) is the number of incorrectly identified interacting pairs that were predicted to be positive samples but are in fact negative samples (false positives); and \(FN\) is the number of interacting pairs incorrectly identified as negative samples, i.e., predicted to be negative but in fact positive (false negatives).
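These metrics can be computed with scikit-learn as in the short sketch below, where average precision serves as the AUPR estimate; the toy labels and scores are placeholders.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             precision_score, recall_score)

def evaluate(y_true, y_score, threshold=0.5):
    """Compute the four reported metrics from labels and predicted
    interaction probabilities. A minimal sketch of the evaluation step."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "AUPR": average_precision_score(y_true, y_score),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
    }

print(evaluate([1, 0, 1, 1, 0], [0.9, 0.2, 0.7, 0.4, 0.6]))
```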
A comparison was also conducted with other deep learning-based methods, namely GraphDTA[33], GCN[65], GNN[35] and TransformerCPI[37]. GraphDTA, GCN, and GNN utilize molecular graphs to represent compound structural information and extract features using graph neural network models, while TransformerCPI modifies the Transformer architecture with a self-attention mechanism to solve the sequence-based CPI classification task. Fig. 6 depicts a histogram with error bars, which indicates that MCPI significantly outperforms the existing deep learning methods. Among these, TransformerCPI exhibits better performance, with AUC values ranging from 0.971 to 0.975, while the molecular graph-based methods perform poorly compared to the distance matrix-based and Transformer architecture-based methods. On the human dataset, our MCPI method achieves the best results in terms of Precision and Recall, with performance gains of 4.80% and 3.57%, respectively, over the second-best method, TransformerCPI. The GCN-based method achieves a Precision of only 0.862, which is significantly lower than that of the other deep learning and machine learning methods. To comprehensively evaluate the resolution of MCPI for positive samples, we used AUC and AUPR as evaluation indicators; the experimental results show that MCPI is superior to the other methods in resolving positive samples.

### _Testing on C.elegans Dataset_

Similarly, on the _C. elegans_ dataset, we compared MCPI with traditional machine learning methods and several advanced deep learning methods. The AUC, AUPR, Precision, and Recall results on the _C. elegans_ dataset compared to the traditional machine learning approaches are shown in Table II. As Table II shows, MCPI performed better on _C. elegans_ than on the human dataset. Compared to traditional machine learning methods, MCPI significantly outperforms most of them on all metrics, with an AUC of about 0.99. In addition, a comparison with other deep learning-based methods was also conducted, including GraphDTA, GCN, GNN and TransformerCPI. Fig. 7 shows the histogram with error bars.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **AUC** & **AUPR** & **Precision** & **Recall** \\ \hline KNN (Cover[43]) & 0.858 & 0.862 & 0.801 & 0.827 \\ & \(\pm\)0.008 & \(\pm\)0.009 & \(\pm\)0.010 & \(\pm\)0.009 \\ RF (Liaw[44]) & 0.904 & 0.951 & 0.897 & 0.861 \\ & \(\pm\)0.004 & \(\pm\)0.003 & \(\pm\)0.003 & \(\pm\)0.003 \\ L2-logistic (Kleinbaum[45]) & 0.911 & 0.921 & 0.913 & 0.867 \\ & \(\pm\)0.010 & \(\pm\)0.009 & \(\pm\)0.007 & \(\pm\)0.008 \\ SVM (Cortes[46]) & 0.910 & 0.923 & 0.910 & 0.939 \\ & \(\pm\)0.023 & \(\pm\)0.026 & \(\pm\)0.023 & \(\pm\)0.025 \\ **MCPI** & **0.978** & **0.986** & **0.960** & **0.958** \\ & **\(\pm\)0.002** & **\(\pm\)0.002** & **\(\pm\)0.004** & **\(\pm\)0.005** \\ \hline \hline \end{tabular} \end{table} TABLE II: _C. elegans_ dataset results: this table shows the prediction performance of our model compared to traditional machine learning methods on the _C. elegans_ dataset. MCPI significantly outperforms most traditional machine learning methods in all metrics.
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **AUC** & **AUPR** & **Precision** & **Recall** \\ \hline KNN (Cover[43]) & 0.862 & 0.871 & 0.927 & 0.798 \\ & \(\pm\)0.008 & \(\pm\)0.009 & \(\pm\)0.005 & \(\pm\)0.012 \\ RF (Liaw[44]) & 0.940 & 0.951 & 0.897 & 0.861 \\ & \(\pm\)0.004 & \(\pm\)0.003 & \(\pm\)0.003 & \(\pm\)0.003 \\ L2-logistic (Kleinbaum[45]) & 0.911 & 0.921 & 0.913 & 0.867 \\ & \(\pm\)0.010 & \(\pm\)0.009 & \(\pm\)0.007 & \(\pm\)0.008 \\ SVM (Cortes[46]) & 0.910 & 0.923 & 0.910 & 0.939 \\ & \(\pm\)0.023 & \(\pm\)0.026 & \(\pm\)0.023 & \(\pm\)0.025 \\ **MCPI** & **0.978** & **0.986** & **0.960** & **0.958** \\ & **\(\pm\)0.002** & **\(\pm\)0.002** & **\(\pm\)0.004** & **\(\pm\)0.005** \\ \hline \hline \end{tabular} \end{table} TABLE I: Human dataset results: this table shows the prediction performance of our model compared to traditional machine learning methods on the human dataset. MCPI significantly outperforms the traditional machine learning methods in all metrics.

Fig. 6: Comparison with deep learning methods on the human dataset; the figure shows the AUC, AUPR, precision, and recall scores of GraphDTA, GCN, GNN, TransformerCPI, and MCPI. The error bar at the top of each histogram represents the standard deviation of the experiment, including the “upper error” and “lower error”. As shown in the figure, MCPI outperforms the other methods in all metrics.

As shown in Fig. 7, MCPI clearly outperforms the existing deep learning methods. Among them, the Transformer-architecture-based method has a better Recall of 0.953. When evaluated on the _C. elegans_ dataset, our MCPI method outperforms the second-best method, TransformerCPI, across all key metrics, including AUC, AUPR, Precision, and Recall, highlighting its superiority in accurately predicting CPIs. Overall, MCPI holds an advantage of varying size over the other methods in terms of AUC, Precision, and Recall, and delivers the best performance among all methods. MCPI likewise outperforms the other advanced deep learning methods on the AUPR metric, demonstrating its superiority in both comprehensive CPI prediction performance and its ability to effectively distinguish positive samples.

### _Ablation Experiments for Features_

In order to assess the efficacy of the MCPI model, which incorporates relevant network and structure features, we conducted a series of ablation experiments on both the human and _C. elegans_ datasets. The purpose of these experiments was to validate the effectiveness of each individual feature embedded in the model. To achieve this, we designed four different feature combination methods: the first included only molecular features (Molecular features); the second included PPI and CCI network features (Network features); the third included both molecular and PPI network features (P features); and the fourth included both molecular and CCI network features (C features). We compared the performance of these methods with that of the MCPI model and present the experimental results in Table III and Table IV. The corresponding figures for the table data are shown in Fig. 8. The ablation experiments reveal a significant decrease in the model's performance whenever any of the features is removed. Notably, the MCPI model, which integrates the relevant network and structure features, provides the most informative representation, with AUC values of 0.978 and 0.989 on the human and _C. elegans_ datasets, respectively. Intriguingly, however, the models containing only a single type of network feature exhibited different performances.
While neither the P features method nor the C features method performed as well as the full MCPI model, the P features method appeared to outperform the C features method. Specifically, the PPI network features seem to provide more information than the CCI network features, especially on the _C. elegans_ dataset, where the performance gap between the two methods is larger. This may be because the compounds in the _C. elegans_ dataset are less connected and less correlated, providing less additional information for CPI prediction. The performance of the Molecular features method was intermediate between MCPI and the Network features method. In contrast, the Network features method exhibited the largest performance gap, as it solely contained interaction network features without any molecular features. Therefore, the molecular and interaction network features complement each other, and the effectiveness of the MCPI model with both feature types was verified, demonstrating that the interaction networks play a crucial role in CPI prediction.

### _Prediction of Potential Inhibitors for SARS-CoV-2 3CL Protease_

SARS-CoV-2, a member of the Coronavirus family, is an enveloped virus with a positive single-stranded RNA genome.

\begin{table} \begin{tabular}{c c c c} \hline Method & AUC & Precision & Recall \\ \hline **MCPI** & **0.990** & **0.955** & **0.954** \\ P features & 0.981 & 0.946 & 0.942 \\ C features & 0.952 & 0.924 & 0.917 \\ Molecular features & 0.950 & 0.924 & 0.915 \\ Network features & 0.883 & 0.870 & 0.859 \\ \hline \end{tabular} \end{table} TABLE IV: Comparison of four different feature combination methods with MCPI on the _C. elegans_ dataset: the table shows the AUC, Precision, and Recall scores of the P features method (molecular and PPI network features), the C features method (molecular and CCI network features), the Molecular features method (molecular features only), the Network features method (PPI and CCI network features only), and MCPI. The data show that ablating features significantly impairs the performance of the model.

\begin{table} \begin{tabular}{l c c c} \hline Method & AUC & Precision & Recall \\ \hline **MCPI** & **0.978** & **0.960** & **0.958** \\ P features & 0.965 & 0.951 & 0.947 \\ C features & 0.940 & 0.927 & 0.920 \\ Molecular features & 0.934 & 0.921 & 0.913 \\ Network features & 0.900 & 0.882 & 0.869 \\ \hline \end{tabular} \end{table} TABLE III: Comparison of four different feature combination methods with MCPI on the human dataset: the table shows the AUC, Precision, and Recall scores of the P features method (molecular and PPI network features), the C features method (molecular and CCI network features), the Molecular features method (molecular features only), the Network features method (PPI and CCI network features only), and MCPI. The data show that ablating features significantly impairs the performance of the model.

Fig. 7: Comparison with deep learning methods on the _C. elegans_ dataset; the figure shows the AUC, precision, and recall scores of GraphDTA, GCN, GNN, TransformerCPI, and our proposed model.
The error bar at the top of each histogram represents the standard deviation of the experiment, including the “upper error” and “lower error”. As shown in the figure, MCPI outperforms the other methods in all metrics.

This pathogen has led to almost 100 million cases of COVID-19 worldwide, resulting in millions of fatalities[66]. Although some treatments for COVID-19 have been discovered, their clinical efficacy is low or they need to be administered within a narrow therapeutic window. Therefore, there is a need for ongoing research into various treatment methods. SARS-CoV-2, like other coronaviruses, relies on an essential 3CL protease (3CLPro or MPro) for processing its polyproteins. This enzyme plays a crucial role in the polyprotein processing of viral RNA translation and is considered the main protease. In fact, 3CLPro cleaves at least 11 sites on the large polyprotein 1AB (Replicase 1AB), and blocking its activity can prevent virus replication[67]. Hence, the potential of the 3CL protease as a target for antiviral drugs has piqued the interest of scientists. The SARS-CoV-2 pandemic necessitates the swift development of drugs, and repurposing existing drugs is a viable approach to overcome the research and development obstacles associated with creating new drugs. Repurposed drugs also carry a lower risk of failure and require less investment than new drug development [68]. Therefore, this study aimed to utilize our predictive model to identify potential inhibitors among drugs already approved by the US Food and Drug Administration (FDA). Our model was trained on a dataset consisting of molecules screened for fragments bound to the SARS-CoV-2 3CL protease (3CLpro) using crystallographic techniques. We chose the SARS-CoV-2 main protease with active site 6YB7 (PDB ID) as the prediction target, based on data published by the Diamond Light Source group, which contains roughly 880 sample compounds with 78 interactions. Furthermore, we extracted all drugs in DrugBank [50] except those in the training dataset and generated test samples by pairing each drug with the SARS-CoV-2 main protease (6YB7). Table V lists some of the compounds (drugs) predicted to interact with the protease. Upon analyzing the prediction results against the literature, we found that anti-cancer and anti-viral drugs may have more interactions that could assist in treating COVID-19. For instance, Ibrutinib, an anti-neoplastic drug used to treat chronic lymphocytic leukemia, is a Bruton's tyrosine kinase (BTK) inhibitor and has been investigated as a possible means of reducing the overly severe immune response in COVID-19 [69]. Similarly, Imatinib, another tyrosine kinase inhibitor used to treat leukemia, has been shown to be effective in treating SARS-CoV-2 infection, according to related studies [70]. Famotidine, a competitive histamine-2 (H2) receptor antagonist, is also a promising drug, slowing disease progression in COVID-19 patients by reducing the histamine-mediated cytokine storm [71]. Furthermore, Linagliptin, a DPP-4 inhibitor used to treat type II diabetes, can reduce COVID-19 severity by reducing inflammation [72].
Although the following drugs are predicted to be potentially effective, they are not currently known to have inhibitory effects on SARS-CoV-2: Mechlorethamine, a nitrogen mustard compound and anti-tumor drug; Abacavir, an anti-viral nucleoside reverse transcriptase inhibitor used to treat HIV, which may also prove effective against SARS-CoV-2; Oxaprozin, an anti-inflammatory drug mainly used for treating arthritis; and Sulfadiazine, a sulfonamide anti-bacterial drug used to treat various bacterial infections such as bronchitis, which we speculate may be effective against bronchitis caused by COVID-19. In this experiment, we identified potential inhibitors of SARS-CoV-2 3CLPro, and these candidates showed similarities in both chemical and pharmacological classification. Moreover, MCPI also predicted compounds that have shown efficacy in some ongoing studies, making this a significant finding. Nevertheless, proper drug application necessitates in vitro and in vivo validation experiments, as well as clinical trials, to verify a drug's efficacy and other desired properties.

Fig. 8: Comparison of four different feature combination methods.

## IV Accessible Application

We have developed an accessible MCPI server that enables researchers to predict potential interactions between drug molecules and target proteins, based on molecular features and network features of drug data and omics-scale protein data. Users simply need to provide the protein sequence and compound SMILES as inputs to MCPI. During the calculation process, MCPI can leverage additional data, including the compound distance matrix, compound molecular fingerprint, and biological interaction network, based on the user's input. This comprehensive approach ensures accurate predictions of compound-protein interactions. The overall architecture of the MCPI server is depicted in Fig. 9. Upon user input, the server generates the word2vec encoding of the protein residue sequence and utilizes key functions of RDKit to obtain the fingerprint and distance matrix of the compound. Subsequently, the MCPI server searches the protein and compound interaction networks, employing the encapsulated Node2vec model to obtain the network encoding. Finally, the acquired feature codes are input to the prediction model, and the results are output on the page. The web server is available at [http://47.99.71.176:5000/index](http://47.99.71.176:5000/index).

## V Conclusion

In this work, we introduced MCPI, a novel prediction model that aims to improve the accuracy of predicting protein-compound interactions. To achieve this, we integrated the PPI network, the CCI network, and structural features of the molecules involved in CPI. To extract protein features, we utilized a gated CNN operating on amino acid sequences pre-trained with word2vec. For compounds, we used a combination of distance matrices and fingerprints, which allows for a comprehensive representation of both the molecular structure and semantic features. We further enhanced our model by using residual networks to extract advanced molecular features, outperforming other methods that rely solely on pre-trained embeddings. To capture network embeddings, we applied the Node2vec model to the interaction networks. Our approach outperforms previously proposed CPI models and traditional machine learning-based models, as demonstrated through our evaluation. Additionally, we leveraged MCPI to identify potential inhibitors among FDA-approved drugs for SARS-CoV-2, validating our predictions against the literature.
Our work may provide valuable insights for drug development and may guide future efforts in this area.
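As a concrete companion to the server description in Sec. IV, the following is a minimal featurization sketch using standard RDKit, Gensim, and node2vec calls. It is our illustration, not the server code: the example SMILES, the 3-gram protein splitting, and the toy interaction graph are assumptions, while the fingerprint length and embedding dimensions follow the experimental setup described earlier.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from gensim.models import Word2Vec
import networkx as nx
from node2vec import Node2Vec

# Compound: Morgan fingerprint of length 50 (as in the experiments) and the
# topological distance matrix.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")        # aspirin, example input
fingerprint = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=50)
dist_matrix = Chem.GetDistanceMatrix(mol)

# Protein: 100-dimensional word2vec embeddings of amino-acid "words"
# (the 3-gram splitting of the sequence is an assumed scheme).
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
words = [seq[i:i + 3] for i in range(len(seq) - 2)]
w2v = Word2Vec([words], vector_size=100, window=5, min_count=1)
protein_vectors = [w2v.wv[w] for w in words]

# Network features: 128-dimensional node2vec embeddings of an interaction
# network (a toy graph stands in for the PPI/CCI networks).
g = nx.karate_club_graph()
n2v = Node2Vec(g, dimensions=128, walk_length=10, num_walks=20, workers=1)
network_embeddings = n2v.fit(window=5, min_count=1).wv
```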
2310.04603
Phoretic swimming with bulk absorption
We consider phoretic self-propulsion of a chemically active colloid where solute is consumed at both the colloid boundary and within the bulk solution. Assuming first-order kinetics, the dimensionless transport problem is governed by the surface Damk\"ohler number ${\mathcal{S}}$ and the bulk Damk\"ohler number ${\mathcal B}$. The dimensionless colloid velocity $U$, normalized by a self-phoretic scale, is a nonlinear function of these two parameters. We identify two scenarios where these numbers are linked. When the controlling physical parameter is colloid size, ${\mathcal{S}}$ is proportional to ${\mathcal B}^{1/2}$; when the controlling parameter is solute diffusivity, ${\mathcal{S}}$ is proportional to ${\mathcal B}$. In the limit of small Damk\"ohler numbers, $U$ adopts the same asymptotic limit in both scenarios, proportional to ${\mathcal{S}}$. In the limit of large Damk\"ohler numbers, the deviations of solute concentration from the equilibrium value are restricted to a narrow layer about the active portion of the colloid boundary. The asymptotic predictions of the associated boundary-layer problem are corroborated by an eigenfunction solution of the exact problem. The boundary-layer structure breaks down near the transition between the active and inactive portions of the boundary. The transport problem in that local region partially resembles the classical Sommerfeld problem of wave diffraction from an edge.
Rodolfo Brandão, David Saintillan, Ehud Yariv
2023-10-06T21:41:06Z
http://arxiv.org/abs/2310.04603v1
# Phoretic swimming with bulk absorption

###### Abstract

We consider phoretic self-propulsion of a chemically active colloid where solute is consumed at both the colloid boundary and within the bulk solution. Assuming first-order kinetics, the dimensionless transport problem is governed by the surface Damkohler number \(\mathcal{S}\) and the bulk Damkohler number \(\mathcal{B}\). The dimensionless colloid velocity \(U\), normalized by a self-phoretic scale, is a nonlinear function of these two parameters. We identify two scenarios where these numbers are linked. When the controlling physical parameter is colloid size, \(\mathcal{S}\) is proportional to \(\mathcal{B}^{1/2}\); when the controlling parameter is solute diffusivity, \(\mathcal{S}\) is proportional to \(\mathcal{B}\). In the limit of small Damkohler numbers, \(U\) adopts the same asymptotic limit in both scenarios, proportional to \(\mathcal{S}\). In the limit of large Damkohler numbers, the deviations of solute concentration from the equilibrium value are restricted to a narrow layer about the active portion of the colloid boundary. The asymptotic predictions of the associated boundary-layer problem are corroborated by an eigenfunction solution of the exact problem. The boundary-layer structure breaks down near the transition between the active and inactive portions of the boundary. The transport problem in that local region partially resembles the classical Sommerfeld problem of wave diffraction from an edge.

## I Introduction

The remarkable propulsion exhibited by chemically active particles in liquid solutions, known as self-diffusiophoresis, has garnered significant attention following experimental breakthroughs in catalytic swimmers [1]. The fundamental mechanism underpinning phoretic self-propulsion involves two key components: solute production or consumption at the particle boundary, coupled with short-range interactions between the solute molecules and that boundary. Golestanian _et al._[2] introduced the first macroscale model to describe self-diffusiophoresis under Stokes flow conditions, accounting for diffusive solute transport. In that model, chemical reactions at the particle boundary are represented through a prescribed solute flux distribution, while mechanical interactions with solute molecules are captured through a diffusio-osmotic slip velocity -- proportional to the tangential solute gradient at the outer edge of the interaction layer [3]. In the absence of solute advection, the linearity of the governing equations and boundary conditions implies that an asymmetry in the particle shape or physicochemical properties is required for self-propulsion: in typical experiments involving spherical colloids, this asymmetry is achieved by coating half of a particle with a catalyst. A more sophisticated model of surface reactions, which better describes experimental systems, involves first-order chemical kinetics [4]. The associated boundary condition imposes a linear relation between the solute flux and the local concentration, whose characteristic ratio defines the surface Damkohler number (hereafter denoted by \(\mathcal{S}\)). For slow reaction rates (\(\mathcal{S}\to 0\)), the imposed-flux model is recovered. Accounting for a finite Damkohler number has proven to be essential for capturing the dependence of the propulsion speed on particle size, as observed in experiments [4].
Of interest to us in this work is the case where the excess solute gets consumed in the bulk liquid surrounding the particle, for instance as a result of chemical degradation or bulk reaction with another solute. In that case, the strength of consumption is characterized by a bulk Damkohler number, hereafter denoted by \(\mathcal{B}\), defined as the ratio of the reactive to diffusive consumption rates. Solute bulk absorption has already been studied using both numerical simulations [5] and weakly nonlinear analyses near the threshold for spontaneous motion [6]. In certain ill-posed (e.g. steady self-phoresis in two dimensions [7; 8]) and singular (e.g. spontaneous particle motion in channels [9]) problems, even a weak bulk reaction may have a significant effect. Here, we analyze the steady motion of a spherical self-phoretic particle. The paper is organized as follows. We formulate the problem in Sec. II and present the dimensionless governing equations in Sec. III. An exact solution based upon an eigenfunction expansion is derived in Sec. IV. The linkage between the two Damkohler numbers is discussed in Sec. V, where we also consider the case of small \(\mathcal{S}\) and \(\mathcal{B}\). The limit of large Damkohler numbers is addressed in Sec. VI. The associated boundary-layer analysis breaks down near the junction between the active and inactive portions of the particle boundary. We analyze the structure of this transition region in Sec. VII. Illustrative examples are presented in Sec. VIII. We conclude in Sec. IX.

## II Problem formulation

A chemically active spherical particle (radius \(a\)) is freely suspended in an unbounded solution (solute diffusivity \(D\)). The equilibrium solute concentration, at large distances from the particle, is denoted by \(c_{\infty}\). Solute transfer at the particle boundary is modeled using a first-order chemical reaction [4], \[\text{solute absorption (per unit area)}=k\times\text{local solute concentration}, \tag{2.1}\] where the (positive) rate constant \(k\) generally varies along the boundary. In addition, we assume [5; 10] that solute is consumed in the bulk in proportion to the excess concentration -- the deviation of its concentration from the equilibrium value, \[\text{solute consumption (per unit volume)}=k_{b}\times\left\{\text{local solute concentration}-c_{\infty}\right\}. \tag{2.2}\] The (positive) bulk rate \(k_{b}\) is assumed uniform. Following the prevailing practice [4; 11], we restrict the subsequent analysis to situations where \(k\) is symmetric about an axis passing through the particle center; self-propulsion accordingly takes place in the form of rigid translation in a direction parallel to that axis. Our goal is the associated speed. Defining \(\bar{k}\) as a characteristic norm of \(k\), relation (2.1) leads to the definition of the surface Damkohler number, \[\mathcal{S}=\frac{a\bar{k}}{D}, \tag{2.3}\] representing the ratio of reactive (\(\bar{k}c_{\infty}\)) to diffusive (\(Dc_{\infty}/a\)) solute flux densities. The problem is also affected by the bulk Damkohler number, \[\mathcal{B}=\frac{a^{2}k_{b}}{D}, \tag{2.4}\] representing the ratio of reactive (\(k_{b}c_{\infty}\)) to diffusive (\(Dc_{\infty}/a^{2}\)) consumption rates. We employ a macroscale description, where the short-range interaction between the solute molecules and the particle is manifested by diffusio-osmotic slip [3], \[\text{slip velocity}=b\times\text{surface gradient of solute concentration}. \tag{2.5}\] We assume that \(b\) is uniform.
Note that \(b\) is a signed quantity, positive for repulsive interactions and negative for attractive ones. The velocity scale associated with (2.5) is \(\mathcal{U}=bc_{\infty}/a\). We adopt a co-moving reference frame with the origin at the particle center. In that frame we utilize the spherical coordinates \((ar,\theta,\phi)\), defined such that the axis \(\theta=0,\pi\) is aligned along the symmetry diameter of the particle and \(r=1\) is the particle boundary; see Fig. 1. The specified axially-symmetric activity is then represented by the function \(k(\theta)\). Consistently with the macroscale description, the particle acquires the rectilinear velocity required to keep it force-free. Due to the axial symmetry, the particle velocity relative to the otherwise quiescent liquid must be parallel to the symmetry axis, say \(U^{*}\hat{\mathbf{i}}\) (\(\hat{\mathbf{i}}\) being a unit vector in the direction \(\theta=0\)). Our goal is the calculation of \(U^{*}\).

Figure 1: Schematic showing the particle geometry and coordinates. The zoomed region (rotated) describes the transition-region coordinates.

## III Dimensionless description

In what follows we consider the coupled transport-flow problem governing the excess solute concentration \(c\), normalized by \(c_{\infty}\), and the fluid velocity \(\mathbf{u}\), normalized by \(\mathcal{U}\). Given the presumed axial symmetry, \(c\) is a function of \(r\) and \(\theta\). In the particle-fixed reference frame, the velocity of the particle is manifested as the uniform streaming \(-U\hat{\mathbf{i}}\) at infinity, where \(U=U^{*}/\mathcal{U}\). The dimensionless solute transport problem is governed by: (i) the diffusion-reaction equation, \[\nabla^{2}c=\mathcal{B}c, \tag{3.1}\] wherein \[\nabla^{2}=\frac{\partial^{2}}{\partial r^{2}}+\frac{2}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) \tag{3.2}\] is the pertinent dimensionless (axisymmetric) Laplacian; (ii) the kinetic condition at the particle boundary, \[\frac{\partial c}{\partial r}=\mathcal{S}(1+c)f(\theta)\quad\text{at}\quad r=1, \tag{3.3}\] where \(f(\theta)=k(\theta)/\bar{k}\) is the dimensionless distribution of the rate constant; and (iii) the approach to equilibrium at large distances, \[c\to 0\quad\text{as}\quad r\to\infty. \tag{3.4}\] Note that condition (3.3) is meaningful only with solute consumption, \(f\geq 0\). The preceding problem provides \(c\) as a function of the governing parameters \(\mathcal{S}\) and \(\mathcal{B}\), as well as \(f(\theta)\). Once solved, we can consider the flow, governed by: (i) the continuity and Stokes equations [the former tacitly employed in (3.1)]; (ii) the diffusio-osmotic slip [cf. (2.5)], \[\mathbf{u}=\hat{\mathbf{e}}_{\theta}\frac{\partial c}{\partial\theta}\quad\text{at}\quad r=1; \tag{3.5}\] (iii) the far-field approach to a uniform stream (see Fig. 1), \[\mathbf{u}\to-U\hat{\mathbf{i}}\quad\text{as}\quad r\to\infty; \tag{3.6}\] and (iv) the requirement that the particle is force-free. In fact, the detailed calculation of the flow field is not required, as use of the reciprocal theorem [12] provides \(U\) as the quadrature \[U=\frac{1}{2}\int_{0}^{\pi}\left.\frac{\partial c}{\partial\theta}\right|_{r=1}\sin^{2}\theta\,d\theta, \tag{3.7}\] or, following integration by parts, \[U=-\int_{0}^{\pi}\left.c\right|_{r=1}\sin\theta\cos\theta\,d\theta. \tag{3.8}\] In what follows, it may be convenient to employ \(\mu=\cos\theta\) instead of \(\theta\).
Writing \(f(\theta)=F(\mu)\), it is natural to represent \(F\) as a series of surface harmonics, \[F(\mu)=\sum_{m=0}^{\infty}F_{m}P_{m}(\mu), \tag{3.9}\] wherein \(P_{m}\) are the Legendre polynomials of degree \(m\). Using the orthogonality of these polynomials, \[\int_{-1}^{1}P_{m}(\mu)P_{n}(\mu)\,d\mu=\frac{2}{2m+1}\delta_{mn}, \tag{3.10}\] we obtain \[F_{m}=\frac{2m+1}{2}\int_{-1}^{1}F(\mu)P_{m}(\mu)\,d\mu. \tag{3.11}\] When using \(\mu\), (3.8) simplifies to \[U=-\int_{-1}^{1}\mu\left.c\right|_{r=1}\,d\mu. \tag{3.12}\]

## IV Exact solution

Using the eigenfunctions of the modified Helmholtz equation, we find that the most general axisymmetric solution of (3.1)-(3.2) and (3.4) is \[c=\sum_{n=0}^{\infty}A_{n}r^{-1/2}K_{n+1/2}(\mathcal{B}^{1/2}r)P_{n}(\cos\theta). \tag{4.1}\] Here \(K_{\nu}\) are the modified Bessel functions of the second kind with degree \(\nu\). Substitution of (3.9) and (4.1) into condition (3.3) yields \[\sum_{n=0}^{\infty}A_{n}\left[\mathcal{B}^{1/2}K_{n+1/2}^{\prime}(\mathcal{B}^{1/2})-\frac{1}{2}K_{n+1/2}(\mathcal{B}^{1/2})\right]P_{n}(\mu)\\ =\mathcal{S}\left[1+\sum_{n=0}^{\infty}A_{n}K_{n+1/2}(\mathcal{B}^{1/2})P_{n}(\mu)\right]\sum_{m=0}^{\infty}F_{m}P_{m}(\mu), \tag{4.2}\] where the prime denotes differentiation with respect to the argument. Projection of (4.2) upon \(P_{m}(\mu)\) (\(m=0,1,2,\ldots\)) yields an infinite linear system governing the coefficients \(\{A_{n}\}_{n=0}^{\infty}\). Using controlled truncation, this system may be solved in principle for any values of \(\mathcal{B}\) and \(\mathcal{S}\) and a given activity distribution \(f(\theta)\). Once solved, substitution into (3.12) yields, upon making use of the orthogonality relations (3.10), \[U=-\frac{2}{3}A_{1}K_{3/2}(\mathcal{B}^{1/2}). \tag{4.3}\] Prior to illustrating the exact solution for a specific activity distribution, it is desirable to supplement it with asymptotic approximations.

## V Linked Damkohler numbers

Considering the manner by which the Damkohler numbers (2.3)-(2.4) depend upon the dimensional quantities in the problem, there are two natural scenarios where these two numbers are linked. The first scenario, \[\mathcal{S}\propto\sqrt{\mathcal{B}}, \tag{5.1}\] corresponds to the situation where the particle size \(a\) is allowed to vary, while all other dimensional quantities are fixed. The second scenario, \[\mathcal{S}\propto\mathcal{B}, \tag{5.2}\] corresponds to the situation where it is the diffusivity \(D\) that is allowed to vary. These linkages suggest that in an asymptotic analysis, we should study the situation where both numbers are either small or large. For small Damkohler numbers, the leading-order calculation is actually independent of the linkage. Indeed, it is evident from (3.3) that \(c\) is of order \(\mathcal{S}\), while from (3.1) we see that at leading order \(c\) is governed by Laplace's equation. Writing \(c=\mathcal{S}\dot{c}+\cdots\), we find from (3.3) that \(\dot{c}\) satisfies \[\frac{\partial\dot{c}}{\partial r}=f(\theta)\quad\text{at}\quad r=1. \tag{5.3}\] Writing the harmonic field \(\dot{c}\) as a sum of spherical harmonics, \[\dot{c}=\sum_{m=0}^{\infty}a_{m}\frac{P_{m}(\mu)}{r^{m+1}}, \tag{5.4}\] we readily obtain using (3.9), \[a_{m}=-(m+1)^{-1}F_{m}\quad\text{for}\quad m\neq 0. \tag{5.5}\] The particle velocity is then obtained from (3.10), (3.12) and (5.5): \[U=\frac{\mathcal{S}F_{1}}{3}+\cdots. \tag{5.6}\] This leading-order velocity is unaffected by bulk absorption.
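Before turning to large Damkohler numbers, the following minimal numerical sketch (ours, not code accompanying the paper) implements the truncated linear system obtained by projecting (4.2) and evaluates \(U\) from (4.3). For numerical robustness it solves for the rescaled coefficients \(G_{n}=A_{n}K_{n+1/2}(\mathcal{B}^{1/2})\), in terms of which \(U=-\tfrac{2}{3}G_{1}\); the truncation order and quadrature size are illustrative choices.

```python
import numpy as np
from scipy.special import kv, kvp, eval_legendre

def swim_speed(S, B, F, N=40, nquad=400):
    """F(mu): activity profile; returns the swimming speed U of (4.3)."""
    mu, w = np.polynomial.legendre.leggauss(nquad)  # quadrature on [-1, 1]
    n = np.arange(N)
    sqB = np.sqrt(B)
    # B^{1/2} K'_{n+1/2}/K_{n+1/2} - 1/2: the radial-derivative factor in (4.2)
    ratio = sqB * kvp(n + 0.5, sqB) / kv(n + 0.5, sqB) - 0.5
    P = np.array([eval_legendre(k, mu) for k in n])          # P[k, j] = P_k(mu_j)
    Fmu = F(mu)
    M = np.diag(ratio * 2.0 / (2 * n + 1))                   # projected LHS of (4.2)
    M -= S * np.einsum('j,mj,nj->mn', w * Fmu, P, P)         # c-dependent RHS part
    b = S * P @ (w * Fmu)                                    # constant RHS part
    G = np.linalg.solve(M, b)                                # rescaled coefficients
    return -2.0 / 3.0 * G[1]                                 # Eq. (4.3)

janus = lambda mu: (mu > 0).astype(float)                    # profile (8.1) below
print(swim_speed(0.01, 1e-4, janus))  # ~ S/4: the small-Damkohler limit (5.6)
print(swim_speed(1e2, 1e4, janus))    # approaches 1/4: limit (8.3) with alpha = 1
```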
## VI Large Damkohler numbers

In the limit of large Damkohler numbers we find from (3.1) and (3.4) that \[c\equiv 0. \tag{6.1}\] Since this is an exact solution of both (3.1) and (3.4), it is evident that the asymptotic error is exponentially small. The trivial solution (6.1) is clearly incompatible with (3.3) at \(\mathcal{A}\), the active portion of the boundary (see Fig. 1), \[\mathcal{A}=\{\theta\in(0,\pi)|f(\theta)>0\}. \tag{6.2}\] Seeking an additional distinguished limit at large Damkohler numbers, we observe from (3.1) a possible dominant balance with spatial variations across a narrow region of \(\text{ord}(\mathcal{B}^{-1/2})\) width. We therefore postulate a boundary layer of that width about \(\mathcal{A}\). Defining the stretched coordinate \[Y=\mathcal{B}^{1/2}(r-1), \tag{6.3}\] we write in the boundary layer \[c(r,\theta;\mathcal{B})=\tilde{c}(Y,\theta;\mathcal{B}). \tag{6.4}\] Substitution of (6.3)-(6.4) into the diffusion-reaction equation (3.1) yields \[\frac{\partial^{2}\tilde{c}}{\partial Y^{2}}+2\mathcal{B}^{-1/2}\frac{\partial\tilde{c}}{\partial Y}+\cdots=\tilde{c}\quad\text{for}\quad Y>0. \tag{6.5}\] Condition (3.3) becomes \[\mathcal{B}^{1/2}\frac{\partial\tilde{c}}{\partial Y}=\mathcal{S}(1+\tilde{c})f(\theta)\quad\text{at}\quad Y=0, \tag{6.6}\] and the requirement of matching with the "outer" solution (6.1) implies the far-field decay \[\lim_{Y\to\infty}\tilde{c}=0. \tag{6.7}\] Once the boundary-layer problem is solved, the particle speed is readily obtained from (3.8) as \[U=-\int_{\mathcal{A}}\left.\tilde{c}\right|_{Y=0}\sin\theta\cos\theta\,d\theta. \tag{6.8}\] We only seek the leading-order solution. Thus, we have from (6.5) \[\frac{\partial^{2}\tilde{c}}{\partial Y^{2}}=\tilde{c}\quad\text{for}\quad Y>0. \tag{6.9}\] The solution of (6.7) and (6.9) is \[\tilde{c}(Y,\theta)=-L(\theta)e^{-Y}. \tag{6.10}\] Substitution into (6.8) gives \[U=\int_{\mathcal{A}}L(\theta)\sin\theta\cos\theta\,d\theta. \tag{6.11}\] Up to this point, the analysis has been independent of the linkage between \(\mathcal{B}\) and \(\mathcal{S}\). The distribution \(L(\theta)\), however, is determined by condition (6.6), whose leading-order form depends upon the specific linkage. The case (5.1) of linkage by size is conveniently represented by the relation \[\mathcal{S}=\alpha\sqrt{\mathcal{B}}, \tag{6.12}\] where \(\alpha\) is fixed. Condition (6.6) then reads, at leading order, \[\frac{\partial\tilde{c}}{\partial Y}=\alpha(1+\tilde{c})f(\theta)\quad\text{at}\quad Y=0. \tag{6.13}\] Substitution of (6.10) then gives \[L(\theta)=\frac{\alpha f(\theta)}{1+\alpha f(\theta)}. \tag{6.14}\] We therefore find from (6.11) that, at leading order, \[U=\alpha\int_{\mathcal{A}}\frac{f(\theta)}{1+\alpha f(\theta)}\sin\theta\cos\theta\,d\theta. \tag{6.15}\] The case (5.2) of linkage by diffusivity is represented by the relation \[\mathcal{S}=\beta\mathcal{B}, \tag{6.16}\] where \(\beta\) is considered fixed. Here, at leading order, condition (6.6) gives \[\tilde{c}=-1\quad\text{at}\quad Y=0. \tag{6.17}\] Substitution of (6.10) then gives \[L(\theta)\equiv 1. \tag{6.18}\] We therefore find from (6.11) that \[U=\int_{\mathcal{A}}\sin\theta\cos\theta\,d\theta, \tag{6.19}\] at leading order. Remarkably, the particle velocity depends only upon the active fraction of the boundary; it is independent of \(\beta\) and \(f\) and is accordingly insensitive to the details of the activity profile. In what follows, it is convenient to restrict the analysis to the case (see Fig. 1) where \(\mathcal{A}=(0,\theta^{*})\) with \(0<\theta^{*}<\pi\) [cf.
(8.1) and (8.4)]. Under this modest restriction, (6.19) gives \[U=\frac{\sin^{2}\theta^{*}}{2}. \tag{6.20}\]

## VII Transition region

The solution in the limit of large Damkohler numbers, with linkage by diffusivity, may appear to introduce a contradiction. Indeed, the nonzero velocity (6.20), which may be traced back to (3.8), is incompatible with the zero velocity predicted by a naive substitution of (6.10) and (6.18) into the original quadrature (3.7). The origin of this incompatibility has to do with smoothness at the boundary \(r=1\). Indeed, the excess concentration is discontinuous at the transition \(\theta=\theta^{*}\) between \(\mathcal{A}\), about which (6.10) and (6.18) hold, and its complement, about which (6.1) holds. With a finite discontinuity, expression (3.7) cannot be applied in a piecewise manner. The resolution of this apparent contradiction has to do with a breakdown of the boundary-layer structure. The boundary-layer solution, where variations with respect to \(\theta\) are assumed "small," is clearly incompatible with a finite discontinuity. A transition region is therefore formed about the edge (\(r=1\) and \(\theta=\theta^{*}\)) of \(\mathcal{A}\). In that region, the excess concentration smoothly varies from the boundary-layer solution (6.10) and (6.18) at \(\theta<\theta^{*}\) to the nil value (6.1) at \(\theta>\theta^{*}\). With the presence of such a region, the original quadrature (3.7) is dominated by a small neighborhood \(\mathcal{N}\) of \(\theta^{*}\), which is still asymptotically larger than the width of the transition region. Since \(\theta\) is approximately constant in that neighborhood, we obtain from (3.7) \[U=\frac{\sin^{2}\theta^{*}}{2}\int_{\mathcal{N}}\left.\frac{\partial c}{\partial\theta}\right|_{r=1}\,d\theta. \tag{7.1}\] Recalling the need to match the minus-unity value for \(\theta<\theta^{*}\) and the zero value for \(\theta>\theta^{*}\), we retrieve (6.20). The boundary-layer scaling suggests that the lateral extent of the transition region is \(\mathcal{B}^{-1/2}\). Defining the local coordinate [cf. (6.3)] \[X=\mathcal{B}^{1/2}(\theta^{*}-\theta), \tag{7.2}\] and considering the limit \(\mathcal{B}\to\infty\) with \(X,Y\) fixed, we find that the transition region coincides with the upper half \(XY\)-plane (see Fig. 1). Defining \(C(X,Y)=-c(r,\theta)\), \(C\) is governed by the modified Helmholtz equation \[\frac{\partial^{2}C}{\partial X^{2}}+\frac{\partial^{2}C}{\partial Y^{2}}=C\quad\text{for}\quad Y>0. \tag{7.3}\] At large \(Y\) it must satisfy \[\lim_{Y\to\infty}C=0, \tag{7.4}\] representing asymptotic matching with (6.1). It remains to specify the mixed boundary conditions at \(Y=0\), which follow from the exact condition (6.6) with linkage by diffusivity (6.16): \[\frac{\partial C}{\partial Y}=-\beta\mathcal{B}^{1/2}(1-C)f(\theta^{*}-\mathcal{B}^{-1/2}X)\quad\text{at}\quad Y=0. \tag{7.5}\] On the inert portion of the boundary, where \(f=0\), we find \[\frac{\partial C}{\partial Y}=0\quad\text{for}\quad X<0. \tag{7.6}\] The condition on the active portion depends upon the asymptotic behavior of \(f(\theta)\) as \(\theta\nearrow\theta^{*}\). In the case where \(f(\theta)\) attains there a nonzero limit [cf. (8.1)], applying the limit \(\mathcal{B}\to\infty\) to (7.5) yields \[C=1\quad\text{for}\quad X>0; \tag{7.7}\] in the case where \(f(\theta)\sim K(\theta^{*}-\theta)\) as \(\theta\nearrow\theta^{*}\) [cf. (8.4), where \(K=1\)], the appropriate condition in the limit \(\mathcal{B}\to\infty\) is \[\frac{\partial C}{\partial Y}=-\beta K(1-C)X\quad\text{for}\quad X>0.
\tag{7.8}\] For situations where (7.7) holds, the problem is reminiscent of the diffraction of plane waves of sound by the edge of a semi-infinite screen -- a problem originally solved by Sommerfeld [13]. Defining the local polar coordinates \((\rho,\vartheta)\) by \[X=\rho\cos\vartheta,\quad Y=\rho\sin\vartheta \tag{7.9}\] (see Fig. 1), the solution of (7.3)-(7.4) and (7.6)-(7.7), derived in the Appendix, is \[C=\frac{e^{-Y}}{2}\left\{1+\mathrm{erf}\left[\rho^{1/2}\left(\cos\frac{\vartheta}{2}-\sin\frac{\vartheta}{2}\right)\right]\right\}+\frac{e^{Y}}{2}\left\{1-\mathrm{erf}\left[\rho^{1/2}\left(\cos\frac{\vartheta}{2}+\sin\frac{\vartheta}{2}\right)\right]\right\}. \tag{7.10}\] In terms of the polar coordinates (7.9), the limit \(X\to\infty\) with \(Y\) fixed corresponds to \(\rho\to\infty\) with \(\vartheta=O(1/\rho)\). We then readily obtain \[\lim_{X\to\infty}C=e^{-Y}, \tag{7.11}\] which trivially matches the boundary-layer solution (6.10) and (6.18). The limit \(X\to-\infty\) with \(Y\) fixed corresponds to \(\rho\to\infty\) with \(\pi-\vartheta=O(1/\rho)\). Here, we obtain \[\lim_{X\to-\infty}C=0, \tag{7.12}\] which trivially matches the nil concentration about the inert portion of the particle boundary.

## VIII Illustrations

We continue by illustrating our results, considering first the case of linkage by size. With \(\mathcal{B}\) locked to \(\mathcal{S}\) via (6.12), \(U\) becomes a function of \(\mathcal{S}\), \(\alpha\) and the activity profile. We use a Janus configuration, namely \[f(\theta)=\left\{\begin{array}{ll}1,&0<\theta<\pi/2,\\ 0,&\pi/2<\theta<\pi,\end{array}\right. \tag{8.1}\] for which (3.11) gives \[F_{2k}=\frac{\delta_{k,0}}{2},\quad F_{2k+1}=\frac{(-)^{k}(2k)!(4k+3)}{2^{2k+2}(k!)^{2}(k+1)}, \tag{8.2}\] and, in particular, \(F_{1}=3/4\). The velocity calculated using (4.3) is shown in Fig. 2 for \(\alpha=1/2\), \(1\) and \(2\). We also portray the \(\alpha\)-independent small Damkohler-number approximation (5.6), which here gives \(U=\mathcal{S}/4\) for \(\mathcal{S}\ll 1\). With (8.1), the large Damkohler-number approximation (6.15) gives \[\lim_{\mathcal{S}\to\infty}U=\frac{\alpha}{2(1+\alpha)}. \tag{8.3}\] For the aforementioned \(\alpha\) values, it implies the respective limits \(1/6\), \(1/4\) and \(1/3\). The approach at large \(\mathcal{S}\) to these limits is evident in the figure. Consider now the linkage by diffusivity. With \(\mathcal{B}\) locked to \(\mathcal{S}\) via (6.16), \(U\) becomes a function of \(\mathcal{S}\), \(\beta\) and the activity profile. We here use a single linkage value, \(\beta=1\), but consider both the Janus activity distribution (8.1) and the generalized Janus profile \[f(\theta)=\left\{\begin{array}{ll}\cos\theta,&0<\theta<\pi/2,\\ 0,&\pi/2<\theta<\pi,\end{array}\right. \tag{8.4}\] for which \[F_{2k}=\frac{(-)^{k+1}(2k)!(4k+1)}{4^{k+1}(k!)^{2}(k+1)(2k-1)},\quad F_{2k+1}=\frac{\delta_{k0}}{2}, \tag{8.5}\] and, in particular, \(F_{1}=1/2\). For that profile the small Damkohler-number approximation (5.6) gives \(U=\mathcal{S}/6\) for \(\mathcal{S}\ll 1\). Since \(\mathcal{A}=(0,\pi/2)\) for both (8.1) and (8.4), these distributions share the same large Damkohler-number limit (6.20), namely \[\lim_{\mathcal{S}\to\infty}U=\frac{1}{2}. \tag{8.6}\] The results are illustrated in Fig. 3.

Figure 2: \(U\) versus \(\mathcal{S}\) using linkage-by-size (6.12) for the Janus profile (8.1). Solid: exact result (4.3) for the indicated values of \(\alpha\). Dashed: small-\(\mathcal{S}\) approximation (5.6).
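As a quick numerical check (our own illustration), the quadrature (6.15) can be evaluated directly for the Janus profile (8.1), reproducing the limits (8.3):

```python
import numpy as np
from scipy.integrate import quad

def U_large_Da(alpha, f):
    """Leading-order speed (6.15) under the linkage-by-size scaling (6.12)."""
    integrand = lambda t: alpha * f(t) / (1 + alpha * f(t)) * np.sin(t) * np.cos(t)
    return quad(integrand, 0.0, np.pi, points=[np.pi / 2])[0]

janus = lambda t: 1.0 if t < np.pi / 2 else 0.0   # profile (8.1)
for alpha in (0.5, 1.0, 2.0):
    # Matches alpha/(2(1+alpha)) of (8.3): 1/6, 1/4, 1/3.
    print(U_large_Da(alpha, janus), alpha / (2 * (1 + alpha)))
```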
In calculating \(U\) using (4.3), we have encountered difficulties when applying the numerical scheme at large values of \(\mathcal{S}\). These are more pronounced for the Janus profile (8.1), where the interfacial activity undergoes a finite discontinuity at \(\theta=\pi/2\). Apparently, the associated non-smoothness exacerbates the Gibbs phenomenon. In any event, the approach to the limit (8.6) is unequivocal.

## IX Concluding remarks

We have analyzed self-phoresis of active colloids in situations where solute is transported by diffusion and consumed by two chemical reactions, one at the colloid boundary and one within the bulk. The dimensionless problem is governed by the two associated Damkohler numbers, \(\mathcal{S}\) and \(\mathcal{B}\). We have solved the problem using an eigenfunction expansion. This semi-analytic solution, applicable for all values of \(\mathcal{S}\) and \(\mathcal{B}\), has been accompanied by asymptotic approximations. The chosen limits correspond to two possible natural linkages between \(\mathcal{S}\) and \(\mathcal{B}\). For small Damkohler numbers, the particle velocity is proportional to \(\mathcal{S}\) and independent of \(\mathcal{B}\). At large Damkohler numbers, the solute concentration is uniform except within a boundary layer about the active portion of the boundary. The details of the boundary-layer transport depend upon the linkage between \(\mathcal{S}\) and \(\mathcal{B}\). In particular, for \(\mathcal{S}\propto\mathcal{B}\) we find that the particle velocity depends upon the relative fraction of the active boundary, but is otherwise indifferent to the activity details in that fraction. The associated boundary-layer solution breaks down near the edge of the active portion of the boundary. Following a similar analysis in a classical wave problem [13], we have obtained a closed-form solution of the local transport problem in the edge region.

Figure 3: \(U\) versus \(\mathcal{B}\) using linkage-by-diffusivity (6.16) with \(\beta=1\) for both the Janus (8.1) and the generalized Janus (8.4) activity profiles. Solid: exact result (4.3). Dashed: small Damkohler-number approximation (5.6).

We summarize our results in terms of dimensional quantities. For weak chemical activity, the particle velocity is proportional to \(b\bar{k}c_{\infty}/D\). This size-independent scaling is the same as in the simplest models of flux-prescribed distributions [2], the flux scale being \(\bar{k}c_{\infty}\). In the limit of strong activity, the velocity scales as \(bc_{\infty}/a\). Here, there are two situations. If the limit is realized by large values of \(a\), the ratio of the particle velocity to \(bc_{\infty}/a\) depends (nonlinearly) upon both the ratio \(\alpha=\bar{k}/\sqrt{k_{b}D}\) and the activity profile. If the limit is realized by small values of \(D\), the ratio of the particle velocity to \(bc_{\infty}/a\) is independent of the reaction coefficients, the solute diffusivity, and even the details of the reaction profile.

## Appendix A Transition region

Following [14], we seek a solution of (7.3) of the form \[C=e^{-Y}G+e^{Y}H.\] (A.1) Requiring the functions \(G\) and \(H\) to satisfy \[\frac{\partial^{2}G}{\partial X^{2}}+\frac{\partial^{2}G}{\partial Y^{2}}=2\frac{\partial G}{\partial Y},\quad\frac{\partial^{2}H}{\partial X^{2}}+\frac{\partial^{2}H}{\partial Y^{2}}=-2\frac{\partial H}{\partial Y},\] (A.2) (7.3) is trivially satisfied. To solve equations (A.2), we employ the parabolic-cylinder coordinates (see Fig.
1) \[\xi=\rho^{1/2}\cos\frac{\vartheta}{2},\quad\eta=\rho^{1/2}\sin\frac{\vartheta}{2}.\] (A.3) These are natural for the transition-region geometry and conditions (7.6)-(7.7), since the negative real axis becomes \(\xi=0\), while the positive real axis becomes \(\eta=0\). With \(\xi\) and \(\eta\) as independent variables, (A.2) become \[\frac{\partial^{2}G}{\partial\xi^{2}}+\frac{\partial^{2}G}{\partial\eta^{2}}=4\left(\eta\frac{\partial G}{\partial\xi}+\xi\frac{\partial G}{\partial\eta}\right),\quad\frac{\partial^{2}H}{\partial\xi^{2}}+\frac{\partial^{2}H}{\partial\eta^{2}}=-4\left(\eta\frac{\partial H}{\partial\xi}+\xi\frac{\partial H}{\partial\eta}\right).\] (A.4a,b) The solution to (A.4a) can be written as a combination of two similarity solutions, \[G=G_{+}(\zeta_{+})+G_{-}(\zeta_{-}),\] (A.5) wherein \(\zeta_{\pm}=\xi\pm\eta\). We therefore obtain the ordinary differential equations \[G^{\prime\prime}_{+}=2\zeta_{+}G^{\prime}_{+},\quad G^{\prime\prime}_{-}=-2\zeta_{-}G^{\prime}_{-},\] (A.6) which integrate to give \(G^{\prime}_{\pm}=g_{\pm}e^{\pm\zeta_{\pm}^{2}}\). Similarly, the solution to (A.4b) is written as a combination of two similarity solutions, \[H=H_{+}(\zeta_{+})+H_{-}(\zeta_{-}).\] (A.7) The resulting equations, \[H^{\prime\prime}_{+}=-2\zeta_{+}H^{\prime}_{+},\quad H^{\prime\prime}_{-}=2\zeta_{-}H^{\prime}_{-},\] (A.8) integrate to give \(H^{\prime}_{\pm}=h_{\pm}e^{\mp\zeta_{\pm}^{2}}\). Now, as \(\rho\to\infty\), it is evident that \(\zeta_{+}\to\infty\) for all \(0<\vartheta<\pi\), while \(\zeta_{-}\) tends to \(\infty\) for \(0<\vartheta<\pi/2\) and to \(-\infty\) for \(\pi/2<\vartheta<\pi\). To avoid a super-exponential divergence of \(C\) at large \(\rho\), which would clearly contradict (7.4), we must set \(g_{+}=h_{-}=0\). We conclude that the most general solutions of (A.4) are \[G(\xi,\eta)=\dot{g}+g\,\mathrm{erf}(\xi-\eta),\quad H(\xi,\eta)=\dot{h}+h\,\mathrm{erf}(\xi+\eta).\] (A.9) The four constants appearing in (A.9) are determined from the boundary conditions. With condition (7.6) applying at \(\xi=0\), we readily obtain \(\dot{h}=\dot{g}\) and \(h=-g\). Thus, (A.1) and (A.9) give \[C=e^{-Y}[\dot{g}+g\,\mathrm{erf}(\xi-\eta)]+e^{Y}[\dot{g}-g\,\mathrm{erf}(\xi+\eta)].\] (A.10) Recalling that \(\mathrm{erf}\,z\sim 1-e^{-z^{2}}/z\sqrt{\pi}\) for \(z\to\infty\), we must impose \(\dot{g}=g\) to satisfy condition (7.4). Last, noting that the inhomogeneous condition (7.7) applies at \(\eta=0\), we readily obtain \(g=1/2\). We conclude that \[C=\frac{e^{-Y}}{2}\left[1+\mathrm{erf}(\xi-\eta)\right]+\frac{e^{Y}}{2}\left[1-\mathrm{erf}(\xi+\eta)\right].\] (A.11) Substitution of (A.3) yields (7.10).
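As a final sanity check (ours, not part of the original derivation), one may verify numerically that the closed form (A.11) satisfies the modified Helmholtz equation (7.3), using a centered finite-difference Laplacian at a few interior points:

```python
import numpy as np
from scipy.special import erf

def C(X, Y):
    """Closed-form transition-region solution (A.11)/(7.10)."""
    rho = np.hypot(X, Y)
    theta = np.arctan2(Y, X)                   # polar angle in (0, pi) for Y > 0
    xi = np.sqrt(rho) * np.cos(theta / 2)      # parabolic-cylinder coordinates (A.3)
    eta = np.sqrt(rho) * np.sin(theta / 2)
    return (0.5 * np.exp(-Y) * (1 + erf(xi - eta))
            + 0.5 * np.exp(Y) * (1 - erf(xi + eta)))

h = 1e-3                                       # finite-difference step
for X, Y in [(1.0, 0.5), (-1.0, 0.5), (0.3, 2.0)]:
    lap = (C(X + h, Y) + C(X - h, Y) + C(X, Y + h) + C(X, Y - h) - 4 * C(X, Y)) / h**2
    print(abs(lap - C(X, Y)))                  # ~1e-6 or smaller: (7.3) is satisfied
```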